Few acronyms elicit as buzzy a reaction as ‘AI’. Artificial intelligence has entered all our lives, and Australia’s policy wonks are no exception. Governments around the world are grappling with how to harness the technology’s immense potential for good and mitigate the risks of its harms.
At the end of 2025, the Australian Government launched its National AI Plan – a plan to grow the AI industry in Australia. The plan also commits the Department of Foreign Affairs and Trade, with the Department of Industry, Science and Resources, to develop an international strategy – specifically a Strategy for International Engagement and Regional Leadership on AI.
The National AI Plan has a goal to ‘spread the benefits’ of AI in Australia. How will the upcoming international strategy seek to spread the benefits across our region? We asked three experts: Australia is drafting an international strategy on AI – so what for development?
If Australia’s emerging international AI strategy is serious about regional leadership, development cooperation can’t be the ‘we’ll circle back’ bit.
There’s a clear ethical case. AI will be a powerful driver of global inequality. Access to compute, data, skills and governance capacity will shape who benefits and who bears the costs. There is also a clear strategic case. AI is already a contested domain between the United States and China, centred on advanced chips and the other inputs needed to convert AI into economic and military power.
This competition will increasingly play out in our region. Countries are not just choosing tools. They are choosing AI ecosystems: stacks and standards that will shape how AI is embedded across economies and public sectors. Put plainly, these are choices that will lock in dependencies and alliances for decades to come.
Influence will flow through how AI is adopted and governed. Access to low-resource, deployment-ready models will shape who can adopt AI at all. And the standards that follow will be shaped quietly through technical assistance and regulatory support.
But Australia can’t outcompete on models or scale. So, if the Government wants to support resilience rather than dependency, it will have to understand that its comparative advantage lies elsewhere.
Drawing on the core development principle of country-owned and country-led, we should support partners to build the capability to choose, adapt and govern AI systems on their own terms. It is slower, less visible work, but it is precisely how Australia can position itself as a trusted AI partner in the region – and it is work our development program is adept at advancing. This would serve regional equity and stability, and in turn would serve Australia’s interests just as clearly.
Geordie Fung is the Director of Analysis at the Development Intelligence Lab. He is an experienced development practitioner with expertise in development strategy and evaluation, and worked as First Secretary for Development at the Australian Embassy in Timor-Leste. At the Lab, we love Geordie's ability to dream and scheme, and to think through the possibilities of big policy change.
When I read the "National AI Plan" (the Plan) for Australia, I did so against the backdrop of strategies emerging across our region. Malaysia aspires to become an "AI Nation" by 2030, which includes regional cooperation, economic growth and inclusive opportunities. Singapore is cultivating "peaks of excellence" in high-impact sectors like health, finance, and sustainability. The Bangladesh National Strategy for AI explicitly positions AI as a driver of national human development.
By comparison, the opening line of the Australian Plan is strikingly narrow: "...to grow the AI industry in Australia." Growth to what end? Productivity alone? A little more speed and efficiency? Why can't AI be framed in Australia, as with other countries, as a transformative enabler aligned to broader national ambitions or strategic outcomes?
Stronger signals are needed for how the Australian Government intends to align AI to national and regional objectives – like critical infrastructure, energy and food security, improved livelihoods and social cohesion. Stronger signals are also needed to incentivise measurably humane outcomes from government-funded programs and infrastructure. If strategic and humane outcomes from AI efforts are prioritised domestically, they are more likely to also shape Australia’s international development approach to AI.
The Strategy for International Engagement and Regional Leadership provides an opportunity to focus on human development from AI investment and to encourage regional cooperation to activate strategic benefits from AI while mitigating regional risks. This matters, because AI can entrench inequity and discrimination, but it can also accelerate human development and public sector innovation across the Indo-Pacific, as has been well articulated in the 2025 UNDP Human Development Report.
Australia’s AI Plan reads a little like an optimistic Hail Mary: assuming the cultivation of fertile ground will naturally produce a bumper crop of benefits, without any clear vision or plan for what Australians – and our regional partners – want or need to eat. Luckily, the Strategy provides an opportunity to remedy this.
Pia Andrews is a public sector transformer and reformer, with a passion for digital innovation and transformation efforts across the Indo-Pacific region. She’s worked inside the machine of government and in the private sector, to promote pragmatic, continuous innovation, greater transparency and trust. At the Lab, we love Pia’s passion for realistic, humane forward-looking solutions to 21st century problems.
Mark Carney’s blunt Davos warning—that middle powers must form pragmatic, values‑aligned coalitions in an era of hard power—offers a sharp lens on Australia’s National AI Plan. AI is a systemic rupture. Infrastructure, data, models and interfaces—the full “AI stack”—must be treated as an integrated strategic asset.
The Plan gets infrastructure about right, gestures at Australian data, and essentially goes quiet on model development. That gap matters. Models undertake the inference that shapes decisions; their behaviour is a function of training data, architecture and guardrails. As tasks grow in complexity and increasingly require qualitative judgment, the cultural values and biases embedded in models via their training data manifest in their responses.
If Australia ignores model stewardship and regional data partnerships, we cede influence to external actors whose priorities and values will not align with ours or our near neighbours’. And values matter! For the Indo‑Pacific this is not abstract: development, public services and crisis response will be mediated by models trained elsewhere unless we invest locally.
Australia needs a bolder, operational plan: fund domestic and regional model development; build interoperable data governance and shared feature/embedding infrastructure with Indo‑Pacific partners; resource capacity building and model auditing; and lead standards coalitions that protect regional values. Strategic autonomy here is not isolationism but capability and coalition‑building. Without a concrete commitment across the whole AI stack—and explicit inclusion of our neighbours—the Plan risks delivering infrastructure without influence and promises without protection for Australia. For our smaller near neighbours, facing the existential threat of the AI rupture unsupported, it offers nothing.
Michael ‘Spike’ Barlow is a Professor at the School of Systems & Computing at UNSW. He has written over 20 journal articles, 10 book chapters and 100 conference publications, spanning topics like AI and multi-agent systems, machine learning, and human-computer interaction. At the Lab, we love Michael’s passion for innovation and focus on how his educational, research, and leadership activities can positively influence his students and society.