Australia's National AI Plan Criticised for Overlooking Risks and Safeguards

Australia's newly released National AI Plan has arrived at a pivotal moment for the nation's technological future, but a leading expert warns it dangerously misjudges the balance between innovation and public protection. Unveiled by the federal government, the framework aims to position Australia for long-term prosperity in the age of artificial intelligence. However, Professor Uri Gal from the University of Sydney Business School argues the plan leans too heavily on uncertain economic promises while offering few firm commitments on vital safeguards.

Missing Mandatory Safeguards and a Toothless Institute

Over the past two years, the government signalled its intention to introduce mandatory requirements for AI systems that could affect people's rights or wellbeing. These were expected to apply to high-stakes sectors including recruitment, healthcare, financial services, policing, and education. None of these firm commitments appears in the final plan, published on 6 December 2025.

Instead, the plan's centrepiece is an AI Safety Institute tasked with studying risks and advising existing regulators. Crucially, this body will not have the authority to set or enforce cross-sector rules. The strategy assumes current legal frameworks can be adapted to address harms after they occur, urging regulators to apply their existing powers to AI-related issues. Professor Gal contends this reactive approach leaves the public exposed where laws are ambiguous or silent on novel AI challenges.

Critical Transparency Gaps and Risks to the Vulnerable

The plan's shortcomings are most apparent in areas where the public deals directly with AI systems. While it encourages transparency, it does not mandate that organisations inform people when they are interacting with an automated agent rather than a human. This omission erodes informed consent: individuals may take guidance, reassurance, or financial advice from entities they believe are human, increasing the risk of manipulation.

Furthermore, the plan offers no specific limits on AI companion apps targeting children or teenagers. This is particularly striking given the government's imminent move to strengthen restrictions on social media access for young people. Such AI companions can foster unhealthy emotional dependence by acting as attentive conversational partners while collecting sensitive data from minors. They may also reinforce harmful beliefs or encourage compulsive use if designed to reward constant interaction.

Overly Optimistic Economic Forecasts Lack Empirical Support

While downplaying consumer risks, the plan presents a highly optimistic view of AI-driven productivity growth. It promotes the idea that rapid business adoption will lift national performance, delivering broad improvements in output, wages, and competitiveness. However, it offers limited empirical support for these claims, focusing instead on investment figures, data centre expansion, and projected skills demand.

Historical precedent from previous technological shifts, such as the spread of computers and the internet, suggests major benefits appear gradually and unevenly. Early investment often yielded modest results until organisations fully adapted their processes and culture. Current evidence from countries with rapid AI adoption shows a similar pattern: while many workers are exposed to generative AI tools, substantive use is limited and the aggregate productivity effect remains minimal.

The National AI Plan largely overlooks the risk that increased automation could intensify pressure on workers, widen inequality, or concentrate economic gains among a small group of leading firms. It gives scant attention to the possibility that benefits may be narrow, slow, or smaller than projected.

Calls for a More Balanced and Realistic National Strategy

Professor Gal advocates for a more credible and balanced strategy. A robust plan would acknowledge that high-risk AI systems require enforceable oversight and restore firm requirements for transparency, accountability, and safety where AI affects opportunities, wellbeing, or legal rights. It would also recognise that productivity gains are likely to emerge gradually and vary across sectors.

A more realistic approach would place public protection at the core of national policy, rather than hopeful predictions or industry promises. It would also acknowledge the clear commercial interests behind many optimistic claims, as firms developing advanced AI have strong incentives to influence regulation in ways that accelerate adoption without corresponding obligations.

AI will undoubtedly shape Australia's economic and social future. The critical question is not whether to pursue these opportunities, but how to ensure progress does not come at the expense of public safety. A national plan that overestimates economic upside while weakening safeguards is not the foundation Australia needs. A better plan would couple innovation with responsibility, ensuring people remain the primary beneficiaries of technological change.