More than 4.7 million Australian children and teenagers have been removed from popular social media platforms in the first major test of the nation's landmark ban for users under 16.
Widespread Compliance in Early Data
The sweeping social media prohibition, which took full effect on December 10, 2025, has triggered a massive exodus of young users from apps like Facebook, Instagram, and TikTok. Early compliance data, provided to eSafety Commissioner Julie Inman Grant by the platforms themselves, indicates widespread adherence to the strict new age-verification rules.
This week, Meta revealed it had deactivated accounts for nearly half a million Australian children across its suite of apps—Facebook, Instagram, and Threads—to comply with the law. However, the US tech giant criticised the Albanese government's approach, arguing it lacks a unified, industry-wide standard.
Meta has publicly called for the responsibility for verifying ages to be placed on app stores, such as those run by Apple and Google, rather than on individual platform companies. In a statement, the company urged the government to "engage with industry constructively to find a better way forward," suggesting incentives for creating safer, age-appropriate online experiences instead of imposing blanket bans.
Government Response and Legal Backbone
Communications Minister Anika Wells acknowledged the rollout would be complex but said the initial figures prove the policy is making a "meaningful difference." She stated the government did not expect "perfection" immediately and confirmed that the eSafety Commissioner is analysing platform-specific data to gauge compliance levels thoroughly.
Prime Minister Anthony Albanese labelled the early cooperation from social media firms as "encouraging," noting their efforts to follow the new laws and exclude underage users.
The legal framework for the ban was passed by parliament in 2024, following bipartisan pressure, notably from the Coalition, aiming to shield young people from harmful content and online risks. The law mandates that platforms including Facebook, Instagram, X, TikTok, YouTube, Discord, and Snapchat take "reasonable steps" to stop children from signing up and to deactivate existing accounts held by under-16s.
How Platforms Are Verifying Ages
Failure to comply can result in staggering fines of up to $50 million. To meet their obligations, companies are deploying a combination of tools, with artificial intelligence playing a key role in estimating users' ages based on their activity and signals.
Guidance from the eSafety Commissioner requires a "layered" approach to age assurance to reduce errors. Crucially, platforms cannot demand sensitive government-issued identification like a driver's licence or passport as the only option. They must provide a "reasonable alternative," such as facial age estimation technology or methods involving parental consent.
Inman Grant had previously cautioned that implementation would be a gradual process, varying significantly from one platform to another. The early data from the first month provides the first concrete snapshot of this unprecedented digital policy in action.