Australia’s internet regulator has criticised the world’s largest social media companies for failing to adequately implement the country’s prohibition on under-16s accessing their platforms, despite laws that took effect in December. The eSafety Commissioner, Julie Inman Grant, has expressed “significant concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, citing poor practices including permitting prohibited users to make repeated attempts at age verification and inadequate safeguards to stop new account creation. In its initial compliance assessment since the ban took effect, the regulator found numerous deficiencies and has now moved from monitoring to active enforcement, warning that platforms must show they have put in place “appropriate systems and processes” to stop under-16s from using their services.
Compliance Failures Revealed in First Major Review
Australia’s eSafety Commissioner has documented a concerning pattern of non-compliance among the world’s largest social media platforms in her first formal review since the ban came into effect on 10 December. The report reveals that Meta, Snap, TikTok and YouTube have collectively failed to implement adequate safeguards to prevent minors from accessing their services. Julie Inman Grant expressed particular concern about systemic weaknesses in age verification processes, highlighting that some platforms have permitted children who initially declared themselves under 16 to subsequently claim they were older, thereby undermining the law’s intent.
The findings demonstrate a notable intensification in the regulatory response, with the eSafety Commissioner moving beyond monitoring to active enforcement. The regulator has stressed that merely demonstrating some children still hold accounts is inadequate; platforms must instead provide concrete evidence that they have put in place comprehensive systems and procedures to stop under-16s from opening accounts in the first place. This shift demonstrates the government’s commitment to holding tech giants accountable, with potential penalties looming for companies that fail to meet their statutory obligations. Among the failings identified were:
- Allowing previously banned users to re-verify their age and restore account access
- Allowing repeated attempts at the same verification process without consequence
- Insufficient mechanisms to stop new under-16 accounts from being created
- Limited reporting tools for families and the wider community
- Lack of transparent data about compliance actions and account removals
The Magnitude of the Challenge
The considerable scale of social media activity amongst young Australians underscores the regulatory challenge confronting both the government and the platforms themselves. With numerous accounts already restricted or removed since the ban’s implementation, the figures paint a picture of widespread initial non-compliance. The eSafety Commissioner’s conclusions suggest that the technical and procedural obstacles to implementing age restrictions have turned out to be considerably more complex than expected, with platforms struggling to distinguish genuine age declarations from fraudulent ones. This complexity has left enforcement authorities wrestling with the fundamental question of whether current age verification technologies are fit for purpose.
Beyond the technical obstacles lies a wider issue about the willingness of platforms to prioritise compliance over user growth. Social media companies have long resisted strict identity verification requirements, citing data protection worries and the genuine difficulty of confirming age online. However, the regulatory report suggests that some platforms may not be investing adequately in the infrastructure the law mandates. The move to active enforcement represents a pivotal moment: either platforms will substantially upgrade their compliance systems, or they stand to incur significant penalties that could reshape their business models in Australia and potentially influence regulatory approaches internationally.
What the Figures Indicate
In the first month after the ban’s launch, Australian authorities indicated that 4.7 million accounts had been restricted or removed. Whilst this statistic initially seemed to demonstrate enforcement effectiveness, subsequent analysis reveals a more layered picture. The substantial number of account removals suggests that many under-16s had managed to establish accounts in the first place, demonstrating that preventative measures were insufficient. Additionally, the data raises questions about whether removed accounts reflect genuine enforcement or simply users deleting their profiles of their own accord in response to the new restrictions.
The limited transparency surrounding these figures has troubled independent observers seeking to assess the ban’s actual effectiveness. Platforms have revealed minimal information about their compliance procedures, success rates, or the demographics of the removed accounts. This opacity makes it difficult for regulators and the general public to determine whether the ban is working as intended or whether younger users are simply finding alternative ways to access social media. The Commissioner’s push for comprehensive proof of consistent enforcement practices reflects mounting dissatisfaction with platforms’ resistance to full disclosure.
Industry Response and Opposition
The social media giants have responded to the regulator’s enforcement action with a mixture of compliance assurances and scepticism about the practical feasibility of the ban. Meta, which operates Facebook and Instagram, stressed its commitment to complying with Australian law whilst at the same time contending that precise age verification continues to be a significant industry-wide challenge. The company has called for an alternative approach, suggesting that robust age verification and parental approval mechanisms implemented at the app store level would be more effective than enforcement at the platform level. This position reflects broader industry concerns that the existing regulatory framework places an impractical burden on individual platforms.
Snap, the creator of Snapchat, has adopted a more assertive public position, stating that it had locked 450,000 accounts following the ban’s implementation and claiming to continue locking more daily. However, industry observers dispute whether such figures demonstrate genuine compliance or merely reactive account management. The core conflict between platforms’ commercial models, which traditionally depend on maximising user engagement and growth, and the statutory obligation to systematically exclude an entire age demographic remains unresolved. Companies have consistently opposed stringent age verification, citing privacy issues and technical constraints, creating an impasse between regulators and platforms over who bears responsibility for implementation.
- Meta argues age verification ought to take place at app store level rather than on individual platforms
- Snap claims to have locked 450,000 accounts following the ban’s implementation in December
- Industry groups point to privacy concerns and technical obstacles as barriers to effective age verification
- Platforms contend they are making their best effort whilst challenging the ban’s overall effectiveness
Broader Questions About the Ban’s Effectiveness
As Australia’s under-16 social media ban enters its enforcement phase, fundamental questions remain about whether the legislation will accomplish its stated objectives or merely drive young users towards unregulated platforms. The regulator’s first compliance report reveals that significant loopholes persist: children keep discovering ways to circumvent age verification mechanisms, and platforms have struggled to stop new underage accounts from being established. Critics contend that the ban’s effectiveness depends not merely on regulatory oversight but on whether young people will genuinely abandon mainstream platforms or simply shift towards other services, secure messaging apps, or virtual private networks designed to mask their age and location.
The ban’s worldwide implications add further complexity to assessments of its effectiveness. Countries including the United Kingdom, Canada, and several European nations are watching Australia’s experiment closely, considering similar legislation for their own citizens. If the ban fails to reduce children’s online activity or does not protect them from harmful online content, it could undermine the case for comparable regulations elsewhere. Conversely, if implementation proves sufficiently strict to genuinely restrict underage access, it may encourage other nations to adopt comparable measures. The result will likely influence global regulatory trends for years to come, ensuring Australia’s regulatory efforts are analysed far beyond its borders.
Who Gains and Who Loses
Mental health campaigners and child safety organisations have endorsed the ban as an essential measure against algorithmic manipulation and exposure to harmful content. Parents and educators maintain that taking young Australians off platforms designed to maximise engagement could reduce anxiety, improve sleep patterns, and decrease exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks associated with social media use amongst adolescents, adding weight to these concerns. However, the ban also removes legitimate uses of social media for young people, such as maintaining friendships, obtaining educational material, and participating in online communities around common interests. The regulatory framework assumes harm outweighs benefit, a calculation that some young people and their families dispute.
The ban’s practical impact extends beyond individual users to content creators, small businesses, and community organisations reliant on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now encounter legal barriers to participation. Small Australian businesses that rely on social media marketing can no longer reach younger audiences. Community groups, charities, and educational organisations find it difficult to engage young people through channels they previously used effectively. Meanwhile, the ban unintentionally advantages large technology companies with the resources to develop age verification infrastructure, possibly reinforcing their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects reach well beyond the straightforward goal of child protection.
What Follows for Regulatory Action
Australia’s eSafety Commissioner has signalled a marked shift from passive oversight to direct intervention, marking a key milestone in the rollout of the under-16 ban. The regulator will now gather information to determine whether services have failed to take “reasonable steps” to prevent underage access, a requirement that goes further than simply recording that minors continue using these platforms. This approach demands demonstrable proof that companies have established appropriate systems and procedures designed to keep minors out. The enforcement team has indicated it will conduct enquiries systematically, building evidence that could trigger considerable sanctions for non-compliance. This transition from monitoring to enforcement reflects mounting concern with the platforms’ existing measures and indicates that voluntary cooperation alone is insufficient.
The enforcement stage raises important questions about the adequacy of penalties and the practical mechanisms for maintaining corporate accountability. Australia’s statutory provisions provide the regulatory tools, but their effectiveness hinges on the eSafety Commissioner’s willingness to initiate formal proceedings and the platforms’ capacity to respond effectively. International observers, especially regulators in Britain and Europe, will closely monitor Australia’s enforcement tactics and results. A successful enforcement campaign could create a model for other nations evaluating similar bans, whilst inadequate results might undermine the broader regulatory framework. The next phase will determine whether Australia’s groundbreaking legislation delivers substantive protection for young people or remains primarily symbolic in its effect.
