What online platforms need to know about eSafety's upcoming regulatory guidance
On 10 December 2025, the Government's social media minimum age restrictions will take effect. Against this backdrop, on 1 September, the Department of Infrastructure, Transport, Regional Development, Communications, Sport and the Arts released a report covering the findings of the Age Assurance Technology Trial (the Trial).
Then, on 4 September, eSafety published summaries of its community consultation on the implementation of the social media age restrictions (the Consultation), as well as a self-assessment tool to help online services determine if they are likely to be an age-restricted social media platform. eSafety also foreshadowed that regulatory guidance on the social media age restrictions would be 'released shortly'.
In this Insight, we delve into the details of the Consultation outcomes and the Trial—most notably, how the Consultation, the Trial and eSafety's recent comments all point toward eSafety ultimately recommending a layered or 'waterfall approach' to age assurance.
Consultation outcomes: key takeaways
Between June and August 2025, eSafety held consultations with over 160 organisations to inform its approach to implementation of the social media age restrictions. Below are some of the key concerns/recommendations noted in eSafety's consultation summaries.
- Online service providers emphasised the need for principles-based (rather than prescriptive) regulatory guidance, supported a 'layered/waterfall' approach to assurance, noted that assurance should be proportionate to the risks posed by particular features, and sought greater clarity on compliance expectations and enforcement.
- Age assurance vendors discussed the importance of balancing privacy with protection, inclusivity and preventing excessive data collection. They noted the challenges in conducting age assurance among 16 to 17 year olds in particular (due to limited access to formal ID), and flagged the risks of circumvention (eg through VPNs).
- Subject matter experts placed emphasis on privacy, equity and effectiveness in age assurance, with support for privacy-by-design approaches. They commented on the inevitability of circumvention and the need for platforms to take reasonable steps to combat it. They emphasised that age assurance should be integrated across the user journey and not just at account creation.
- International regulators and government representatives also recommended a principles-based framework, as well as highlighting the importance of considering multiple points in the user journey for age assurance.
- Civil society and academia recommended a principles-based framework, flagged concerns about migration to less safe platforms and disconnection from social networks, and commented on the importance of broader systemic reform beyond age assurance and digital literacy programs.
- Online safety educators supported the shifting of responsibility to platforms, recommended approaches for communication with young people / parents / schools, and raised concerns about data handling during age assurance.
- Children and young people raised concerns around accuracy, inclusion, privacy and the support available to them during implementation.
- Parents/carers raised concerns around accuracy, privacy and social impact, and called for greater digital literacy education and support.
eSafety has foreshadowed that regulatory guidance will be released shortly, and flagged that it expects efficacy will require layered safety measures, sometimes known as a 'waterfall approach' (see further information on the Trial's findings below regarding waterfall approaches).
The Trial
As noted above, the Department's 1 September report covers the findings of the Age Assurance Technology Trial, the first Australian initiative evaluating the real-world performance, reliability and privacy impacts of age assurance technologies.
The Trial tested over 60 different technologies from 48 vendors across social media, gaming, adult content and online retail sectors, to understand if age assurance can be performed without compromising the privacy and security of Australian citizens. We have summarised key takeaways from the report below.
Key takeaways
Age assurance can be done
The Trial found that age assurance systems can be private, robust and effective in Australia, when carefully chosen and implemented. Notably, no substantial technological limitations were identified that would prevent age verification systems from meeting policy or regulatory requirements in the Australian context.
No 'one size fits all' approach
The Trial concluded that there is no single solution to age assurance, and emphasised the importance of deploying solutions that are tailored to particular sectors and risk-profiles. Reflecting on particular approaches, the report notes that:
- Age verification (verifying age against authoritative records of a user's DOB) demonstrated high accuracy. Most systems assessed during the Trial were also designed to avoid long-term storage of identity or biometric information, reducing concerns about data security.
- Age estimation (a method of determining an individual’s likely age by analysing physical or behavioural characteristics using AI or machine learning models) was found to perform at a high level of accuracy, especially when applied outside threshold 'buffer zones' (eg 13+, 16+, 18+). It was noted that consistency in performance across demographics is improving but requires continued focus.
- Age inference (a method of determining an individual’s likely age based on verifiable contextual, behavioural, transactional or environmental signals, rather than biometric data or identity documents) was most accurate when grounded in clearly modelled reasoning and when drawing from well-labelled behavioural signals. However, the Trial also identified concerns where inference becomes persistent, particularly in account-based environments, as continuous behavioural monitoring may lead to digital profiling (undermining user autonomy).
Ability to manage trade-offs using successive validation (aka the 'waterfall approach')
The Trial found that 'successive validation'—a layered approach that combines multiple methods such as age inference, age estimation and age verification—was highly effective at reaching an accurate, risk-appropriate age-related decision.
This model allows services to manage trade-offs by using the lightest effective method where possible: for example, starting with a low-friction method (such as inference) and escalating to more involved methods (such as biometric estimation or documentary validation) only if results are uncertain.
The report also noted that successive validation could be useful from an inclusivity perspective, by enabling alternative pathways for users who may not have formal identification.
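To make the waterfall model concrete, the following is a minimal illustrative sketch of how a service might order methods from lightest to strongest and escalate only on low-confidence results. All names, thresholds and stub methods below are hypothetical, not drawn from the Trial report or any vendor's API.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class AgeSignal:
    estimated_age: float   # best-guess age in years
    confidence: float      # 0.0 (no confidence) to 1.0 (certain)

def waterfall_check(
    methods: List[Callable[[], Optional[AgeSignal]]],
    required_age: int,
    min_confidence: float = 0.9,
) -> Optional[bool]:
    """Run methods in order (eg inference -> estimation -> verification).

    Returns True/False once a sufficiently confident result is reached,
    or None if no method produced a confident result.
    """
    for method in methods:
        signal = method()
        if signal is None:
            continue  # method unavailable for this user; escalate
        if signal.confidence >= min_confidence:
            return signal.estimated_age >= required_age
        # Low-confidence result (eg inside a threshold 'buffer zone'):
        # fall through to the next, stronger method.
    return None

# Illustrative stubs standing in for real providers:
def infer_from_account_signals() -> Optional[AgeSignal]:
    # Contextual/behavioural inference: low friction, lower confidence
    return AgeSignal(estimated_age=17.0, confidence=0.6)

def estimate_from_selfie() -> Optional[AgeSignal]:
    # Biometric estimation: more involved, higher confidence
    return AgeSignal(estimated_age=17.5, confidence=0.95)

result = waterfall_check(
    [infer_from_account_signals, estimate_from_selfie],
    required_age=16,
)
```

In this sketch the inference step returns a result below the confidence threshold, so the check escalates to estimation, which is confident enough to decide. A real deployment would also need to handle the `None` outcome (for instance, by offering documentary verification or an alternative pathway for users without formal ID, as the inclusivity findings suggest).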
Limitations to parental control systems
The Trial found that parental control systems can be effectively and securely applied across Australian platforms. However, some areas for refinement were identified, including making parental control systems more adaptable to reflect a child's evolving maturity, preferences or rights to participate in decisions about their digital lives.
The Trial also found that parental control signals should not be treated as verified age data, as they provide useful contextual input but do not meet the assurance standards required for regulatory compliance.
Privacy and cybersecurity
The Trial found that cybersecurity practices (in terms of both security and fraud resilience) were generally strong across the systems assessed. Many providers were found to be addressing known threats such as forgeries and AI-generated fakes, although areas for improvement were noted (such as improved security against injection attacks, and an ability to check documents against government databases to determine if they have been reported lost or stolen).
A strong commitment to privacy-by-design principles was identified, with most participants demonstrating robust internal policies on handling personal information, including clear practices for data collection, storage, sharing and disposal. However, the report notes that the rapidly evolving threat environment underscores the need for ongoing monitoring and improvement to maintain effectiveness. Similarly, it was found that continued attention to privacy compliance will support long-term trust and accountability.
Interestingly, it was noted that providers were over-anticipating an eventual requirement from regulators to provide personal information for investigations, and in some cases were building tools to enable regulators to retrace actions taken by individuals to verify their age (potentially leading to disproportionate collection and retention of data).
The report also indicates that many of the providers were aligning their systems with international standards for age assurance (such as ISO/IEC 27566, IEEE 2089.1 and ISO/IEC 25000), reflecting an industry recognition of the importance of conformity for purposes of certification.
Scope for technological improvement
While the Trial found that age assurance can be done with no major technical barriers, some opportunities for technological improvement were identified. Most notably, the report identifies scope to enhance usability, risk management and system interoperability. Features such as one-way blind access APIs to government documents were identified as methods that could enhance exact age verification for children. Better handling of children's digital footprints was also identified as an opportunity for technological improvement.
While the report notes that providers are focused on innovating to develop technologies that are accurate, prioritise privacy and reduce user friction (eg cryptographic methods and reusable verified credentials), these technologies are often at a lower technology readiness level.
If you have any questions or if it would be helpful to discuss further, please contact our experts below.