Key themes and actions for platform providers and their technology partners
From 10 December 2025, age-restricted social media platforms must take 'reasonable steps' to prevent users under the age of 16 from having accounts. Although the social media minimum age (SMMA) regime sits within the Online Safety Act 2021 (Cth) (Online Safety Act) and will be primarily enforced by the eSafety Commissioner (eSafety), the Office of the Australian Information Commissioner (OAIC) will play a crucial role in regulating privacy compliance.
In this Insight, we explore key themes from the OAIC's recent guidance and outline action items for platform providers and their technology partners preparing for the new regime.
Key takeaways
- Interplay between privacy and online safety laws: compliance will require age assurance methods that are effective but also minimise intrusion into user privacy. Both eSafety and the OAIC will play an enforcement role. Although the OAIC's guidance is not binding, it is a strong indication of how the OAIC will seek to enforce the privacy compliance aspects of the SMMA regime.1
- Privacy by design approach: platform providers must embed privacy considerations, including by conducting privacy impact assessments when selecting age assurance methods.
- Data minimisation is key: data collection must be limited to what is necessary for SMMA compliance. Entities are encouraged to use pre-existing data or low-intrusion methods wherever possible.
- High-risk practices require caution: more intrusive methods such as biometric analysis should be used only when absolutely necessary and be accompanied by robust safeguards.
- Consent for secondary uses: unambiguous consent is required for any secondary use or disclosure of personal information collected for SMMA compliance purposes.
- Destruction obligations go beyond the Australian Privacy Principles (APPs): personal information collected specifically for SMMA compliance (inputs) must be destroyed immediately after use. Retention of outputs (such as 16+ yes/no tokens) must be short-lived and this data must be ring-fenced.
- Using pre-existing data: the use of pre-existing user data for SMMA purposes is subject to the APPs. Any new record generated from pre-existing data will be subject to the specific purpose limitation and destruction requirements of the SMMA regime (which apply alongside the APPs).
- Continuous improvement required: age assurance measures should evolve proactively alongside changes in platform features and user behaviour.
The SMMA regime explained
Overview of relevant legislation, rules and guidance.
SMMA regime to commence 10 December 2025
The SMMA regime will come into effect on 10 December 2025, requiring social media platform providers to take 'reasonable steps' to prevent Australians under the age of 16 (age-restricted users) from having accounts on their platforms. This follows amendments to the Online Safety Act in late 2024 introducing the SMMA framework under Part 4A.
The Online Safety Act does not prescribe specific methods for how to ensure users meet minimum age requirements. Rather, platform providers must implement 'reasonable steps' tailored to their context.3 eSafety's guidance sets out baseline expectations around reliability, accuracy, robustness and effectiveness in using age assurance to assess user age. These expectations mirror the findings of the community consultation on the implementation of the SMMA regime conducted by eSafety and the Age Assurance Technology Trial, which we explore in our Insight.
This 'reasonable steps' requirement applies to both existing accounts and new accounts. This means platform providers will need to determine whether existing accounts on their platforms are held by age-restricted users and deactivate or remove those accounts, as well as prevent age-restricted users from creating new accounts.
Who does the regime apply to?
The SMMA requirement applies to providers of 'age-restricted social media platforms' (platform providers), which are defined as services that meet the following conditions:
- the sole purpose, or a significant purpose, of the service is to enable online social interaction between two or more end-users;
- the service allows end-users to link to, or interact with, some or all of the other end-users; and
- the service allows end-users to post material on the service.
This definition is broad and will encompass most social media platforms.
The Online Safety (Age-Restricted Social Media Platforms) Rules 2025 issued under the Online Safety Act specify the types of online services that are not covered by the SMMA regime. These are services with the sole or primary purpose of:
- messaging, email, voice calling or video calling (eg WhatsApp);
- online gaming;
- allowing end-users to share information about products or services (eg reviews, technical support or advice);
- professional networking and professional development (eg LinkedIn); or
- supporting education and health or facilitating associated communications.
eSafety guidance on 'reasonable steps'
In September 2025, eSafety issued guidance on the 'reasonable steps' platform providers are expected to take to prevent age-restricted users from having online accounts. This involves, at a minimum, assessing user age via 'age assurance' mechanisms—defined broadly as processes used to verify or infer age. Age assurance may be undertaken by platform providers themselves or outsourced to contracted vendors—which are referred to as third-party age assurance providers.
The guidance also sets out eSafety's principles-based approach to the SMMA restrictions, emphasising that:
- when considering whether a platform provider has taken reasonable steps, eSafety will use a holistic approach and will not evaluate measures in isolation;
- the SMMA rules do not prescribe specific types of age assurance methods or propose a minimum accuracy level for these methods. Instead, the guidance provides an overview of different age assurance methods to assist platform providers in determining what is technically feasible and practicably implementable; and
- platform providers do not need to use methods or vendors that were involved with the Age Assurance Technology Trial and can use internal methods or rely on external vendors (but are expected to conduct appropriate due diligence on external vendors).
The guidance also provides guardrails to assist platform providers in developing their compliance measures, clarifying what is strictly not prescribed under the SMMA rules and what measures will not be considered reasonable steps. Some key examples of these guardrails are below.
Platform providers should:
- create pathways for potential underage accounts to be reported;
- prevent re-registration or circumvention for accounts that have been removed or deactivated; and
- provide review mechanisms for users who may have been incorrectly flagged in age assurance.
Platform providers should not:
- verify the age of all users on the platform;
- rely on a single age-related signal, rather than multiple signals, to infer age; or
- use government ID as the sole method for age assurance.
Privacy and the SMMA regime
Recent OAIC guidance
On 9 October 2025, the OAIC released guidance setting out its expectations regarding privacy compliance for both platform providers and third-party age assurance providers handling personal information for age assurance purposes in the SMMA context. The OAIC guidance sheds light on how these entities should comply with their obligations under the Privacy Act 1988 (Cth) (Privacy Act) when taking the 'reasonable steps' required for SMMA compliance purposes under the Online Safety Act.
Together, the guidance issued by eSafety and the OAIC reflects a regulatory approach that seeks to balance protecting young people from the harms associated with social media use with the need for privacy and proportionality.
The relationship between the Privacy Act and the Online Safety Act
Part 4A of the Online Safety Act operates alongside the Privacy Act, introducing stricter obligations on platform providers and third-party age assurance providers when handling personal information for SMMA compliance purposes.
This means the regulators for the Online Safety Act and the Privacy Act—eSafety and the OAIC respectively—play different, but complementary, enforcement roles in the SMMA context.
Part 4A of the Online Safety Act (s63F) imposes the following data-handling requirements on platform providers and third-party age assurance providers—these apply in addition to the Privacy Act more generally:
- personal information collected for age assurance cannot be repurposed (ie used or disclosed for secondary purposes) without unambiguous consent (outside standard APP 6 exceptions, such as where use or disclosure is required for law enforcement) (see Purpose limitation – unambiguous consent for secondary uses); and
- personal information collected for age assurance must be destroyed after it is handled for that purpose (see Information destruction – 'destruction-on-decision').
Failure to comply with the Part 4A privacy obligations is an interference with the privacy of an individual for the purposes of the Privacy Act. This means:
- non-compliance with these obligations is within the remit of the enforcement powers of the Information Commissioner under the Privacy Act; and
- individuals are entitled to complain to the Commissioner about alleged contraventions of these obligations.
Steps taken to comply with SMMA obligations will not be 'reasonable' unless the entity also complies with its Privacy Act obligations. eSafety is responsible for enforcing the 'reasonable steps' obligation under the Online Safety Act.
The OAIC's guidance complements eSafety's guidance by outlining how it expects entities to align their privacy practices with these technical measures. The guidance categorises personal information used for SMMA purposes into three types:
- Inputs: uploaded documents or selfies provided by users for age assurance purposes.
- Outputs: the outcome of the age assurance process—eg a binary yes/no token confirming whether a user is over 16.
- Existing data: metadata or other pre-existing records used to infer age.
Each category requires strict controls on collection, use, disclosure, storage and destruction.
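For technology partners implementing these controls, one practical approach is to tag every record with its SMMA category at the point of collection, so that the correct handling rules follow the data through the system. The sketch below is illustrative only; the class and rule names are our own assumptions and are not drawn from the guidance.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum, auto


class SmmaCategory(Enum):
    """The three categories of personal information in the OAIC guidance."""
    INPUT = auto()          # uploaded documents or selfies
    OUTPUT = auto()         # eg a binary 16+ yes/no token
    EXISTING_DATA = auto()  # metadata or other pre-existing records


@dataclass
class SmmaRecord:
    """A record tagged with its category so handling rules travel with it."""
    category: SmmaCategory
    collected_at: datetime
    payload: bytes


# Hypothetical handling rules keyed by category: inputs are destroyed
# immediately after the age check, outputs are retained only briefly in a
# ring-fenced store, and existing data remains governed by the APPs generally.
HANDLING_RULES: dict[SmmaCategory, str] = {
    SmmaCategory.INPUT: "destroy immediately after the age assurance decision",
    SmmaCategory.OUTPUT: "retain short-term in a ring-fenced store, then destroy",
    SmmaCategory.EXISTING_DATA: "handle under the APPs as usual",
}
```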
OAIC's key themes
Privacy by design and continuous improvement
The OAIC encourages a privacy by design approach when selecting age assurance methods, emphasising the importance of privacy impact assessments. The OAIC is clear that compliance with the SMMA regime may increase data breach risk—data security must be the priority, particularly when handling sensitive information in the form of biometric data. The OAIC states that entities should build and maintain their age assurance practices so that quality (APP 10) and security and retention limitations (APP 11) are enforced by design.
In addition, the OAIC reiterates eSafety's guidance that the measures taken by platform providers to comply with the SMMA regime should not be static—providers should 'proactively monitor and respond to changes in their platforms' features, functions and end-user practices'. eSafety also expects platforms to take proactive steps to detect accounts held by age-restricted users on an ongoing basis.
Data minimisation
The OAIC is clear that platform providers and third-party age assurance providers must limit their collection to what is actually necessary for compliance with the SMMA regime. Otherwise, entities risk breaching the APP 3 requirement that collection is 'reasonably necessary' for their functions or activities. The OAIC acknowledges that assessing what data is 'necessary' in the circumstances involves weighing competing interests but emphasises data minimisation as key. The OAIC recommends, for example, that entities:
- use pre-existing data for age assurance where possible (rather than collecting new data) (see Using existing data for SMMA purposes);
- collect only binary outcomes—like yes/no tokens—rather than date of birth or exact age;
- if scanning documents, analyse the date of birth only and redact/avoid other fields;
- use tech solutions that temporarily process personal information inputs and do not retain them (noting however that even transient storage will constitute a collection of personal information under the Privacy Act);
- strictly adhere to data destruction requirements (see Information destruction – 'destruction-on-decision'); and
- when choosing or offering an age assurance method (or combination of methods), consider layered or 'waterfall' approaches that start with low-intrusion techniques (such as using non-sensitive, pre-existing data) and escalate to more intrusive techniques (such as requiring a government-issued ID, accredited digital ID upload or use of biometrics) only if necessary (a simple sketch of this approach follows this list).
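To illustrate the layered approach, a minimal sketch of a 'waterfall' pipeline follows. It assumes each layer returns a definitive 16+ outcome or signals that it is inconclusive, escalating to the next (more intrusive) layer only when needed. The layer functions are hypothetical placeholders, not real vendor APIs.

```python
from typing import Callable, Optional

# Each layer returns True (confidently 16+), False (confidently under 16)
# or None (inconclusive), so escalation happens only when necessary.
AgeCheck = Callable[[str], Optional[bool]]


def waterfall_age_assurance(
    user_id: str, layers: list[tuple[str, AgeCheck]]
) -> tuple[str, bool]:
    """Run layers from least to most intrusive and stop at the first
    conclusive result. Returns the method used and the 16+ outcome."""
    for method_name, check in layers:
        result = check(user_id)
        if result is not None:
            return method_name, result
    # If every layer is inconclusive, do not silently assume 16+.
    raise LookupError("all layers inconclusive; route to review or further checks")


# Hypothetical layers ordered from low to high intrusion.
def tenure_check(user_id: str) -> Optional[bool]:
    return None  # stub: would inspect non-sensitive, pre-existing data


def facial_age_estimate(user_id: str) -> Optional[bool]:
    return None  # stub: intrusive biometric step, reached only on escalation


LAYERS = [
    ("pre-existing data (account tenure)", tenure_check),
    ("facial age estimation (escalation only)", facial_age_estimate),
]
```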
Purpose limitation: unambiguous consent for secondary uses
Part 4A of the Online Safety Act provides that personal information collected for age assurance cannot be repurposed (ie used or disclosed for secondary purposes) without unambiguous consent (outside standard APP 6 exceptions, such as where use or disclosure is required for law enforcement).
The guidance gives the example of a platform provider allowing a user to consent to the platform provider sharing an output (eg a 16+ token) with a third party to allow the user to sign up to that third party's service. The OAIC is clear that consent to any secondary uses or disclosures cannot be achieved through pre-selected settings or an opt-out approach—a separate, dedicated consent flow is required. Further, the OAIC's view is that unambiguous consent requires the user to have the ability to withdraw consent—in the example given, this would require both parties to delete the token from their systems upon the withdrawal of consent.
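As a sketch of what such a consent flow might look like in practice, the snippet below models a dedicated opt-in and a withdrawal path that deletes the shared token from every system holding it. The names and structure are assumptions for illustration, not prescribed by the guidance.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class SharedToken:
    """A 16+ output token shared with a third party under unambiguous consent."""
    token_id: str
    holders: set[str] = field(default_factory=set)  # systems storing the token


def grant_consent(token: SharedToken, third_party: str, user_opted_in: bool) -> None:
    # Consent must come from a separate, dedicated flow: no pre-selected
    # settings and no opt-out defaults.
    if not user_opted_in:
        raise PermissionError("unambiguous consent requires an explicit opt-in")
    token.holders.add(third_party)


def withdraw_consent(token: SharedToken, delete_from: Callable[[str, str], None]) -> None:
    # On withdrawal, every holder (the platform and the third party alike)
    # must delete the token from its systems.
    for holder in list(token.holders):
        delete_from(holder, token.token_id)
        token.holders.discard(holder)
```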
Information destruction: 'destruction-on-decision'
Part 4A of the Online Safety Act requires platform providers and third-party age assurance providers to destroy personal information collected for SMMA purposes after handling it for that purpose. This is a far stricter obligation than APP 11.2, which allows for:
- possible retention if there are other potential business use cases for the data; and
- de-identification as an alternative to destruction.
In particular, the OAIC has stressed that inputs (such as document images, OCR text, selfies, liveness videos and other biometric information or templates used for a point-in-time age check) must be destroyed immediately following the age assurance check, including in caches and storage—the OAIC sees inputs as the highest-risk category. Outputs or 'decision artefacts' (such as binary 16+ yes/no outcomes, timestamps and tokens) are seen as lower risk—these can be retained temporarily, but only within ring-fenced environments for limited operational needs and provided the entity is transparent about any directly related purposes arising from the age check that require longer retention. The OAIC gives three examples of such directly related purposes:
- audit logging and evidence of compliance: to prove a check has occurred, the outcome, how it was done and when;
- troubleshooting, fraud and circumvention: to investigate errors, suspected spoofing and re-registration attempts; and
- complaints and reviews: to respond to user or parent challenges to the age check or its outcome.
The OAIC strongly suggests that entities create a ring-fenced SMMA environment to comply with these destruction requirements and block advertising, analytics and machine-learning pipelines from the environment. Appropriate time-based retention periods should be applied in respect of each category, with information being destroyed once the time period for the last allowed purpose has expired.
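A simplified sketch of this 'destruction-on-decision' pattern appears below: inputs are purged (including caches) as soon as the decision is made, while outputs sit in a ring-fenced store and expire once the retention period for the last allowed purpose has run. The retention periods shown are placeholders; actual periods must come from each entity's own assessment.

```python
from datetime import datetime, timedelta

# Placeholder retention periods per directly related purpose; real periods
# must come from the entity's own assessment, not this sketch.
RETENTION_BY_PURPOSE = {
    "audit_logging": timedelta(days=365),
    "fraud_and_circumvention": timedelta(days=90),
    "complaints_and_reviews": timedelta(days=30),
}


class InputStore:
    """Minimal in-memory stand-in for wherever inputs and caches live."""

    def __init__(self) -> None:
        self.records: dict[str, bytes] = {}
        self.caches: dict[str, bytes] = {}

    def purge(self, user_id: str) -> None:
        # Destroy inputs (document images, selfies, biometric templates)
        # immediately after the decision, caches included.
        self.records.pop(user_id, None)
        self.caches.pop(user_id, None)


def output_expiry(decided_at: datetime) -> datetime:
    """An output is destroyed once the period for the last allowed purpose
    has expired; the ring-fenced store should enforce this as a hard TTL."""
    return decided_at + max(RETENTION_BY_PURPOSE.values())
```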
The guidance also confirms that this immediate destruction requirement does not apply to existing data already held by an entity just because that data is used for an SMMA compliance purpose (eg where a platform provider uses existing data it already holds about a user to determine whether they are under 16 years). Such data must continue to be used, disclosed and destroyed (or de-identified) in accordance with the APPs more generally.
Biometrics
As noted above, the OAIC is clear that inputs (such as document images and selfies or other biometric information) must be destroyed immediately. The OAIC provides the following additional guidance on biometric information and templates more specifically.
- Biometric data should not be the first port of call: to minimise privacy impacts on individuals, entities should prefer handling less sensitive information over more sensitive techniques such as age analysis performed on photos or videos, or audio analysis of a user's voice.
- Be cautious about reusing biometric information: the OAIC says entities should be 'cautious' about using existing biometric information for SMMA purposes—in our view it is highly likely that entities would need to seek additional, express consent to reuse existing biometric information.
- No watchlists: the OAIC warns against indefinite retention or building watchlists based on biometric identifiers—checks should be event-based and point-in-time, and long-lived behavioural profiles should not be built.
Using existing data for SMMA purposes
As part of preventing users under 16 from having accounts, platform providers must assess existing accounts and take steps to deactivate or remove accounts held by users under 16.
The OAIC makes the following key comments regarding using existing data for age assurance purposes.
- No one-size-fits-all approach: the OAIC reiterates eSafety's guidance that although there is no one-size-fits-all approach, it may be possible for platform providers to confirm with a high level of confidence that certain users are 16+ based on pre-existing information such as account tenure or creation date (eg if an account has been held for 12 years, and the user consistently signs in from an Australian IP address, it may be reasonable to infer the user is an Australian user over 16). A sketch of this kind of inference appears after this list.
- Use metadata/system data over behavioural and content data: the use of non-sensitive, non-content signals such as metadata and system data is preferable to using behavioural and content data (such as a user's posts, content engagement, behavioural patterns and voice analysis).
- Consider APP compliance: the APPs continue to apply to any use of existing personal information for SMMA purposes. Platform providers will, for example, need to satisfy themselves of compliance with APP 6 (eg that the user has consented or the use would be reasonably expected by the individual and related to the primary purpose of collection) and APP 10 (that reasonable steps have been taken to ensure the data is accurate, up-to-date and complete).
- Any newly generated records are subject to section 63F: where a new record (such as a 16+ yes/no token) is created from existing data, that new record is subject to section 63F—ie it may not be used for other purposes (see Purpose limitation – unambiguous consent for secondary uses) and it must be promptly destroyed (see Information destruction – 'destruction-on-decision').
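As a rough illustration of the account-tenure example, the function below returns a 16+ inference only when pre-existing, non-sensitive signals clear a high-confidence bar, and otherwise reports the check as inconclusive so that a waterfall can escalate. The thresholds are illustrative assumptions (the 12-year tenure figure mirrors the example above), and any token generated from the result would itself be subject to section 63F.

```python
from datetime import date
from typing import Optional


def infer_sixteen_plus(
    account_created: date,
    consistent_au_ip: bool,
    today: Optional[date] = None,
) -> Optional[bool]:
    """Return True only when pre-existing, non-sensitive signals support a
    high-confidence 16+ inference; return None when inconclusive so more
    intrusive checks can follow. Thresholds are illustrative only."""
    today = today or date.today()
    tenure_years = (today - account_created).days / 365.25
    if tenure_years >= 12 and consistent_au_ip:
        # Note: any new record created from this inference (eg a 16+ token)
        # is subject to the s63F purpose limitation and destruction rules.
        return True
    return None
```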
Footnotes
1. For more on the community consultation on the implementation of the SMMA regime conducted by eSafety and the Age Assurance Technology Trial, see our article here. For more on the Children's Online Privacy Code currently being developed by the OAIC, see here (although note that the Code is separate to the SMMA regime, being a Code that will seek to impose additional privacy obligations in respect of online services likely to be accessed by children).
2. APP 5.2(c) requires that collection notices: 1) state whether a collection of personal information is required or authorised by law; and 2) state the name of the law.
3. In Australian Information Commissioner v Australian Clinical Labs Limited (No 2) [2025] FCA 1224, when considering what constituted 'reasonable steps' for the purposes of APP 11 in the Privacy Act 1988 (Cth), Justice Halley referred to judicial consideration of that phrase in the context of other legislation. Citing cases brought with respect to 'reasonable steps' obligations under the Corporations Act 2001 (Cth), he noted that taking reasonable steps requires 'a wholistic analysis, considering the full framework of the entity's systems, policies and procedures' but does not require a person to find and take 'the optimal steps' or 'the one true path'.