Managing regulatory risk for adaptive and evolving AI systems in healthcare
Artificial intelligence (AI) adoption in healthcare is accelerating, with tools now influencing everything from administrative efficiency to clinical decision making.
The Therapeutic Goods Administration (TGA) has updated its guidance to clarify how existing medical device regulations apply to AI-enabled technologies. This update is particularly relevant for healthcare innovators, developers and providers navigating the regulatory landscape for AI-driven tools.
In this Insight, we unpack the TGA’s updated guidance on AI‑enabled medical devices, explaining when AI software becomes a regulated medical device, how evolving systems should be managed and what practical steps organisations can take to stay compliant.
Understanding the TGA’s guidance
Earlier this year, we hosted our Risk and Reward: AI Use in Healthcare event at Allens, bringing together a panel of healthcare innovators, clinical trial experts and legal and regulatory specialists to discuss how AI is being deployed across the healthcare sector, and how organisations can navigate the opportunities and the risks that accompany its use.
The TGA is playing an increasingly important role in this landscape, and its updated guidance applies to a wide range of software tools used in healthcare, including:
- applications that use image analysis to support diagnosis
- cloud-based analytics that predict patient deterioration
- chatbots that suggest, deliver or monitor treatment
- tools that otherwise support clinical decision making.
While the guidance does not create new rules, it clarifies how the existing regulatory framework applies to the unique challenges of AI in healthcare technologies. It highlights the importance of transparency, accountability and risk management as AI becomes more deeply embedded in patient care and clinical workflows.
Key takeaways
When does AI-enabled software become a medical device?
AI‑enabled software will generally be regulated as a medical device where it is intended to be used for therapeutic purposes, such as diagnosis, prevention, monitoring, prediction, prognosis or treatment of disease in an individual. Intended purpose is assessed objectively by reference to how the product is designed, described, marketed and used in practice, including its functionality and instructions for use.
Certain software that assists (rather than replaces) a health professional's clinical judgement may be exempt from the requirement for inclusion in the Australian Register of Therapeutic Goods (ARTG), but it is nevertheless regulated as a medical device and remains subject to the applicable regulatory requirements.
Ultimately, where AI-enabled software is intended to influence, inform or replace clinical decisions, organisations should expect it to fall within the medical device framework and be prepared to meet the corresponding obligations of a sponsor and/or manufacturer. These include ensuring conformance with the Essential Principles (and maintaining evidence of that conformance), and having effective systems in place for monitoring and reporting safety incidents and adverse events.
Managing scope creep in adaptive systems
One of the most significant aspects of the guidance concerns how evolving products are managed. Unlike traditional medical devices, AI systems are often iterative, with functionality expanding through software updates and model retraining.
The TGA cautions organisations to remain alert to scope or feature creep, where incremental changes alter a product’s intended purpose in ways that bring it within—or further into—the regulatory framework. Where updates introduce new functionality that changes how the software is intended to be used, those changes must not be implemented until the appropriate regulatory approvals have been obtained.
This underscores the importance of robust governance and monitoring processes to ensure regulatory impacts are assessed before changes are deployed, particularly for adaptive AI systems.
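By way of illustration, the sketch below shows one way a release process might surface scope creep automatically: each release candidate declares its capabilities, and anything not covered by the approved intended purpose blocks deployment pending regulatory review. This is a minimal sketch in Python; the class names, capability labels and gating logic are hypothetical, and in practice such a check would sit within an organisation's broader change-control and quality management system.

```python
# Minimal sketch of a pre-release "regulatory gate" for an adaptive AI product.
# All names and capability labels are hypothetical illustrations.

from dataclasses import dataclass


@dataclass(frozen=True)
class ApprovedScope:
    """Capabilities covered by the product's current regulatory approval."""
    capabilities: frozenset


@dataclass
class ReleaseCandidate:
    """A proposed update, with capabilities declared by the product team."""
    version: str
    capabilities: set


def regulatory_gate(release: ReleaseCandidate, approved: ApprovedScope) -> list:
    """Return any declared capabilities that exceed the approved intended purpose.

    An empty list means the release stays within scope; anything else should
    hold the release until regulatory impact has been assessed.
    """
    return sorted(release.capabilities - approved.capabilities)


approved = ApprovedScope(frozenset({"triage_support", "deterioration_alerts"}))
candidate = ReleaseCandidate("2.4.0", {"triage_support", "deterioration_alerts",
                                       "dose_recommendations"})

out_of_scope = regulatory_gate(candidate, approved)
if out_of_scope:
    # The new "dose_recommendations" feature changes the intended purpose,
    # so deployment is blocked pending review and any required approvals.
    raise SystemExit(f"Hold release {candidate.version}: capabilities "
                     f"outside approved scope: {out_of_scope}")
```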
Off-label use
The guidance also takes a firm position on off‑label use. The TGA expects active intervention if a manufacturer becomes aware that its AI system is being used outside its intended purpose. For example, where a general‑purpose large language model is being used to provide diagnostic or treatment advice, the manufacturer may be expected to:
- implement controls to prevent further off‑label use; or
- cease supply and seek regulatory approval for the expanded intended purpose.
While this approach has prompted debate, particularly for developers of general-purpose or configurable AI tools, it reflects a consistent regulatory principle: a sponsor's responsibility for a medical device does not end at deployment. Post-market monitoring, including ensuring that real-world use of the device remains consistent with its approved intended purpose, is a continuing obligation.
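To make the idea of controls to prevent further off-label use concrete, the sketch below screens incoming requests for diagnostic or treatment intent before they reach a general-purpose model, and refuses those that fall outside the intended purpose. This is illustrative only: the keyword patterns and refusal message are hypothetical, and a production control would more likely rely on a validated intent classifier, with refusals logged for post-market surveillance.

```python
# Illustrative off-label guardrail for a general-purpose assistant deployed
# in a clinical setting. The patterns and messages are hypothetical; keyword
# matching stands in for what would, in practice, be a validated classifier.

import re

OFF_LABEL_PATTERNS = [
    r"\bdiagnos(e|is|tic)\b",          # requests for a diagnosis
    r"\b(dose|dosage|prescribe)\b",    # requests for treatment or dosing advice
    r"\btreatment plan\b",
]

REFUSAL = (
    "This tool is not intended to provide diagnosis or treatment advice. "
    "Please consult a qualified health professional."
)


def screen_request(prompt: str) -> str | None:
    """Return a refusal if the prompt looks like off-label use, else None."""
    lowered = prompt.lower()
    for pattern in OFF_LABEL_PATTERNS:
        if re.search(pattern, lowered):
            return REFUSAL  # block before the request reaches the model
    return None  # within intended purpose; pass through


# Example: this request is refused rather than answered.
print(screen_request("Can you diagnose my chest pain and suggest a dose?"))
```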
Evidence, transparency and explainability
The guidance reinforces expectations around evidence and transparency. Manufacturers of AI‑enabled medical device software must comply with the TGA's evidence requirements, ensuring sufficient transparency to support assessment of safety and performance, including in relation to training data, validation and ongoing performance monitoring.
Synthetic data
The TGA also addresses the use of synthetic data—artificially generated data used to augment or replace real‑world data for training or validation. The TGA notes that while synthetic data may supplement real‑world data in some circumstances, it will generally not replace clinical data for the purposes of demonstrating safety and performance.
Practical takeaways for organisations deploying AI
Organisations should consider the following practical steps in light of the TGA's guidance:
Be precise about intended purpose from day one
Clearly define what your AI‑enabled software is (and is not) intended to do, and ensure this is consistently reflected across product design, user interfaces, training materials, marketing and contractual documentation.
Treat updates as regulatory events
Even incremental changes can affect intended purpose or risk profile. Regulatory assessment should be embedded into product development and release processes, not treated as a one‑off exercise.
Monitor real-world use
Organisations should not assume that products will always be used as intended. Governance frameworks should include mechanisms to identify and respond to off‑label use, particularly for configurable or general‑purpose AI tools deployed in clinical settings.
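As a complement to preventative controls, monitoring mechanisms can detect off-label patterns after the fact. A minimal sketch, assuming a hypothetical usage-log schema in which interactions flagged as off-label carry a category label:

```python
# Minimal post-market monitoring sketch: count flagged interactions in usage
# logs and escalate categories that exceed a review threshold. The log schema
# and threshold are hypothetical.

from collections import Counter
from collections.abc import Iterable


def off_label_categories(events: Iterable[dict], threshold: int) -> list:
    """Return off-label categories observed at least `threshold` times."""
    counts = Counter(e["category"] for e in events if e.get("off_label"))
    return sorted(cat for cat, n in counts.items() if n >= threshold)


# Example log excerpt (hypothetical schema).
events = [
    {"off_label": True, "category": "treatment_advice"},
    {"off_label": True, "category": "treatment_advice"},
    {"off_label": False, "category": "admin_summary"},
]

for category in off_label_categories(events, threshold=2):
    # Repeated off-label use may require controls, cessation of supply or an
    # application to expand the approved intended purpose.
    print(f"Escalate for regulatory review: {category}")
```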
Plan for evidence and explainability early
Regulators, clinicians and patients may expect clarity on how an AI system works, how it was trained and how its outputs should be interpreted. Evidence strategies, including validation and performance monitoring, are most effective when designed from the outset.
Be realistic about the role of synthetic data
Synthetic data is not a substitute for real-world data. While it can usefully supplement training and validation, organisations should plan on the basis that synthetic data will rarely be sufficient on its own to demonstrate safety and performance for regulated medical device software.


