Actionable steps for organisations adopting AI responsibly
Just a year after the publication of the Australian Voluntary AI Safety Standard (VAISS), the National AI Centre (NAIC) has published its new Guidance for AI Adoption, which updates (and replaces) the VAISS.1
The 2025 update both streamlines AI expectations and reflects a maturing of the operational governance framework for AI. It adds detailed, actionable implementation steps and also speaks to developer requirements.
We've set out below some key FAQs to help you understand the new Guidance for AI Adoption.
The Guidance for AI Adoption:
- Condenses the 10 voluntary guardrails established in the VAISS into six essential practices that establish baseline responsible AI governance (and that can be expanded as an organisation's AI use grows or its governance capabilities mature).
- Expands the scope of the guidance to AI developers as well as deployers.
- Introduces a dual-structured approach:
  - Guidance for AI Adoption: Foundations – which sets out essential practices for getting started in responsible AI governance and is designed for:
    - organisations starting out in AI adoption and governance
    - organisations using AI in low-risk ways
    - professionals who are new to AI and AI governance
    - professionals looking for general guidance on best practice when using AI in business contexts.
  - Guidance for AI Adoption: Implementation practices – which offers comprehensive, step-by-step instructions on how to implement the essential practices and is designed for use by organisations with mature governance structures, technical development capabilities or high-risk use cases.
The practices established by the VAISS have been integrated into Guidance for AI Adoption: Implementation practices, and the NAIC has helpfully mapped the VAISS guardrails to the new practices. See here: VAISS x Implementation practices crosswalk
The practices are designed so that you don't need to implement everything at once. Once you've established baseline good AI governance ('Getting started'), you can add more actions ('Next steps') as your organisation's AI use grows or your governance capabilities mature:
| PRACTICE | GETTING STARTED | NEXT STEPS |
|---|---|---|
| Decide who is accountable | | |
| Understand impacts and plan accordingly | | |
| Measure and manage risks | Create a risk screening process to identify and flag AI systems and use cases that pose unacceptable risk or require additional governance attention. See our Guide to Conducting AI Risk Assessments. | |
| Share essential information | | |
| Test and monitor | | |
| Maintain human control | | |
The Guidance for AI Adoption recommends the use of certain documentation and tools to assist in the management of AI risk. The NAIC has prepared templates for some of that documentation, including:
- AI Screening Tool to identify and flag AI use cases that are higher risk
- AI Register Template to list the AI systems your organisation uses
- AI Policy Guide and Template.
We have also prepared a Guide to Conducting AI Risk Assessments.
The publication of the Guidance for AI Adoption coincided with the release of a new publication by the Australian Signals Directorate (ASD), Artificial intelligence and machine learning: Supply chain risks and mitigations. Because they rely on a complex ecosystem of models, data, libraries and cloud infrastructure, AI and machine learning (ML) systems can introduce distinct cybersecurity challenges to a supply chain.
The ASD publication highlights the importance of AI and ML supply chain security and outlines the key risks and mitigations organisations should consider when developing or procuring an AI system.
You can find tips and considerations for procuring AI systems in our Guide to AI Procurement.
Yes. The Government's intention is that the NAIC will develop and publish further tools and resources to assist organisations in their adoption of AI. The NAIC expects to roll those tools and resources out over the next 12 months.2
- If your organisation hasn't established its AI Governance Framework, now is a great time to do so. Take a look at the six essential practices and the 'Getting started' actions and go from there.
- If your organisation already has an AI Governance Framework, undertake a gap analysis of your framework against the six essential practices and the 'Next steps' to see whether any updates are required.
If you have any questions or would like any assistance in preparing or reviewing your AI Governance Framework, reach out to our team.
The NAIC was established in 2021 to support and accelerate the AI industry in Australia. It is part of the Australian Government's Department of Industry, Science and Resources. More information is available here.
The update follows extensive feedback received by the NAIC throughout 2024-25 as part of consultation on extending the VAISS practices to developers. That feedback went beyond the developer extension: most industry stakeholders were seeking more accessible, actionable and streamlined guidance that could be tailored to both technical and non‑technical audiences, in particular SMEs.
The Guidance was also designed to respond to the results of the 2025 Responsible AI Index, which surveyed the state of responsible AI across a range of sectors and organisations. The report on the Responsible AI Index found that:
- there is still a 'saying-doing' gap between organisations that agreed with ethical AI performance standards and those that had actually implemented responsible AI practices; and
- smaller organisations find it more challenging to implement resource-intensive AI governance practices.


