INSIGHT

Governance doesn't stand still: 9 FAQs to help understand the Government's new Guidance for AI Adoption

By Valeska Bloch, Isabelle Guyot, Rachel Griffith

Actionable steps for organisations adopting AI responsibly

Just a year after the publication of the Australian Voluntary AI Safety Standard (VAISS), the National AI Centre (NAIC) has published new Guidance for AI Adoption, which updates (and replaces) the VAISS.1

The 2025 update both streamlines AI governance expectations and reflects a maturing of the operational governance framework for AI. It adds detailed, actionable implementation steps and also addresses developer requirements.

We've set out below some key FAQs to help you understand the new Guidance for AI Adoption.

How is the Guidance for AI Adoption different to the Voluntary AI Safety Standard?

The Guidance for AI Adoption:

  • Condenses the 10 voluntary guardrails established in the VAISS into six essential practices that establish baseline responsible AI governance (and that can be expanded as an organisation's AI use grows or its governance capabilities mature).
  • Expands the scope of the guidance to AI developers as well as deployers.
  • Introduces a dual-structured approach:
    • Guidance for AI Adoption: Foundations – which sets out essential practices for getting started in responsible AI governance and is designed for:
      • organisations starting out in AI adoption and governance
      • organisations using AI in low-risk ways
      • professionals who are new to AI and AI governance
      • professionals looking for general guidance on best practice when using AI in business contexts.
    • Guidance for AI Adoption: Implementation practices – which offers comprehensive, step-by-step instructions on how to implement the essential practices and is designed for use by organisations with mature governance structures, technical development capabilities or high-risk use cases.

What if I have already used the VAISS to prepare my organisation's AI governance framework?

The practices established by the VAISS have been integrated into the Guidance for AI Adoption: Implementation practices, and the NAIC has helpfully mapped the VAISS guardrails to the new practices. See here: VAISS x Implementation practices crosswalk

What are the six essential practices for good AI governance?

The practices are designed so that you don't need to implement everything at once. Once you've established a baseline of good AI governance ('Getting started'), you can add more actions ('Next steps') as your organisation's AI use grows or your governance capabilities mature:

Practice 1: Decide who is accountable

Getting started:
  • Assign a senior leader as the overall AI governance owner
  • Create an AI policy that sets out how your organisation will use AI responsibly

Next steps:
  • Make a specific person accountable for every AI system your organisation uses
  • Train your accountable people so they can make informed decisions about AI’s risks and behaviours
  • Clarify supply chain accountabilities
  • Turn your AI policy into a governance framework

Practice 2: Understand impacts and plan accordingly

Getting started:
  • Carry out a stakeholder impact assessment
  • Set up channels for people to report problems, challenge or question AI decisions

Next steps:
  • Engage your stakeholders early and continue engaging them throughout the AI lifecycle
  • Identify systemic unwanted impacts by monitoring patterns of feedback

Practice 3: Measure and manage risks

Getting started:
  • Create a risk screening process to identify and flag AI systems and use cases that pose unacceptable risk or require additional governance attention (see our Guide to Conducting AI Risk Assessments)

Next steps:
  • Set up risk management processes that account for the differences between traditional, narrow, general purpose and agentic AI systems
  • Conduct risk assessments and create mitigation plans for each specific use case and the impacts identified in that context (see our Guide to Conducting AI Risk Assessments)
  • Apply risk controls based on the level of risk for each of your specific uses of AI
  • Create processes to investigate, document and analyse AI-related incidents

Practice 4: Share essential information

Getting started:
  • Create and maintain an AI register
  • Disclose your use of AI

Next steps:
  • Identify, document and communicate AI system capabilities and limitations
  • Be transparent across your AI supply chain
  • Set up ways to explain AI outcomes, especially when they affect people

Practice 5: Test and monitor

Getting started:
  • Ask for proof that your AI system has been properly tested
  • Test before you deploy a system
  • Monitor your system after you deploy it

Next steps:
  • Extend your data governance and cybersecurity practices to your AI systems
  • Stress-test your AI system to spot issues or vulnerabilities before others do
  • Consider getting independent testing of your AI systems

Practice 6: Maintain human control

Getting started:
  • Ensure meaningful human oversight of your AI system
  • Build in human override points

Next steps:
  • Provide training to people overseeing AI systems
  • Maintain alternative pathways to ensure your organisation’s critical functions can continue even if your AI systems malfunction or are being retired

What other tools and resources are available?

The Guidance for AI Adoption recommends the use of certain documentation and tools to assist in the management of AI risk, and the NAIC has prepared templates for some of that documentation.

We have also prepared a Guide to Conducting AI Risk Assessments.

What about supply chain risk?

The publication of the Guidance for AI Adoption coincided with the release of a new publication by the Australian Signals Directorate (ASD), Artificial intelligence and machine learning: Supply chain risks and mitigations. Because AI and machine learning (ML) systems rely on a complex ecosystem of models, data, libraries and cloud infrastructure, they can introduce distinct cybersecurity challenges to a supply chain.

The ASD publication highlights the importance of AI and ML supply chain security and outlines the key risks and mitigations organisations should consider when developing or procuring an AI system.

You can find tips and considerations for procuring AI systems in our Guide to AI Procurement.

Will there be more guidance from Government?

Yes. The Government's intention is that the NAIC will develop and publish further tools and resources to assist organisations in their adoption of AI, and the NAIC expects to roll these out over the next 12 months.2

What should I do now?
  • If your organisation hasn't established its AI Governance Framework, now is a great time to do so. Take a look at the six essential practices, begin with the 'Getting started' actions and go from there.
  • If your organisation already has an AI Governance Framework, undertake a gap analysis of your framework against the six essential practices and the 'Next steps' to see if any updates are required.

If you have any questions or would like any assistance in preparing or reviewing your AI Governance Framework, reach out to our team.

What is the National AI Centre?

The NAIC was established in 2021 to support and accelerate the AI industry in Australia. It is part of the Australian Government's Department of Industry, Science and Resources. More information is available here.

Why was the Guidance for AI Adoption prepared?

The update follows extensive feedback received by the NAIC throughout 2024-25 as part of consultation on extending the VAISS practices to developers. That feedback went beyond the proposed extension: most industry stakeholders were seeking more accessible, actionable and streamlined guidance that could be tailored to both technical and non‑technical audiences, in particular SMEs.

The Guidance was also designed to respond to the results of the 2025 Responsible AI Index, which surveyed the state of responsible AI across a range of sectors and organisations. The report on the Responsible AI Index found that:

  • there is still a 'saying-doing' gap between organisations that agree with ethical AI performance standards and those that have actually implemented responsible AI practices; and
  • smaller organisations find it more challenging to implement resource-intensive AI governance practices.