Key questions boards should be asking
Anthropic recently announced Claude Mythos Preview, an AI model that it says can autonomously find and exploit previously unknown security vulnerabilities 'in every major operating system and web browser'.1
Through Project Glasswing, it is giving early access to select technology and critical infrastructure companies to find and patch vulnerabilities before Mythos is more widely released (and accessible to cybercriminals). Boards should assume those capabilities will soon be available to threat actors, if they are not already: there are already reports of unauthorised access to the model.
The implications are serious and require board-level oversight. Mythos-class AI makes it easier for more attackers to find and exploit vulnerabilities at speed and scale—in some cases, before defenders can respond. Traditional cyber vulnerability management and incident response practices are unlikely to be sufficient to manage these evolving threats.
Echoing these sentiments, the Australian Prudential Regulation Authority (APRA) has issued a letter to its regulated entities, warning that AI adoption is significantly altering the cyber threat landscape by expanding possible attack options and increasing the frequency of cyberattacks. It also noted that information security practices (especially security testing programs and remediation activities) are not keeping pace with 'the AI augmented threat environment', and that AI governance maturity and assurance are generally inadequate.2 APRA has been working with its regulated population (and other government agencies) to address rising cyber threats linked to advanced AI frontier models, such as Claude Mythos.
APRA expects boards to be AI-literate and actively engaged in the oversight of AI risks and their entity's AI usage and strategy (to ensure alignment with risk appetite and tolerance levels).
All of this means that companies are on notice. For directors, this shifts the calculus on what constitutes adequate cybersecurity oversight. Boards that have not satisfied themselves that management is actively assessing and responding to these capabilities and broader AI risks expose their companies and themselves to the reputational and financial consequences of a foreseeable material cyber incident—including heightened exposure to regulatory action and litigation risk.
This Insight contains the questions boards should be asking management to help directors understand the risk, and explains what management should do now.
Questions for boards
Governance and oversight
1. Understanding the threat What specific risks do Mythos-class capabilities pose to our business, and are we adequately resourced to address them? How frequently will management update the board, and how will we know if the risk profile changes? Has management quantified our potential financial, operational and reputational exposure if Mythos-class capabilities are used against us?
2. Risk assessment Do these capabilities warrant a new cyber risk assessment, and if so, when will it be completed and reported to the board?
3. Audit and risk framework Does our audit and risk framework account for the speed and scale of AI-driven threats? Do our internal risk ratings require updating?
4. Insurance Are our cyber insurance policy terms, limits and exclusions adequate for AI-driven attacks at this scale? Should our policies be amended or renewals be accelerated?
5. Board expertise and advice Does the board have access to sufficient cybersecurity expertise (whether through board composition, management reporting or external advisors) to critically evaluate management's responses to these questions, and oversee the organisation's preparedness on an ongoing basis?
Legal and regulatory risk
6. Legal and regulatory exposure What (if anything) are our regulators saying to us about how we should be addressing these threats (eg APRA has been meeting with banks to gauge preparedness against AI-driven attacks)? Are we tracking evolving regulatory and Australian Cyber Security Centre guidance to inform assessments as to what is reasonable or adequate to meet regulatory requirements and regulator expectations (including under the Privacy Act 1988 (Cth), CPS 230, CPS 234 and the Security of Critical Infrastructure Act 2018 (Cth) (the SoCI Act))? Do our vulnerability scanning practices need to be reassessed in light of Mythos-class capabilities (eg do regulators expect AI-assisted scanning to meet current compliance obligations)?
7. Disclosures Do vulnerabilities uncovered by Mythos (or similar) trigger notifications under the SoCI Act, CPS 234, market disclosure rules, or other regulatory reporting requirements?
8. Documenting decisions Are we documenting our remediation prioritisation decisions and supporting rationale, to establish a defensible record in case of litigation or regulatory enforcement?
Supply chain exposure
9. Vendor and third-party risk management How are we ensuring that our third parties are also adapting to the changed threat landscape? What is our plan and timeline for assessing the readiness of key vendors, partners and other third parties to meet these threats? Do we need to amend our contracts to require accelerated notification of security events (including zero-day vulnerabilities)?
10. Glasswing vendors Which of our key vendors and suppliers are participating in Project Glasswing, and how will resulting fixes and insights flow through to our environment?
11. Open-source and supply chain dependencies What is our exposure to vulnerabilities in open-source components embedded in our systems and products?
Breach readiness
12. Response plans Have our cyber incident response plans and playbooks been tested against the speed and scale of Mythos-class threats, including through tabletop exercises simulating AI-driven attack scenarios?
13. Detection and response What are our current capabilities for detecting and responding to AI-enabled unauthorised activity, what gaps have been identified, and what is the plan to address them?
Technical defences and operational resilience
14. Patching Are our patch cycle timelines sufficient, given the speed of Mythos-class vulnerability discovery, and what is the plan for accelerating our patching framework?
15. Defensive AI Are we using AI tools to identify vulnerabilities in our systems? If not, what is the timeline for adoption, and are we positioned to integrate more advanced AI-powered security tools as they become available?
16. Unauthorised access risk What safeguards and guardrails are in place to ensure our teams do not inadvertently scan or exploit another party's networks when using AI-assisted security testing tools?
17. Legacy IT Is there a funded roadmap to address legacy technical debt with urgency, given AI's ability to find vulnerabilities in aged, complex systems at speed? What compensating controls are in place for operational technology environments that rely on legacy industrial control systems incapable of receiving security patches?
18. Vulnerability disclosure programs How are we adapting these programs to account for the volume and speed of AI-driven discovery?
What management should do now
19. Brief the board Provide the board with a clear assessment of the organisation's exposure to Mythos-class AI capabilities, including an honest appraisal of current gaps and resource constraints, and establish a regular reporting cadence.
20. Reassess cyber risk Commission or update a cyber risk assessment that specifically accounts for AI-driven vulnerability discovery and exploitation at scale.
21. Review legal and regulatory obligations Work with Legal and Compliance to map current and emerging obligations (including under the SoCI Act, CPS 234, CPS 230 and market disclosure rules) against the changed threat landscape. Ensure vulnerability scanning and remediation practices meet evolving standards of reasonableness.
22. Document decisions Maintain a defensible record of remediation prioritisation decisions and supporting rationale, particularly where resource constraints require trade-offs.
23. Engage your supply chain Assess the readiness of third parties (particularly key vendors) and update contractual requirements where necessary to ensure accelerated notification and remediation. Identify which suppliers are participating in Project Glasswing and what that might mean for your organisation.
24. Test response plans Conduct tabletop exercises and stress-test incident response plans against AI-driven attack scenarios that reflect the speed and scale of Mythos-class capabilities.
25. Accelerate technical defences Review and, where necessary, accelerate patch cycle timelines, evaluate AI-powered defensive tools, and prioritise remediation or replacement of legacy IT systems.
26. Strengthen fundamentals Ensure patch management processes are current and consistently applied, enforce least privilege access and review access controls, roll out phishing-resistant multi-factor authentication across the organisation, and maintain a comprehensive and up-to-date asset inventory.
27. Review insurance coverage Assess whether current cyber insurance terms, limits and exclusions are adequate for AI-driven attacks, and engage insurers early if renewals or adjustments are needed.