Supporting safe and responsible use of AI
The regulatory approach to AI in Australia is still in its early stages, with the Federal Government currently consulting on the best way to implement regulation that supports the safe and responsible adoption of AI.
While this consultation is ongoing, a number of existing regulatory regimes continue to apply to the development, use and implementation of AI in specific contexts. Work, health and safety (WHS) laws form a key part of this framework for managing AI-related risks in the workplace. However, the use of AI technologies can also introduce novel risks and exacerbate existing ones, including in relation to WHS.
In this Insight, we outline why it is critical for organisations to ensure that WHS risk management practices are effectively applied to this emerging context.
Key takeaways
- WHS laws apply to risks associated with AI use in business. Regulatory and government guidance is increasingly focused on this application.
- Many businesses will be required to invest in WHS system improvement to enable fit-for-purpose risk management for this context. Clear definition of WHS roles and responsibilities for AI is a good starting point.
- While AI offers the potential for step-change improvements in WHS risk controls more broadly, its use and implementation can also pose emergent risks. Businesses must monitor the practicability of emerging solutions as part of discharging their duty of care.
WHS regulation for AI
As in many other contexts, WHS laws do not yet prescribe specific, detailed risk control requirements for AI. Rather, they require businesses to adopt a risk-based approach and to incorporate AI risks into their WHS risk management plans as appropriate, so that they are positioned to respond.
Safe Work Australia's Strategy highlights the need for appropriate oversight of AI systems to ensure workers are not exposed to novel or amplified WHS risks. The strategy calls on businesses to define roles and responsibilities for WHS with respect to AI, to consult with stakeholders to identify early warning signs for WHS hazards, and to contribute to continuing research in the field.
WHS regulators are also focusing on this approach. For example, in NSW, the Centre for Work Health and Safety published research developing an AI WHS Scorecard as a tool for identifying direct WHS hazards that might be associated with AI in the workplace. In Queensland, WorkSafe is promoting ISO/IEC 42001, Information technology - Artificial intelligence - Management system, which specifies requirements for establishing AI management systems. In Western Australia, WorkSafe has included the issue in its Emerging Challenges Strategy.
The Government has published a Voluntary AI Safety Standard (Voluntary Standard) which provides a principles-based approach to using AI safely. The Voluntary Standard was published alongside the Proposals paper for introducing mandatory guardrails for AI in high-risk settings (Proposals Paper), and the principles articulated in the Voluntary Standard form the basis of the proposed mandatory guardrails. The Voluntary Standard calls for the implementation of guardrails that ensure appropriate risk-based processes are applied to the development, implementation and use of AI, including:
- establishing accountability processes and internal governance over AI use (Guardrail 1);
- establishing and implementing risk management processes to identify and mitigate risks (Guardrail 2); and
- enabling human intervention in AI systems (Guardrail 5) and establishing processes to enable impacted individuals to challenge AI generated outcomes (Guardrail 7).
For more detail, see our Insight on the Voluntary Standard and the proposal for mandatory guardrails.
A recent Australian Senate inquiry recommended introducing a specific positive duty for WHS with respect to AI technologies. The Final Report tabled by the Select Committee in November 2024 recommended that the Government extend and apply WHS laws to the workplace risks posed by the adoption of AI. While this was criticised by some stakeholders, it was welcomed by others, including trade unions. The Australian Institute of Health and Safety recently called for AI systems to be designed with worker wellbeing at their core, not as an afterthought.1
WHS offences may also be committed through AI systems without any specific human intent. His Honour Justice Edelman of the High Court of Australia recently reviewed Quoine Pte Ltd v B2C2 Ltd2, in which a corporation was held liable in contract for trades entered into as a result of automated processes associated with cryptocurrency trading. In that case, a company had developed a trading platform which, due to an oversight, executed trades at approximately 250 times the market price. The company was unable to argue that the trades were void due to this automated error. Justice Edelman highlighted that the acts of the system were attributed to the corporation. The same principles could be applied under WHS legislation.
WHS risks linked to the use of AI
Different AI technologies present varying types and levels of risk. It is important that organisations understand the risks attached to their particular uses of AI, both the novel risks AI introduces and the existing WHS risks it might amplify.
We asked a well-known AI model to identify key health and safety risks associated with AI technology that might need to be managed, and worked with the output to demonstrate some key examples below.
Category | Examples |
---|---|
Psychological health risks | Increased surveillance and algorithmic management of workers creating stress or anxiety; reduced human interaction and support |
Physical safety risks | Malfunction of AI-controlled plant, machinery or robotics; failures in AI-based safety or security screening |
Decision-making risks | Over-reliance on opaque or biased AI outputs; automated decisions made without adequate human oversight or the ability to challenge them |
Ergonomic risks | Poorly designed human-machine interfaces; changes to work patterns and pacing driven by AI systems |
Cyber and data risks | Breaches of sensitive worker data collected by monitoring tools and wearables; compromised AI systems creating downstream safety failures |
Amplification of existing risks
The Proposals Paper specifically highlights how differences between the process for developing an AI system and that for a traditional software system can lead to AI amplifying existing risks. Software systems have traditionally been developed with significant human oversight throughout the development process, where individuals must specifically define the logic for the system at every decision point. This makes the systems easier for humans to control, predict and understand. By comparison, AI systems often employ machine learning algorithms, which enable systems to make decisions without being explicitly programmed. This makes the systems less transparent and harder to analyse and explain, which amplifies the risk of harm, particularly in contexts like WHS where it is important to be able to explain the reasons for a decision, or how a decision was reached.
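To make that contrast concrete, the minimal Python sketch below compares an explicitly programmed rule with a learned model. The feature names, thresholds and training data are hypothetical illustrations, and scikit-learn is used only as a convenient stand-in for any machine learning library.

```python
# A minimal, hypothetical sketch: the features, thresholds and training
# data below are illustrative assumptions, not drawn from any real system.
from sklearn.tree import DecisionTreeClassifier

# Traditional software: a human defines the logic at every decision point,
# so the rule can be read, audited and explained line by line.
def flag_for_inspection(load_kg: float, temperature_c: float) -> bool:
    return load_kg > 500 or temperature_c > 80

# Machine learning: the decision logic is inferred from example data rather
# than written down. The learned rules live inside the fitted model and are
# harder to inspect and explain after the fact.
training_features = [[450, 60], [520, 85], [300, 40], [610, 90]]
training_labels = [0, 1, 0, 1]  # 1 = a past incident occurred

model = DecisionTreeClassifier().fit(training_features, training_labels)
print(model.predict([[480, 70]]))  # the 'why' behind this output is opaque
```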
As a high-profile example, reliance on an AI system for a role traditionally performed by humans recently created a significant safety risk at the Melbourne Cricket Ground (MCG). The MCG implemented an AI system to conduct security screening of patrons as they entered the stadium, the first of its kind in Australia. However, two men allegedly brought firearms into the venue without being stopped by security.
The failure was attributed to human error: the new AI security screening unit identified 'items of concern' and flagged the men for further investigation, but the secondary manual screening process was not properly carried out. This example shows how inadequate training on procedures, unclear allocation of roles and responsibilities, and insufficient awareness of new systems can create significant safety risks.
Harnessing AI-driven opportunities in WHS
While the focus on risk management is important, the flipside is that AI also has vast potential to bring step-changes in safety management across the broader spectrum of WHS risks.
Recent research indicates that human-AI collaboration can foster motivation and engagement, enhancing safety performance.3 AI offers significant advantages in the real-time operation and monitoring of industrial plant and equipment condition and maintenance. It can also improve environmental and technical monitoring and planning. AI-driven drones and robots can be deployed in hazardous environments and emergencies as required.
AI can also improve worker engagement with WHS systems, for example through virtual simulations in training and AI-based safety management tools that guide workers through safe operating procedures in real time. Extended reality technologies can support interaction with hazardous environments and enable direct, remote supervision. AI-driven wearable technology also enables organisations to track the health and safety of workers in real time.
Some organisations are also exploring predictive analytics in the field, including identifying patterns in data that may signal emerging WHS risks. This may include monitoring worker behaviour and sentiment analysis to help managers identify the need for early intervention. For example, predictive fatigue risk management systems are being trialled in mining contexts to highlight critical risks to supervisors and prevent fatigue-driven accidents before they occur.
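As a simple illustration of the kind of pattern-flagging involved, the Python sketch below scores recent shifts for fatigue risk and surfaces workers for supervisor review. The fields, thresholds and scoring rule are hypothetical assumptions for illustration, not a real fatigue model.

```python
# A minimal, hypothetical sketch of predictive flagging over WHS data.
from dataclasses import dataclass

@dataclass
class ShiftRecord:
    worker_id: str
    hours_worked: float
    rest_hours_before: float

def fatigue_score(shift: ShiftRecord) -> float:
    # Simple additive score: long shifts and short rest both add risk.
    score = 0.0
    if shift.hours_worked > 10:
        score += (shift.hours_worked - 10) * 0.2
    if shift.rest_hours_before < 10:
        score += (10 - shift.rest_hours_before) * 0.3
    return score

def flag_elevated_risk(shifts: list[ShiftRecord], threshold: float = 1.0) -> list[str]:
    # Surface worker IDs for supervisor review before an incident occurs.
    return [s.worker_id for s in shifts if fatigue_score(s) >= threshold]

recent = [
    ShiftRecord("W-101", hours_worked=12.5, rest_hours_before=7.0),
    ShiftRecord("W-102", hours_worked=8.0, rest_hours_before=12.0),
]
print(flag_elevated_risk(recent))  # ['W-101']
```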
It will form part of a business's duty of care to explore emerging technologies on a continuing basis and to assess whether there is an evidence-based case to adopt new options for managing WHS risks. The opportunities offered by AI solutions must be considered, including weighing whether those solutions might themselves exacerbate WHS risks. Ultimately, where these solutions are assessed as providing improvements that are available and suitable, then, provided the risk can be appropriately managed and the cost is not grossly disproportionate to the risk, the opportunity should not be ignored.
Actions you can take now
Now is the time for organisations to review their existing WHS management and systems to ensure they account for the unique ways in which AI-related risks can manifest. Meeting WHS obligations may require investment in people and systems, but this is likely to be offset by the value created through safe and healthy adoption.
To manage AI risks effectively, businesses should invest in WHS system improvements to enable fit-for-purpose risk management for this emerging technology. Below are two key considerations that should inform this process:
- First, a clear definition of WHS roles and responsibilities for AI is required. In some circumstances there may be a need to shift accountability for WHS risk management away from the 'shop floor' and into the hands of technical management and software managers with the technical expertise to identify and control specific AI risks during programming. We recommend specific WHS legal training interventions at the earliest stages of technology development.
- Second, WHS systems will need to be updated to respond to new contexts. Businesses will need to develop AI-specific WHS risk assessment tools that are bespoke to their industry and AI application, with risk controls considered from the design stage; the sketch after this list illustrates one way such an assessment might be structured.
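The Python sketch below shows one hypothetical way an AI-specific WHS risk assessment record might be structured. The fields, categories and ratings are illustrative assumptions, not a prescribed format, and the example entry draws on the MCG screening scenario discussed above.

```python
# A minimal, hypothetical sketch of an AI-specific WHS risk assessment
# record. Fields, categories and ratings are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum

class Likelihood(Enum):
    RARE = 1
    POSSIBLE = 3
    LIKELY = 5

class Consequence(Enum):
    MINOR = 1
    MODERATE = 3
    SEVERE = 5

@dataclass
class AIRiskAssessment:
    ai_system: str                 # the AI application being assessed
    hazard: str                    # e.g. over-reliance on opaque outputs
    affected_workers: str
    likelihood: Likelihood
    consequence: Consequence
    controls: list[str] = field(default_factory=list)
    accountable_role: str = ""     # named WHS role responsible for the risk

    def risk_rating(self) -> int:
        # Simple likelihood x consequence matrix score.
        return self.likelihood.value * self.consequence.value

entry = AIRiskAssessment(
    ai_system="AI security screening unit",
    hazard="Flagged items not escalated to secondary manual screening",
    affected_workers="Security staff and patrons",
    likelihood=Likelihood.POSSIBLE,
    consequence=Consequence.SEVERE,
    controls=["Documented escalation procedure", "Training on new system"],
    accountable_role="Venue security manager",
)
print(entry.risk_rating())  # 15 -> prioritise for treatment
```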
Many businesses are already developing AI risk management systems and governance processes at a broader level to respond to related risks such as security and consumer protection, including to bring their processes into line with the guardrails in the Voluntary Standard. Where this is occurring, it is critical to ensure the approach taken through these plans is consistent with the standard of care that applies under WHS laws (ie that risk is managed so far as is reasonably practicable). Alternative standards based on pure commercial cost-benefit analysis may fall short when a WHS regulator reviews them following an incident or complaint.
This all serves to highlight that many existing WHS management teams will be required to upskill to support technological risk management and to broaden their internal interfaces. Companies must face these challenges head on with investment sufficient to meet their duty of care under WHS legislation.
Footnotes
1. Australian Institute of Health and Safety, 'Warning of Impact of AI This Workers’ Memorial Day' (28 April 2025).
2. [2020] SGCA(I) 02.
3. Yunshuo Liu and Yabin Li, 'Does human-AI collaboration promote or hinder employees’ safety performance? A job demands-resources perspective' (2025) 188 Safety Science.