Australian businesses are increasingly at risk of US-style lawsuits
Artificial intelligence is often discussed through the lens of algorithms and data, but its true ecosystem is far more expansive. Today’s AI capability is the product of an intricate value chain in which hardware and software are deeply intertwined. This spans the specialised chips that power model training, the digital infrastructure housed in data centres, the cloud platforms that enable scale, and the software frameworks and applications that bring AI into users' hands. AI use implicates companies not only through their direct deployment of tools, but also through their suppliers' activities and their customers' behaviour.
With Australia following the rest of the world in its rapid uptake of AI technologies, we expect the private litigation trends that have crystallised in the US and elsewhere to take shape here. The anticipated growth of agentic AI also gives rise to novel challenges and risks, and is sharpening the debate about what AI's responsible and sustainable evolution should look like.
Private litigation risk does not arise in a vacuum. In Australia, the regulatory environment surrounding AI is evolving rapidly, and this trajectory is shaping the landscape of potential private claims. Alongside that, existing regulatory frameworks (such as the prohibitions on misleading or deceptive conduct) continue to apply to AI systems. The Australian Securities and Investments Commission and the Australian Competition and Consumer Commission have also each signalled that AI-related conduct is a priority.
For companies, this creates a layered compliance environment in which voluntary standards, sector-specific regulation and existing obligations at law are interacting.
In this Insight, we take a ‘bird’s‑eye view’ of some of the principal types of private litigation risk that may arise for companies operating at different stages of the AI value chain.
Key takeaways
- In 2024 and 2025, we observed increasing AI-related litigation against US companies brought by a broad spectrum of their stakeholders. It is conceivable that each of these types of suit could be brought against Australian businesses.
- Australia's class action regime is well suited to aggregating AI-related harms, such as allegations of discrimination arising from algorithmic decision-making. Companies deploying AI at scale should consider their exposure not only to individual claims but to the risk that a single systemic failure—whether in model design, training data or deployment—could give rise to a group proceeding.
- The rapid emergence of agentic and highly autonomous AI systems adds a further layer of complexity. These systems raise novel questions about foreseeability, control, and responsibility for downstream harms.
- The AI value chain is inherently global, and that fragmentation can create significant jurisdictional complexity, depending on the nature of a dispute. Companies operating across borders should seek to understand their exposure under the laws of each jurisdiction in which their AI systems are deployed or have effect, and take care that contractual arrangements—including with upstream AI vendors and downstream distributors—allocate responsibility for cross-border compliance and liability.
Litigation and complaints risks in the AI hardware value chain
Environmental and social impacts
| Who this relates to: companies involved in the upstream and mid‑stream AI hardware supply chain whose activities may attract scrutiny on issues of environmental and social performance—from critical mineral miners and refiners, to chip and component manufacturers, to data centre developers and operators, as well as their banks and funders. |
The rapid expansion of AI capability is underpinned by a global hardware supply chain that is both capital- and resource-intensive. As demand for specialised chips, high-performance computing equipment and supporting infrastructure accelerates, manufacturers and upstream suppliers face increasing scrutiny of the environmental and social impacts of their operations.
Beyond the risk of formal litigation, companies involved in the AI hardware ecosystem are also encountering a rise in non-judicial complaints and activism. Civil-society groups, investors and community stakeholders are increasingly using public campaigns to challenge the sourcing of minerals, the environmental footprint of data-centre construction and operation, and the broader sustainability profile of AI-enabling infrastructure. Environmental claims are themselves a focus: in March 2026, the AFR reported on short-sell campaigns by Australian activists using bespoke AI tools that purport to assess sustainability commitments for likelihood of fraud or misrepresentation, including allegations of greenwashing by data centre owners and operators.
These developments are intersecting with broader ESG and sustainability trends around issues such as climate change, biodiversity impacts and water security.
- Critical minerals: the extraction of critical minerals in high‑risk jurisdictions—often involving complex geopolitical, labour and sustainability issues—creates potential exposure to claims of misleading disclosures, inadequate due diligence processes, or failures to manage foreseeable human rights or environment-related harms.
- Digital infrastructure: physical infrastructure such as data centres has the potential for environmental and social impacts, given its demand for electricity, water and land, and can attract allegations concerning e-waste and noise pollution. Supply chains can also carry human rights risks, including in the components of renewable energy projects with which data centres may be co-located. Depending on location, there may also be impacts on the rights and cultural heritage of Indigenous peoples or vulnerable landowners.
Together, these trends signal that companies operating anywhere along the hardware value chain—from mining and refining, through to chip fabrication and data centre development—may find themselves navigating a more assertive landscape of private claims, quasi-regulatory complaints and stakeholder-driven activism. Understanding these dynamics is increasingly essential to maintaining social licence and managing litigation risk.
AI‑enabling infrastructure: construction and development disputes
| Who this relates to: data centre owners, operators and private capital. |
The current data centre build cycle is entering a risk-intensive phase, driven by powerful demand from digitisation and AI. Capital is being deployed well ahead of data centre energisation and revenue, while power and water constraints, tighter planning and environmental scrutiny, and a thinning insurance market all compress the margin for error. At the same time, rapid changes in chips, cooling and architecture increase the chance that designs will shift mid-program.
Disputes in the current data centre build cycle most commonly arise in the following areas:
- Utility capacity and approvals risk: an acute risk is a physically complete site that cannot be energised, cooled or commissioned due to grid capacity, water allocations or approval delays, at a time when regulatory scrutiny of network investment and demand forecasting is tightening. The Australian Energy Regulator, for example, has challenged Jemena's bid to recover $2 billion from customers over the five years to 2031, questioning its assumption of a doubling in electricity demand to support data centres. These debates, together with connection queues, have the potential to delay or reduce capacity deliveries, forcing principals into prolonged carrying costs and resequencing.
- Insurance and resilience gap: mega-scale AI campuses are outgrowing traditional insurance capacity, so many projects can only obtain maximum foreseeable loss cover (which estimates the highest damage a business could suffer based on certain assumptions), leaving gaps in coverage for catastrophic events and, critically, business interruption from non-physical triggers (eg a power or water outage). This makes insurance a common source of disputes. Insurers are cautious about bushfire, flood and storm exposures, as well as critical infrastructure and geopolitical threats, so securing coverage may take longer, cost more and often include carve-outs. For example, a power outage lasting less than one hour could eliminate up to six months of revenue under strict penalty clauses in leasing contracts. These constraints flow into disputes when incidents occur, including whether the event is insured, how business interruption is calculated, and who bears the loss if the policy does not respond.
- Technology advances and changes: rapid shifts in chip thermal design power, rack density, liquid cooling standards and architecture create a significant risk that a facility's design is functionally outdated before it is commissioned. The pace of change in AI hardware in particular means that specifications agreed at the outset of a multi-year build may no longer reflect prospective tenants' requirements by practical completion, leaving the sponsor with a facility that is technically operational but commercially misaligned with market demand. There is also a material risk of overbuild, where capacity is delivered ahead of monetisable workloads, whether because AI demand plateaus or is reallocated, or because utility constraints delay the activation of contracted capacity, resulting in a significant period of revenue underperformance against the original financial model.
- Integration risks: where delivery is disaggregated, with owner-supplied plant and equipment procured separately from the main construction contract, integration risk becomes acute. Individual packages may each be technically compliant in isolation, yet the facility fails to perform as an integrated system; the resulting disputes at acceptance can be protracted and costly, given the difficulty of attributing fault across multiple contracts and suppliers. Separately, contractors may pursue extensions of time and prolongation costs where delays are attributable to utilities, regulatory approvals or owner-furnished long-lead items, and may seek variation pricing where scope or specifications change during the build. Both occurrences are common in large data centre projects and can significantly increase total project cost, eroding the contingency built into the original budget.
- Financing risks: data centres are among the most capital-intensive infrastructure assets in the world, with costs and build times rising sharply due to supply chain constraints, grid connection delays, labour shortages and planning bottlenecks. The scale of capital required makes pure equity funding impractical, and sponsors typically rely on floating-rate construction loans intended to be refinanced with cheaper, longer-term debt once the facility is complete, commissioned and generating contracted revenue. Under this model, refinancing is not guaranteed. If construction costs overrun or the schedule slips, the asset may not be ready or fully let when the construction loan matures, leaving the sponsor seeking to secure debt from a position of weakness. Even if refinancing is available, it may come at a higher interest rate, a lower loan-to-value ratio, or with stricter covenants than originally modelled, all of which reduce the cash available to equity investors. In the worst cases, lenders may require the sponsor to repay a larger portion of the old debt than anticipated, forcing an additional equity injection that was not foreseen (the stylised calculation below illustrates this refinancing gap).
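To make the refinancing gap concrete, the short sketch below works through a stylised example. All figures (loan size, stabilised valuation and loan-to-value ratios) are hypothetical assumptions chosen for illustration, not numbers drawn from any actual project or financing.

```python
# Stylised illustration of data centre refinancing risk.
# All figures below are hypothetical assumptions, for illustration only.

def refinancing_gap(construction_debt: float,
                    stabilised_value: float,
                    refi_ltv: float) -> float:
    """Equity injection needed if the new loan (value x LTV)
    cannot fully repay the maturing construction loan."""
    new_debt = stabilised_value * refi_ltv
    return max(0.0, construction_debt - new_debt)

# Base case as modelled: an $800m construction loan refinanced against
# a $1.2bn stabilised valuation at a 70% loan-to-value ratio.
base = refinancing_gap(800e6, 1.2e9, 0.70)      # new loan of $840m covers the debt

# Stressed case: cost overruns push debt to $900m, slower leasing cuts
# the valuation to $1.0bn, and lenders tighten the LTV to 60%.
stressed = refinancing_gap(900e6, 1.0e9, 0.60)  # new loan of $600m leaves a gap

print(f"Base case equity injection:     ${base / 1e6:,.0f}m")      # $0m
print(f"Stressed case equity injection: ${stressed / 1e6:,.0f}m")  # $300m
```

In the stressed case, the lower valuation and tighter loan-to-value ratio mean the new loan cannot fully repay the maturing construction debt, and the sponsor must fund the $300 million shortfall with fresh equity that was never modelled.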
Downstream risks: claims arising from AI software and deployment
| Who this relates to: all sectors, but in particular organisations that develop, train or deploy AI models—including software developers, platform providers, data‑rich enterprises, and any business integrating AI into decision‑making processes in areas such as hiring, credit, insurance, customer screening or automated service delivery. |
A different set of litigation exposures is emerging from the software and application layers. These arise not from AI's physical footprint, but from the behaviour, outputs and deployment of AI systems themselves—including how models are trained, how they perform, and how they are integrated into business processes and consumer-facing tools. As AI applications become more embedded in everyday commercial and consumer interactions, the potential for private disputes is increasing.
Trends for corporate stakeholders
'AI washing' securities claims
- A rapidly growing number of US securities lawsuits allege that a defendant company misrepresented AI-related capabilities or efficiencies, or downplayed AI performance issues, to investors.
- For example, investors in Innodata have alleged that the company did not possess proprietary AI as claimed but instead relied heavily on offshore workers.
- No comparable cases have been brought in Australia; however, it is conceivable that such claims could be brought under market integrity, misleading or deceptive conduct or continuous disclosure laws.
AI securities fraud class actions
- Investors are also suing companies for alleged failures to disclose the competitive threat that AI poses to their business model.
- For example, a Los Angeles-based law firm is proposing a shareholder class action against Atlassian for failing to warn investors about the threat of AI to its software products, following a decline in the company's share price.
- Similar claims have also been launched against other software companies, including Monday.com.
Algorithmic discrimination claims
- A number of US lawsuits have alleged that the use of AI tools in hiring processes has disadvantaged job applicants based on their race, gender, age or disability, in violation of anti-discrimination laws.
- For example, job seekers have filed a class action in California against HR tech company Workday, arguing that its AI recommendation system wrongfully discriminates against older applicants.
- Plaintiff law firms are also targeting companies that use AI to collect personal data from job applicants.
- For example, a class action filed in January in California alleges that AI hiring platform Eightfold scraped personal data of more than 1 billion workers, using this to rank candidates without required disclosures under the Fair Credit Reporting Act.
- No similar cases have been brought in Australia, but federal and state-based anti-discrimination legislation and the Fair Work Act 2009 (Cth) could conceivably support equivalent claims.
Breach of contract
- A number of US class actions have alleged that the use of AI tools in business processes has caused companies to fail to fulfil contractual obligations to consumers.
- For example, a proposed class action before the District Court in Minnesota alleges that UnitedHealth Group and naviHealth used an AI program called 'nH Predict' to wrongfully deny medical care coverage to patients, in breach of their insurance agreements.
- No similar litigation has been pursued in Australia yet, but some local companies have already been required to issue refunds for AI-related service failures.
'AI washing' consumer claims
- A number of US consumer lawsuits have alleged that a defendant company has misrepresented AI-related product capabilities to consumers.
- For example, a class action filed in California alleges that Apple deceptively marketed the iPhone 16 as including its new 'Apple Intelligence' AI features, despite those features remaining non-functional.
- Although no private litigation claims have been commenced in Australia, both the ACCC and ASIC are actively monitoring marketing campaigns for potential 'AI washing' issues.
Chatbot misrepresentation claims
- Companies can be held liable for false or misleading representations their chatbots give to users.
- In a well-known Canadian case, Air Canada was ordered to pay damages and fees after its AI chatbot wrongly advised a passenger on bereavement fares; the airline was held liable for the chatbot's misinformation.
Algorithmic pricing claims
- US courts are seeing a growing number of cases alleging that AI-driven pricing tools enable illegal price collusion.
- For example, property management software provider RealPage and several property managers that used its pricing algorithms have faced more than thirty private lawsuits since 2022. The claims allege that the software facilitated collusion among landlords over rental pricing.
Algorithmic discrimination cases
- A number of US lawsuits have alleged that the use of AI tools in business processes has disadvantaged consumers based on their race, gender, age or disability, in violation of anti-discrimination laws.
- Sectors that have been involved in such claims include:
- insurance, on the basis that AI tools have screened for fraud based on biometric data, behavioural data and housing data that function as proxies for race; and
- housing, on the basis that AI tools have screened for creditworthiness based on data that function as proxies for race.
- In Australia, the 2024 case of Tickle v Giggle for Girls Pty Ltd found indirect discrimination where AI facial recognition software identified a transgender woman as male, preventing her from registering for the app. Even though a human reviewed the AI's output, the decision signals real risk for any company using AI in ways that produce discriminatory outcomes.
Product liability cases
- The US is seeing a rise in product liability claims against AI developers, including cases involving wrongful death.
- In March, two landmark jury decisions found Meta and Google liable for enabling child exploitation on their platforms. Though not strictly AI cases, they are relevant to AI-driven platform design: plaintiffs successfully argued that both companies harmed young users through deliberate design choices, including the use of AI-driven algorithms. It is conceivable that similar cases will be brought against generative AI companies.
- The Federal Government's recent review of AI and the Australian Consumer Law (the ACL) found that the ACL may apply to some AI systems—flagging genuine litigation risk ahead.
Privacy cases
- A number of US lawsuits allege that AI platforms are collecting personal data through chatbots without proper disclosure or consent, in breach of privacy laws.
- For example, a proposed federal class action in the US alleges that Otter.ai's AI-powered transcription service records private meetings on Zoom, Google Meet and Microsoft Teams without participants' consent, in violation of state and federal wiretap and privacy laws.
- Although no private cases have been initiated in Australia, there have been several investigations by the Australian Information Commissioner and its state equivalents into breaches of privacy laws through the use of AI. These investigations indicate that future litigation in Australia is a genuine and growing possibility, particularly with the introduction of the statutory tort for serious invasions of privacy.
Next steps
With AI's expanding ecosystem, companies that use it—and this may soon be almost all companies—face growing and increasingly complex risk. This will only accelerate, in Australia and elsewhere.
We will continue to keep you updated on developments, and please don't hesitate to contact any of the people below if you would like to discuss the issues raised in this Insight.