Deloitte to refund government after AI errors found in welfare compliance review
Deloitte will refund part of a A$440,000 consultancy fee after an assurance report for the Australian government’s social security compliance framework was found to contain AI-generated inaccuracies and legal misattributions.

- Deloitte's Australian arm will partially refund a A$440,000 consultancy fee after a government report was found to contain AI-generated errors.
- The report used Microsoft’s Azure OpenAI and included fabricated citations, among them a quote falsely attributed to a federal court judge.
The Australian government has confirmed that Deloitte Australia will refund part of a A$440,000 consultancy fee after delivering a flawed independent assurance review on the Targeted Compliance Framework (TCF), which governs key decisions in the country’s welfare system.
The 237-page report, published on the Department of Employment and Workplace Relations (DEWR) website in July 2025, was later found to include fabricated academic references, false citations, and a quote wrongly attributed to a federal court judge. These errors were linked to content generated using a large language model accessed through Microsoft’s Azure OpenAI service.
The report was part of DEWR’s Targeted Compliance Framework Integrity Assurance Program, established to review whether decision-making processes under the TCF align with legislative requirements, particularly for decisions that reduce or cancel social security payments.
On 5 July 2025, prior to the public revelation of the report’s issues, the department paused decisions under section 42AG of the Social Security (Administration) Act 1999, which governs circumstances where a person fails or refuses to accept suitable employment.
According to a departmental statement released on 3 October 2025, further suspensions of decision-making are in effect for provisions under sections 42AF(2)(d), 42AH, and 42AM, which relate to persistent mutual obligation failures and failure to reconnect with employment services.
These suspensions reflect growing concerns that the administration of the TCF may have resulted in unlawful or procedurally flawed social security payment decisions.
Deloitte acknowledged in a report update that its original assurance report contained errors due to AI-generated content. While it claimed that human reviewers had refined the content, the firm conceded that some references and quotes were incorrect. A revised version, dated 26 September 2025, was quietly released and includes a disclosure about the use of generative AI.
“The updates made in no way impact or affect the substantive content, findings and recommendations in the report,” Deloitte stated.
Christopher Rudge, a legal academic at the University of Sydney, first raised the alarm after discovering citations to non-existent publications, including a fabricated book falsely attributed to Professor Lisa Burton Crawford.
“I knew it was either hallucinated by AI or the world’s best kept secret,” Rudge told the Associated Press.
The department confirmed that Deloitte conducted the independent assurance review, which examined whether the TCF’s IT systems, policies, and operational practices aligned with legislation. The flawed version was initially released on 14 August 2025, with a corrected version and accompanying statement later replacing it.
The flawed report has had real-world consequences. The department confirmed that it has already paid compensation or backpay to individuals whose social security payments were incorrectly cancelled under the now-paused decision pathways. It is also actively considering further compensation for other cases currently under review.
Greens Senator Barbara Pocock criticised Deloitte, stating that the firm “misused AI and used it very inappropriately,” adding that these were errors that would not be tolerated from a first-year university student.
A spokesperson from Deloitte Australia told the Associated Press that the issue had been “resolved directly with the client,” without confirming the exact refund amount. The original contract was reportedly worth A$440,000.
To mitigate further risks, the department is implementing a series of measures including:
- Strengthening assurance processes between DEWR and Services Australia to confirm all paused payment decisions remain suspended.
- Mapping and documenting all processes that could impact payments, ensuring end-to-end legal compliance.
- Limiting IT system changes to those deemed essential, thoroughly tested, and beneficial to participants.
- Enhancing governance frameworks to ensure that all operational changes comply with the law.
The department also reiterated that no decision to resume payment reductions or cancellations will be made unless it is clearly demonstrated that such decisions are legally compliant.
The Deloitte case has contributed to growing scrutiny over the use of AI in consultancy and governance. In June 2025, the UK Financial Reporting Council warned that major firms, including Deloitte, were not adequately monitoring the impact of AI and automation on audit quality.
Globally, Deloitte has faced regulatory actions over audit and ethics failures, including:
- A Rs 2 crore penalty from India’s NFRA in December 2024 for lapses in the audit of Zee Entertainment.
- A US$20 million fine in September 2022 for audit violations by its Chinese affiliate.
- A US$900,000 penalty in September 2023 for audit control failures in Colombia.
- A CAD 1.5 million payment in 2024 in Ontario, Canada, for backdating audit documents.
While these past cases were not explicitly tied to AI, they reinforce ongoing concerns about quality assurance in large-scale consulting operations.
The Australian government has indicated it will tighten consultancy contracts, with likely new clauses governing the use of AI in deliverables, attribution, and legal accuracy, particularly in policy-sensitive domains such as welfare and compliance.