Introduction
When Deloitte admitted to using a generative AI model (GPT-4o via Azure OpenAI) in its $440,000 compliance review for the Australian government, the issue went far beyond faulty references. The report, which initially contained fabricated legal cases and academic citations, revealed a deeper institutional vulnerability: a quiet shift from thinking to outsourcing thought. Even after corrections, Dr. Christopher Rudge noted that instead of replacing each hallucinated reference with a substantiated source, the updated version substituted “five, six or seven or eight” new ones in its place, suggesting the original claims were not grounded in specific evidentiary sources (Dhanji, 2025; Karp, 2025).
This case signals a broader trend: institutions are not just automating tasks, they risk relinquishing authorship over the very claims they are making.
From Tool to Thought Partner
Generative AI blurs the line between instrument and collaborator. Tools like GPT-4o don’t just execute commands; they anticipate, suggest, and complete. While powerful, this shifts the cognitive centre of gravity. In his essay Augmentation or Abdication, Carlo Iacono warns that tools which “think back” may prompt users to slide from authoring to curating, without noticing the transition (Iacono, 2025).
Drawing on Iacono’s framing, let’s consider what it means to retain authorship and agency when working alongside generative models.
What We Lose: Epistemic Agency and Cognitive Sovereignty
Two interlinked concepts help us navigate this challenge:
- Epistemic agency: the ability to form, justify, and defend one’s own knowledge claims.
- Cognitive sovereignty: the authority to direct and take responsibility for the reasoning process itself.
These two ideas speak to the active role researchers, analysts, and institutions must play in producing knowledge. Epistemic agency isn’t just about having opinions—it means being able to explain how you arrived at a conclusion, what evidence supports it, and why alternative interpretations were considered or rejected. Cognitive sovereignty goes one step further: it’s about owning the process of reasoning itself, rather than deferring to external tools or models.
In the Deloitte report, both were compromised. The fabricated references did not merely mislead; they pointed to a more serious issue: the claims made in the report may never have been anchored to identifiable sources at all. This undermines the ability to interrogate or contest those claims.
The Plausibility Trap
Language models are optimised for coherence, not correctness. As Kalai et al. (2025) explain, hallucination is not a bug; it is an emergent behaviour of models trained to predict text rather than validate truth. Without safeguards, their outputs can appear authoritative while resting on statistical sand.
Institutions that incorporate these models without robust human verification expose themselves to epistemic drift: a slow but significant decoupling of claims from knowledge.
Fluency Without Foundations
AI can enhance workflows such as drafting, summarising, and proposing, but we are in dangerous territory when it replaces thinking and deliberation. When convenience leads to dependence, and dependence leads to unexamined acceptance, we have crossed the line into cognitive outsourcing.
The Deloitte review illustrates this danger. It bore the form of expertise but lacked the evidentiary rigour to support its claims. As Labor senator Deborah O’Neill put it, “Deloitte has a human intelligence problem” (Dhanji, 2025).
What Does Responsibility Look Like Now?
Retaining cognitive sovereignty in AI-assisted work requires more than safeguards; it demands deliberate practice. These principles are not just technical fixes but epistemic commitments:
- Transparent disclosure of where and how AI was used—not as a disclaimer, but as part of responsible authorship;
- Human verification of all AI-generated content, without exception;
- Documentation of reasoning, to ensure knowledge remains traceable, contestable, and open to scrutiny;
- Epistemic humility, acknowledging the limits of both tools and teams, resisting the illusion of automated certainty.
Researchers and institutions that adopt these practices won’t just avoid embarrassment; they’ll strengthen the integrity of their work in a rapidly changing knowledge environment.
Closing Reflection
The Deloitte case is a warning, a signal flare in the growing haze of AI-assisted expertise. It urges us to distinguish between fluency and knowledge, between output and understanding. Generative AI can support our work, but only if we remain epistemically awake, continuously interrogating how claims are constructed and by whom.
Cognitive sovereignty is not a luxury; it is a precondition for credible knowledge. To relinquish it is to invite not just error, but a slow corrosion of institutional legitimacy, intellectual integrity, and trust in our work.
References
Dhanji, K. (2025, October 6). Deloitte to pay money back to Albanese government after using AI in $440,000 report. The Guardian. https://www.theguardian.com/australia-news/2025/oct/06/deloitte-to-pay-money-back-to-albanese-government-after-using-ai-in-440000-report
Iacono, C. (2025). Augmentation or Abdication. The Captain’s Chair. https://hybridhorizons.substack.com/p/the-captains-chair
Kalai, A.T., Nachum, O., Vempala, S.S., & Zhang, E. (2025). Why Language Models Hallucinate. arXiv:2509.04664v1.
Karp, P. (2025, October 13). How one academic unravelled Deloitte’s AI errors. The Australian Financial Review. https://www.afr.com/politics/how-one-academic-unravelled-deloitte-s-ai-errors-20251013-p5n224