
AI can summarise 20 papers in seconds.
And at first glance, that feels like a breakthrough.
Tools like ChatGPT, Claude, and Perplexity make it possible to move quickly through large volumes of literature — extracting key findings, clustering themes, and producing clean, readable summaries almost instantly.
But there’s a question we need to sit with:
What kind of literature review are we doing when we rely on summaries like this?
The shift from engagement to extraction
A literature review is not just a process of gathering information.
It is a process of positioning yourself in relation to knowledge.
It involves noticing tensions between authors, identifying gaps and contradictions, tracing how ideas have developed over time, and deciding what matters — and why.
This is not just technical work. It is interpretive work.
When we move too quickly into AI-generated summaries, something subtle begins to shift.
We move from reading, questioning, and interpreting
to receiving, scanning, and accepting.
The literature becomes something to process, rather than something to engage with.
When summaries start to replace understanding
AI summaries are often fluent, structured, and persuasive.
They give the impression that the material has been “covered.”
But fluency is not the same as understanding.
And summaries — by design — flatten complexity.
They tend to:
- Smooth over disagreements
- Prioritise dominant narratives
- Remove uncertainty and ambiguity
- Present conclusions without the analytic journey that led to them
Over time, this can create a false sense of familiarity with a body of work.
You recognise the themes.
You’ve seen the key points.
But you haven’t actually worked through the literature — followed the arguments, sat with the tensions, or made sense of how the ideas connect.
The risk isn’t just error; it’s also shallow positioning
A lot of discussion around AI focuses on accuracy:
- Hallucinated citations
- Incorrect claims
- Unreliable sources
These are important concerns.
But there is a quieter risk that often goes unnoticed:
Shallow engagement leads to shallow positioning
If your understanding of the literature is based on summaries:
- Your arguments are more likely to be generic
- Your synthesis may lack depth
- Your contribution may feel less distinct
And this is often where issues surface: not in obvious errors, but in feedback like:
- “This needs stronger conceptual clarity”
- “The argument is underdeveloped”
- “The positioning could be more critical”
The issue isn’t AI itself
AI doesn’t change what a literature review is, but it makes it easier to skip the work that actually produces understanding.
This does not mean AI has no place in literature work.
It can be extremely useful for:
- Initial scoping
- Locating papers
- Organising material
- Identifying broad patterns
But the role it plays matters.
Because a literature review is not just about what is in the literature.
It’s about:
- How you interpret it
- How you position yourself within it
- And how you build an argument from it
AI can support parts of that process.
But it cannot do the interpretive work for you.
A different way to think about it
Instead of asking:
“Can AI summarise this for me?”
It may be more useful to ask:
“Where in this process do I need to be doing the thinking?”
That might mean:
- Reading key papers in full before turning to summaries
- Using AI to compare interpretations, not replace them
- Treating outputs as starting points for questioning, not endpoints
The goal is not to reject the tool.
It is to remain critically engaged in the work that actually matters.
Final thought
AI can make literature reviews faster.
But speed is not the goal of qualitative research.
Understanding is.
And understanding takes time, attention, and interpretation.
So the question isn’t whether to use AI.
It’s:
How do we use it without losing the depth of engagement that gives our work its strength?
⭐ I’ll be exploring this in a free session on how AI fits into the qualitative research workflow — and how to use it without losing depth in your work.
You’re very welcome to join me.
Free webinar
Using AI in Qualitative Research: What you gain and what you risk
Tuesday 21 April 2026 | 12–1pm AEST
Live on Zoom
