
Most conversations about AI in research focus on accuracy.
Are the citations real?
Are the summaries correct?
Can the tool be trusted?
These are valid concerns.
But they’re not the most significant risk.
Something quieter is happening, and it is far easier to miss.
The confidence trap
AI outputs are often fluent, structured, and convincing.
They:
- suggest themes
- generate summaries
- cluster patterns
- draft interpretations
And they do it quickly.
The result is work that looks coherent.
But coherence is not the same as insight.
And when something sounds right, it's much easier to accept without questioning how it was produced.
A subtle shift in the work
Qualitative research is not just about identifying patterns.
It is about interpreting meaning.
It involves:
- sitting with ambiguity
- working through partial understandings
- questioning your own assumptions
- making sense of someone else’s interpretation of the world
This process is not linear.
It is slow, iterative, and often uncomfortable.
And that discomfort is not a problem.
It is where the analysis happens.
When support becomes substitution
AI can be useful in many parts of the research process.
But the line between support and substitution is not always obvious.
Substitution can look like:
- accepting suggested codes before engaging with the data yourself
- using summaries as a substitute for reading
- incorporating AI-generated interpretations into your analysis without fully interrogating them
None of this is necessarily deliberate.
But over time, something shifts.
You begin to rely on the output.
You begin to trust it.
And gradually, the locus of thinking starts to move.
Cognitive sovereignty
This is where the idea of cognitive sovereignty becomes important.
At its core, it is a simple question:
Who is doing the thinking?
In qualitative research, the researcher is not separate from the analysis.
Your interpretation, your judgment, your reflexivity — these are not optional extras.
They are central to the work.
When we start outsourcing parts of that process, even in small ways, we risk weakening our connection to the analysis itself.
And with that, our confidence in our own thinking.
The risk isn’t just technical. It’s epistemic.
This isn’t just about using a tool incorrectly.
It’s about a potential shift in how we understand what counts as analysis.
AI is designed to:
- recognise patterns
- generate probable responses
- reflect dominant structures in data
Qualitative research, on the other hand, is concerned with:
- meaning
- context
- interpretation
- multiple possible realities
When these two logics are blurred, it can quietly reshape what we take to be “rigorous” work.
Not because we’ve decided to change our standards.
But because the tools we use begin to influence them.
Staying with the work
None of this means rejecting AI.
But it does mean being more deliberate.
It means noticing:
- when something feels too easy
- when an answer arrives too quickly
- when the work no longer feels like thinking
Because those moments matter.
They are often the point where support becomes substitution.
Final thought
AI can support qualitative research.
But it cannot replace the interpretive work that defines it.
The challenge is not simply learning how to use these tools.
It is learning how to use them without losing the thinking that makes the work meaningful.
⭐ I’ll be exploring this in a free session on how AI fits into the qualitative research workflow and how to use it without losing depth in your work.
You’re very welcome to join me.
Free webinar
Using AI in Qualitative Research: What you gain and what you risk
Tuesday 21 April 2026 | 12–1pm AEST
Live on Zoom
