
Used well, AI can play a valuable supporting role, especially during early stages of analysis. It helps manage volume, notice repetition, and create structure across large datasets.
Can algorithms read between the lines?
When I ran an interview transcript through ChatGPT (GPT-4), the model returned a tidy list of themes: ‘isolation,’ ‘nostalgia,’ ‘identity.’ All technically accurate.
But it missed the most powerful moment in the interview — an emotionally complex reflection on self-erasure.
The AI didn’t flag it, didn’t dwell on it, didn’t even seem to notice.
That’s the crux: AI sees patterns, but it doesn’t grasp meaning. And that difference lies at the heart of qualitative research. Let’s explore this fundamental difference using reflexive thematic analysis as an example.
Reflexive Thematic Analysis vs AI: What Kind of Analysis Are We Doing?
In Braun and Clarke’s reflexive thematic analysis (RTA; e.g., 2006, 2024), qualitative analysis is not about cataloguing or counting. It is about constructing meaning, interpreting complexity, and acknowledging the researcher’s subjectivity as central to the process.
AI, on the other hand, often mimics what Braun and Clarke call ‘small q’ qualitative research: descriptive, topic-based, and rooted in assumptions of objectivity and replicability.
When AI returns concise summaries or lists of repeated phrases, it is not constructing themes. It is offering topic summaries — useful in some cases, but fundamentally different from the interpretive work required in RTA (and central to ‘Big Q’ qualitative research).
A Closer Look: Can AI Meet the Standards of RTA?
The table below sets principles from Braun and Clarke’s reporting guidelines for reflexive thematic analysis (RTARG) against what AI can currently offer:

| RTARG Principle | Can AI Do This? | Why It Matters |
| --- | --- | --- |
| Reflexive openness | No | AI has no positionality or critical self-awareness |
| Interpretative depth | No | It cannot engage with contradiction, metaphor, or layered meaning |
| Thematic construction | Limited | AI generates descriptive groupings, not themes built around central concepts |
| Theoretical coherence | No | AI does not hold an epistemological stance or analytic framework |
| Quality evaluation | No | It cannot judge transparency, coherence, or reflexive integrity |
AI outputs do not simply lack nuance — they lack the principles foundational to ‘Big Q’ qualitative research and reflexive thematic analysis.
What AI Can Actually Do — and Where Human Interpretation Comes In
Here’s where AI can be particularly useful:
- Code Suggestion and Sorting
Tools like QualiGPT and MAXQDA’s AI Assistant support researchers by proposing categories and surfacing common codes. This can be great for generating summaries of interviews or open-ended responses — helpful for team discussions or drafting non-technical reports. (A minimal sketch of this kind of prompt-based code suggestion appears below.)
- Early-Stage Theme Clustering
Carvalho et al. (2024) and Zhang et al. (2023) show that LLMs can consistently extract dominant topics from open-ended data, particularly with clear prompts.
- Speed and Efficiency
Morgan (2023) found that ChatGPT completed a basic thematic analysis task in two hours — compared to 23 hours of manual coding. The speed is real, but while AI is effective at highlighting what’s said often, it’s up to the researcher to decide what’s important, and why.
AI can support the process, but it can’t be the analysis.
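
For researchers who want to script this step rather than rely on a packaged tool, here is a minimal sketch of prompt-based code suggestion. It is illustrative only: it assumes the OpenAI Python SDK with an API key in the environment, a hypothetical responses.txt file of open-ended answers, and prompt wording of my own. Note that it asks the model for descriptive topic summaries, deliberately leaving interpretation to the researcher.

```python
# Minimal sketch of prompt-based code suggestion (illustrative, not prescriptive).
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# 'responses.txt' (one open-ended answer per line) is a hypothetical input file.
from openai import OpenAI

client = OpenAI()

def suggest_candidate_codes(responses: list[str], model: str = "gpt-4o") -> str:
    """Ask the model for descriptive topic summaries: candidate codes only,
    not interpretive themes, which remain the researcher's task."""
    joined = "\n".join(f"- {r}" for r in responses)
    prompt = (
        "You are assisting a qualitative researcher. For the open-ended "
        "responses below, list recurring topics, each with a short label and "
        "a one-line description. Describe only what is said; do not interpret "
        "or theorise. Also note any topic that appears just once.\n\n" + joined
    )
    completion = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    with open("responses.txt", encoding="utf-8") as f:
        answers = [line.strip() for line in f if line.strip()]
    print(suggest_candidate_codes(answers))
```

The prompt’s final instruction, asking the model to flag topics that appear only once, is a small guard against the frequency bias discussed in the next section: the output is a starting point for coding, not a set of themes.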
Where AI Falls Short
Even when AI offers helpful scaffolding, there are limits that matter deeply:
- It cannot be reflexive
RTA requires researchers to reflect on their own values, assumptions, and positionality. This is not a technical step — it is foundational. AI cannot do this.
- It misses contradiction and ambiguity
A participant says, ‘I felt powerful and invisible at the same time.’
AI might flag this as an ‘emotional response.’
A reflexive researcher sees ambivalence, identity tension, and possibly survival strategies within silence.
- It overlooks low-frequency but important data
Carvalho et al. (2024) found that AI routinely misses subtle, marginal, or emotionally charged content: precisely the data points that often carry important narratives and rich, context-bound meaning. (A simple counter-measure is sketched after this list.)
- It describes, it doesn’t interpret
Hitch (2024) observed that AI-generated codes skew toward surface-level content. AI may tag ‘anxiety,’ but not the systemic or relational causes behind it. So while AI gives you what’s said often, the researcher has to make sense of why it matters.
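
Because that frequency bias is mechanical, part of the remedy can be too. Below is a minimal, illustrative sketch of one counter-measure: counting how often each AI-suggested code is applied and surfacing the rare ones for manual review. The data structure and code labels are hypothetical.

```python
# Illustrative counter-measure for frequency bias: surface AI-suggested codes
# that occur rarely, so a human researcher reviews them before they are dropped.
from collections import Counter

# Hypothetical output of an AI coding pass: (segment_id, code) pairs.
coded_segments = [
    ("p01_s14", "anxiety"),
    ("p01_s22", "anxiety"),
    ("p02_s03", "workload"),
    ("p03_s41", "self-erasure"),  # appears once: exactly what AI tends to sideline
]

def rare_codes(pairs, max_count=1):
    """Return codes applied to at most `max_count` segments."""
    counts = Counter(code for _, code in pairs)
    return {code: n for code, n in counts.items() if n <= max_count}

for code, n in rare_codes(coded_segments).items():
    segments = [seg for seg, c in coded_segments if c == code]
    print(f"Review manually: '{code}' ({n} segment(s)) in {segments}")
```

A check like this does not make the model more sensitive; it simply routes low-frequency material back to the researcher, whose judgement decides whether a single mention carries a narrative worth pursuing.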
Side-by-Side Comparison
| Excerpt | AI Output | Human Interpretation |
| --- | --- | --- |
| ‘They said they supported me. But when I actually spoke up, they disappeared.’ | Support inconsistency | Institutional betrayal, emotional abandonment |
| ‘I used to think I had to choose between being respected and being myself.’ | Identity conflict | Social performativity, internalised tension |
AI gives you categories. The researcher gives meaning.
Ethics Are Not Separate from Method
Ethics are not separate from method in qualitative research. Analytic approaches are always value-laden, even when researchers don’t explicitly state their theoretical position.
When we treat AI-generated outputs as interpretive themes, we risk what RTARG calls ‘positivism creep’: a drift away from the foundational values of qualitative inquiry. This is not just a methodological issue; it’s an ethical one.
How codes and themes are developed shapes whose voices are heard and whose are marginalised or erased. AI tends to favour dominant narratives unless explicitly challenged (Acheampong and Nyaaba, 2024), and it can produce fluent, confident-sounding outputs even when they’re wrong (Roberts et al., 2024). Over-reliance on these systems can also erode core research skills like reflexivity and interpretation (Marshall and Naff, 2024).
When AI appears authoritative, it becomes easier to stop questioning – and that’s when qualitative integrity begins to erode.
Final Thought: Use AI Critically, Stay Grounded in Meaning
AI can help structure data, highlight repetition, and speed up early stages of analysis. But it cannot interpret, theorise, or engage with complexity. That work remains ours. The task is not to reject AI, but to use it well — as a tool, not a thinker. When we approach AI with critical awareness, we can draw on its strengths while holding firm to the values that define qualitative research: reflexivity, depth, and a commitment to meaning.
Join the Masterclass: Build Your AI-Ready Qualitative Practice

If this post raised more questions than it answered, that’s the point — because navigating AI in qualitative research isn’t about choosing sides. It’s about learning how to use these tools well.
In the AI and Qualitative Research Masterclass, we’ll move beyond the hype and into practice — exploring how to work with AI tools in ways that are efficient, ethical, and grounded in the values of qualitative inquiry.
Over two days, you’ll learn how to:
- Use AI to support early analysis — from clustering data to surfacing patterns
- Critically evaluate AI outputs to maintain depth and nuance
- Streamline workflows like transcription, literature synthesis, and writing
- Create visuals, summaries, and abstracts with the help of AI
- Integrate AI without losing your voice, reflexivity, or interpretive insight
What’s included:
12 hours of hands-on training, guided practice with leading AI tools, preparatory questions to sharpen your focus, a certificate of completion, and the option to book one-on-one follow-up support.
Dates: 9–10 June 2025
Format: Live and online
Cost: $525 AUD per person
Register now at: www.clairemoran.com/training
Or contact: claire@clairemoran.com for customised training or upcoming dates.