Generative artificial intelligence is reshaping how students learn, write and study – but according to CQUniversity’s Head of Educational Neuroscience, Professor Ken Purnell, the real risk is not the technology itself. It’s how uncritically we use it.
In a major new paper on chatbots and learning published this week, Professor Purnell argues that generative AI is neither magic nor the enemy – and that the way forward is neither blind enthusiasm nor outright bans, but clear education about how AI works, where it fails, and why humans must stay in charge of thinking.
“Chatbots don’t think. They don’t understand. And they don’t know what’s true,” Professor Purnell said.
“They are powerful tools for generating language, but they should never replace human judgment. When we outsource our thinking to a machine, learning suffers.”
Professor Purnell explains that today’s chatbots are trained on vast amounts of text, not real-world experience. While they can sound confident and persuasive, they don’t actually understand what they are saying.
“Fluency is not the same as accuracy,” he said.
“A chatbot can confidently give you an answer that is completely wrong – and unless you already know the topic, you might not realise it.”
This problem, known as ‘AI hallucination’, is particularly risky in education. Research shows some AI tools fabricate references, invent sources and present false information as fact, exposing students and institutions to serious academic integrity risks.
Beyond accuracy, Professor Purnell says there is a deeper concern: learning itself.
Drawing on neuroscience research, his paper highlights evidence that when students rely on AI to generate essays or arguments, their brains engage less deeply.
“Learning happens when students analyse, connect ideas and build arguments themselves,” he said.
“When AI does that work for them, those mental pathways don’t activate in the same way. The result is weaker understanding and poorer memory.”
In one recent brain-imaging study, students who used ChatGPT to write essays struggled to recall their own work just minutes later.
“That should concern every educator,” Professor Purnell said.
The paper is not anti-AI. In fact, Professor Purnell says chatbots can be valuable learning tools when used intentionally and transparently.
Used well, he says, AI can help students produce first drafts and support their reasoning – provided the thinking remains their own.
“The key principle is simple,” Professor Purnell said. “AI should amplify thinking, not replace it.”
He argues the most urgent task for schools and universities is AI literacy – teaching students how to use AI responsibly, critically and ethically. This includes being transparent about AI use, independently checking claims and sources, treating AI output as a first draft, and using AI to support reasoning rather than outsource it.
“Students will encounter AI throughout their working lives,” he said.
“Our responsibility is to teach them how to use it well – not to pretend it doesn’t exist, and not to let it do their thinking for them.”
Professor Purnell also warns against both panic and hype, noting that large-scale AI adoption carries hidden costs, including environmental impact, data privacy concerns and long-term financial risks.
“The smartest path forward is measured, evidence-based and human-centred,” he said.
“Education must stay focused on what machines cannot do – think critically, make ethical judgments, and learn through real experience.”
His message is clear: “Don’t outsource your thinking to a machine. Use AI to help you think better, check harder, and decide more wisely. Human judgment is still irreplaceable.”