Smart glasses make students better cheats – but poorer learners
A new wave of AI-enabled smart glasses is set to disrupt school and university exams, with one Australian expert warning the technology could render traditional testing ineffective – and expose a deeper flaw in how learning is assessed.
CQUniversity educational neuroscience expert Professor Ken Purnell says the devices, which look like ordinary eyewear, can quietly capture exam questions and return answers within seconds via audio or an in-lens display.
“Students can simply look at a question, have it processed by AI, and receive an answer almost instantly,” Professor Purnell said.
“From the outside, they are indistinguishable from prescription glasses – making them extremely difficult to detect, even in supervised exams.”
Devices such as Ray-Ban Meta smart glasses and similar technologies are already entering the mainstream, with reports of students overseas renting them for as little as a few dollars a day to receive live answers during exams.
The implications extend beyond education. A recent London court case revealed a witness receiving real-time coaching through smart glasses during cross-examination – highlighting the technology’s growing use in high-stakes environments.
Professor Purnell says this represents not just a new form of cheating, but a fundamental shift in what assessment can measure.
“This isn’t just about cheating though – it exposes a flaw in how we evaluate learning and what we think qualifications actually show,” he said.
“Learning relies on retrieval practice – the effortful process of recalling and applying knowledge. When that process is bypassed, memory consolidation is weakened and students shift from knowing to simply accessing information.”
He warns that recall-based assessments are becoming increasingly unreliable.
“If AI can answer a question instantly, it is no longer a strong measure of learning,” he said.
The rise of smart glasses also raises concerns for employers and industries that rely on qualifications as evidence of competence.
“We risk reaching a point where a test result no longer guarantees what a person actually knows or can do.”
However, Professor Purnell says the disruption presents an opportunity to rethink assessment.
“More resilient approaches include oral examinations, applied problem-solving, and authentic assessment tasks that require students to demonstrate how they use knowledge in real-world contexts.”
Professor Purnell is already applying this model in his teaching with working professionals across education, healthcare and aviation.
“Students apply their learning directly to their professional setting, complete structured reflective journals, and submit a video engaging a real audience – whether teachers, clinicians or pilots,” he said.
“This makes it far more difficult to outsource thinking and provides a clearer picture of genuine understanding.”
With AI technology advancing rapidly, Professor Purnell is urging educators, institutions and policymakers to act.
“This is no longer a future problem – it is already happening,” he said.
“We need to move beyond assessment models designed for a pre-AI world. The goal is not to test who can remember the most under exam conditions – it is to develop people who can think, apply knowledge, and adapt in complex environments.”
