Teaching students to think with AI – not cheat with it
Expert commentary by CQUniversity’s Dr Mahmoud Elkhodr and Professor Ergun Gide
Universities across Australia are wrestling with the rapid rise of generative artificial intelligence. Some have tried detection software. Others have restricted its use. But at CQUniversity, researchers have taken a different approach: redesigning assessment so students learn to work with AI critically – and still prove they can perform independently.
Over the past two years, CQUniversity academics have developed and tested a structured teaching model called Structured AI-Guided Education (SAGE). It has been trialled across multiple campuses and disciplines, examined in peer-reviewed studies, and recognised by the national regulator as an example of good practice. The aim is simple – turn AI from a shortcut into a teaching tool without compromising the integrity of degrees.
The problem isn’t going away
Recent media coverage has painted a bleak picture of AI in higher education. Students admit to outsourcing assignments to tools like OpenAI’s ChatGPT. Some academics estimate that a large proportion of student work – more than 80 per cent – may now involve AI.
At the same time, universities have struggled to police its use. Thousands of students have been flagged for suspected AI misconduct at institutions such as Australian Catholic University – with many cases later dismissed. Curtin University has abandoned AI detection software over concerns about accuracy.
The message is clear. Detection tools are unreliable and banning AI is unrealistic. The technology is already embedded in workplaces across law, engineering, health, finance and cybersecurity. Employers increasingly expect graduates to know how to use it well.
So, the real question for universities is not whether students should use AI, but how to teach them to use it responsibly – and how to make sure they actually understand what they are doing.
What the regulator says
Australia’s higher education regulator, the Tertiary Education Quality and Standards Agency (TEQSA), has urged universities to rethink assessment rather than rely on detection.
In its 2025 guidance on assessment reform, TEQSA called for a balance between “open” tasks – where students can use AI to build skills – and “secure” tasks, where they must demonstrate their own knowledge under supervised conditions. Both are essential. One develops capability. The other verifies it.
Several universities have responded at a policy level. The University of Sydney introduced a “two-lane” assessment model separating secure and open tasks. Monash University launched a program-wide review of assessment in light of AI.
But policy alone does not show lecturers how to teach with AI in a way that builds genuine skill. That is where SAGE comes in.
Teaching students to think with AI
SAGE is built around a six-step learning cycle. Instead of warning students about AI misuse, it guides them through structured practice.
First, students use AI to generate a draft – an essay, risk assessment, policy document or similar task. Early on, lecturers provide the prompt so everyone starts at the same point. Later, students design their own.
But the AI output is never the final submission. It is the starting point.
Next, students evaluate what the AI produced against real standards – industry frameworks, professional guidelines or credible research. A cybersecurity student might compare an AI-generated policy against the NIST Cybersecurity Framework. A nursing student might check clinical advice against published health guidelines.
This is where critical thinking becomes practical. Students quickly discover that AI can sound confident while being incomplete, outdated or simply wrong.
In the third step, they refine their work. They correct errors, fill gaps and explain exactly what they changed and why. If the AI overlooked jurisdictional requirements, ignored cultural safety considerations or invented references, students must identify and fix those problems.
Then comes a shift in perspective. Students ask the AI to act as a critic – perhaps as a senior auditor or clinical supervisor – and challenge their revised work. The AI now questions them. This prompts a second round of analysis and improvement.
After that, students reflect on the entire process. What did the AI do well? Where did it fail? Over time, they begin to anticipate its weaknesses before they occur – an essential professional skill.
Finally, students must defend their competence in a short, supervised task. This might be a timed risk-scoring exercise, a live code walkthrough or a focused oral explanation. The format varies by discipline, but the principle is the same: students must demonstrate independent understanding.
This final step provides institutional assurance that learning is real and not outsourced.
Does it work?
SAGE has been validated through seven peer-reviewed studies involving more than 500 students in cybersecurity, data analytics, systems analysis and professional communication courses delivered across four CQUniversity campuses. Students demonstrated substantial improvements in performance (up to 87 per cent) and critical thinking (up by 78 per cent) within a single semester. Many reached a point where they could confidently accept, modify or reject AI suggestions based on evidence and professional judgement.
The framework has been accepted into TEQSA’s Generative AI Knowledge Hub as a sector-wide example of good practice.
Why this matters
The debate about AI in universities often focuses on cheating. But that frame misses a bigger issue.
A graduate who can type a prompt into ChatGPT has no special advantage. A graduate who can test AI output against professional standards, identify risks, detect false information and explain their reasoning under supervision is far more valuable.
AI is not replacing expertise. It is amplifying the need for judgement.
Universities now face a choice. They can keep trying to catch students using AI, or they can redesign teaching so students learn to use it wisely – and prove they can think for themselves.
CQUniversity’s SAGE framework is freely available for educators and institutions interested in adopting or adapting it at: https://doi.org/10.5281/zenodo.18383951.
Dr Elkhodr and Professor Gide are based at CQUniversity’s School of Engineering and Technology at its Sydney campus.
