
‘No defence’ against wearable AI in exams, researchers warn

AI glasses and other smart apparel may be impossible to keep out of exams, adding to universities’ woes about the future of assessments

Published on April 30, 2026
Last updated April 29, 2026

“Wearable” artificial intelligence (AI) is dissolving the last line of defence between traditional assessment and emerging technology, marking a “second wave” of the disruptive force that has confounded educators since the 2022 launch of ChatGPT.

Deakin University researchers say the new technology – AI-enabled glasses, earbuds, rings, pendants, watches and hearing aids – exacerbates the “wicked problem” of assessment in the age of large language models.

The gadgets, often indistinguishable from ordinary apparel, provide perceptual enhancements like live captioning, language translation and object recognition. Many also offer health monitoring services, helping users manage chronic conditions by tracking indicators like blood pressure, heart rate and gait.

Other functions include cognitive aids such as information retrieval, scene interpretation and conversation coaching – all potentially helpful for cheating. The devices can be controlled silently through gestures or muscle movement, leaving observers oblivious to their use and even their existence.


Writing in a journal article, the researchers say the new technology puts the final nail in the coffin of the already spurious argument that assessment can be rendered “immune” to generative AI through invigilated exams, supervised performances or interactive oral assessments.

This is because wearable AI is “transparent” in two senses: users see “through” it and there is no obvious “signature” of its use, while observers cannot tell when it is being activated.


While some universities have banned wearable AI from exams, the team argues that prohibition is futile. “A rule that cannot be enforced does not preserve evidential confidence,” the paper argues. “It produces an enforcement illusion.

“The issue is not simply that some students may break the rule. It is that the institution can continue to speak as if the assessment space is AI-free without possessing a reliable basis for claiming [so].”

Prohibition would also extend academics’ surveillance responsibilities from policing students’ work to policing students themselves, as they tried to determine whether someone’s glasses or religious pendant had AI capabilities.

Lead author Thomas Corbin said assessors’ suspicions were already aroused by students’ use of words like “therefore” or punctuation like em dashes. “What happens when a Muslim student walks in with a headdress and I need to be suspicious about what they might have underneath, or when a student [has] hearing aids…and I need to question if they’re really required?


“That’s a world that no teacher…wants to live in. So the solution is, let’s think about assessment in ways that don’t require separating the student from technology. If the problem is how [to] do exams when students might be wearing AI that we can’t detect, the solution might just be to stop doing exams.”

Corbin said separating students from the technology at the point of examination “might not make any sense anyway”, given its “educationally virtuous” applications. “AI glasses, and wearable AI more broadly, can overcome a lot of barriers to entry…into higher education [through functions] like translating for international students or overcoming hearing barriers.”

The technology presents other issues for academics, who may have to grapple with the idea that “a lot of their students are recording what they say at all times”. Teachers themselves may need to use AI glasses to record students’ oral exams and ensure consistency of grading – introducing “impossible” security and privacy issues for universities.

Corbin said some people in higher education considered wearable AI a “gimmick”. Others were unaware of the technology, even though Ray-Ban’s manufacturer had … of its Meta AI glasses last year. “These things are going to be bloody everywhere.”


He said that before 2022, experts had warned about the “incoming effects” of large language models on higher education, but few had listened. “Then we got the chaos of reacting to ChatGPT. We can’t survive [that] again. This is not science fiction any more. If this is the way things are trending, how do we get in front of it now? Otherwise, the horse is going to bolt.”

john.ross@timeshighereducation.com


Readers’ comments (3)

There is one safeguard you overlook: students. They might simply choose not to use it. The incidence of students trying to cheat in invigilated exams is low. On the one hand, severe penalties act as a deterrent. You can also take the view that most students are honest and see exams not just as hurdles to jump on the way to graduation but as challenges that test and strengthen their ability. Given the option of using "AI", many students might respond that they prefer to trust their own ability rather than risk a possibly flaky tool that produces outcomes that are not always reliable. Some will be drawn to it, but their weaknesses might trip them up elsewhere along the journey. Case in point: "open book" exams. Even when these are allowed, some students prefer to have the knowledge ready in their heads rather than waste time wading through text to find something. "AI" might prove to be a similar trap for the underprepared.
"Open book" testing has already turned out to be a snare and a delusion, for the reasons cited in the initial astute comment. AI has proven capable of such unpredictable acts of stupidity that I am surprised anyone wants to take the risk. As always, the cheaters cheat themselves.
There is no single answer to the problems of cheating with AI, and good solutions will vary across disciplines. For instance: oral assessment (good for assessing students' understanding, rapid and better quality feedback, efficient and not difficult to implement), signal jammers in assessment spaces (deployed to minimise 'tech escalation' wars), and severe penalties (effective if processes are clear and actions consistent). In disciplines where there is a professional framework and codes of practice, the disclosure of a finding of misconduct in a student reference is an additional deterrent. However, it is not just a student issue: the very large increase in papers submitted to journals in the last 3-4 years, and in papers rejected because of issues such as fabricated references, suggests that the problems are more widespread. It is time to take a hard look at the criteria and expectations for academic advancement and esteem, to focus clearly on high-quality work based on wider criteria than publication alone.
