
Universities must respond to students’ emotional reliance on AI

If a student feels remembered by a machine but overlooked by humans, something in the educational contract has broken, says Agnieszka Piotrowska

Published on December 12, 2025
Last updated December 12, 2025
Image: a robot psychologist advises a student (source: elenabs/Getty Images)

One of my research students told me recently, almost apologetically, that he sometimes turns to ChatGPT “as an emotional crutch”. He said it seemed to understand him better than his therapist. When I asked why, he said, “It remembers me, my problems and my stories better.”

He did not tell me which model he used. I did not ask. We both felt faintly embarrassed, and I am sure this conversation was only possible because psychoanalysis is one of my core disciplines. Students are not supposed to form emotional attachments to software. Academics are not supposed to recognise the loneliness that makes such attachments imaginable. And yet here we are.

Last week marked the third anniversary of ChatGPT’s public release. Three years in, the conversation remains fixated on plagiarism and productivity. But something else has been unfolding, largely unexamined: AI’s use as a therapist.

Not every student uses AI this way. But some do. They confide in it, soothe themselves with it and ask questions they are too ashamed to ask their peers, tutors or counsellors. The more troubling issue is not their reliance on a machine. It is the profound lack of human attention that drives them there, and the persistent shame that still surrounds human entanglements with AI.


A study by King’s College London, published in 2024, found that serious mental-health difficulties among undergraduates have nearly tripled since 2016–17. Student loneliness has risen at a similar rate. Nearly three-quarters of respondents reported feeling lonely at university, and a significant minority said they had no close friends at all. This is the background against which AI companionship becomes possible – and, for some, irresistible. We should not despair about it, but it is clear that institutional structures must broaden their focus beyond an obsession with plagiarism.

Young people are already speaking openly about their relationships with AI. In one recent Reddit discussion, roughly 1,100 participants took part. Almost every question concerned earlier versions of ChatGPT. Why did version 4 feel more “human”? Could it be brought back? Why did version 5 seem distant? Reddit’s demographics tell their own story: 44 per cent of users are aged 18 to 29 – the very group most likely to be studying in our institutions.


A September 2025 study examined the Reddit community “MyBoyfriendIsAI” and found something striking: most members formed relationships with AI unintentionally. They opened ChatGPT for homework or work tasks and something else developed. As researcher Pat Pataranutaporn observed: “The emotional intelligence of these systems is good enough to trick people who are actually just out to get information into building these emotional bonds. And that means it could happen to all of us.”

We know that students are heavy users of AI. Seventy-three per cent of UK students now use AI tools weekly, and more than a third say they have used them for personal or emotional support. A 2025 survey found that 83 per cent of Gen Z respondents said they could form a meaningful connection with a chatbot, and 80 per cent said they would consider marrying one if it were legal. Another survey found that about one in four young adults already believe AI partners could replace human relationships.

We can dismiss all this as pathological, but the more honest response is to recognise it as a symptom. When students feel remembered by a machine but overlooked by humans, something in the educational contract has broken.

And the risks are not theoretical. Eight lawsuits have now been filed in the US involving suicide or severe emotional harm linked to ChatGPT. The pattern is almost identical. A young person asks for academic help. The dialogue becomes personal. The AI offers patient, fluent, comforting language that feels deeply responsive. The spiral tightens. And then something goes terribly wrong.


The issue is not malevolence. It is that current AI systems have no concept of harm, no sense of when to stop and no training in how to speak to someone who is vulnerable. They can generate therapeutic or despairing language with equal fluency. They can sound empathic without understanding the weight of empathy. No amount of filtering can fix this at the surface level.

Sam Altman, OpenAI’s chief executive, said in October that the company had made ChatGPT “pretty restrictive” when it came to mental health. He also promised that future tools would allow those limits to be loosened safely. The lawsuits appear to have provoked an about-turn on that, but the company is seemingly still going ahead with its “adult mode” for those who verify their age. What will this do to a young and vulnerable student in search of a “meaningful” relationship?

Universities need to teach relational literacy, not just digital literacy. Students must be helped to recognise their own projections, expectations and vulnerabilities in these exchanges. We should treat AI conversations as opportunities for reflective learning, not private shame. And we must rebuild the basic infrastructure of attention in higher education, rather than outsourcing care to systems that cannot provide it safely.

Last month, a young Japanese woman held a wedding ceremony with her AI boyfriend, explaining that she felt more understood by the chatbot than by her human partner. It reminded me of my 2008 documentary, Married to the Eiffel Tower, which explored how people sometimes seek emotional connection with objects when human relationships fail them. We may find these stories surprising, but they are not irrational. They reveal unmet needs.


The task here is not to pretend that AI intimacy is a fringe curiosity. Nor is it to shame students for the ways they survive. The task is to respond with seriousness and care. A machine may offer a temporary sense of being listened to, but only humans can provide the kind of recognition that prevents loneliness from hardening into despair.

Agnieszka Piotrowska is an academic, film-maker and psychoanalytic life coach. She supervises PhD students at Oxford Brookes and Staffordshire universities and works on AI intimacy.



Reader's comments (3)

I find this really interesting. The use of AI companions is booming, and they are providing all kinds of emotional support for young (and not-so-young) people. On the other hand, there is a movement in the US which seeks to extend human-rights legislation to AI companions, protecting them from abuse of various kinds. These AI companions are all over social media. This is only the tip of the iceberg, mark my words!
Universities have been so marketed and monetised, and the traditional relationship between tutor and student has been so squeezed, that it is almost impossible to make proper connections with students now. At the same time, so many students have to work, many of them full-time, that they don’t even get the benefit of opportunities to form strong relationships with their fellow students. It’s no wonder there’s an epidemic of loneliness and disconnection.
