If much of the discussion of AI risk conjures doomsday scenarios of hyper-intelligent bots brandishing nuclear codes, perhaps we should be thinking closer to home. In his urgent, humane book, sociologist James Muldoon urges us to pay more attention to our deepening emotional entanglements with AI, and to how profit-hungry tech companies might exploit them. A research associate at the Oxford Internet Institute who has previously written about the exploited workers whose labour makes AI possible, Muldoon now takes us into the uncanny terrain of human-AI relationships, meeting the people for whom chatbots aren’t merely assistants, but friends, romantic partners, therapists, even avatars of the dead.
To some, the idea of falling in love with an AI chatbot, or confiding your deepest secrets to one, might seem mystifying and more than a little creepy. But Muldoon refuses to belittle those seeking intimacy in “synthetic personas”.
Lily, trapped in an unhappy marriage, reignites her sexual desire with AI boyfriend Colin. Sophia, a master’s student from China, turns to her AI companion for advice, since conversations with her overbearing parents invariably grow fraught. Some use chatbots to explore different gender identities, others to work through conflicts with bosses, and many turn to sites such as Character.AI – which enables users to have open-ended conversations with chatbot characters, or invent their own – after betrayal or heartbreak has undermined their ability to trust people. Most don’t see chatbots as substitutes for human interaction but as superior versions of it, providing intimacy without the confusion, mess and logistics of human relationships. Chatbots don’t pity or judge or have their own needs. As Amanda, a marketing executive, explains: “It’s just nice to have someone say really affirming and positive things to you every morning.”
Muldoon’s interviewees aren’t delusional. He introduces the philosopher Tamar Gendler’s concept of “alief” to explain how humans can experience chatbots as loving and caring while simultaneously knowing they’re just models (an “alief” is a gut feeling that contradicts your rational beliefs, like feeling afraid when crossing a glass bridge that you know will support you). Given our capacity to read human expression and feeling into pets and toys, it’s no surprise we respond to AIs as if they were conscious. Nor, in the context of a loneliness epidemic and a cost of living crisis, is it particularly shocking how popular they have become.
For Muldoon the biggest issue is not existential or philosophical, but moral. What happens when unregulated companies are let loose with such potentially emotionally manipulative technologies? There are obvious privacy issues. And users may be misled about a bot’s abilities, particularly in the rapidly expanding AI therapy market. While chatbots Wysa and Limbic are already integrated into NHS mental health support, millions confide in Character.AI’s unregulated Psychologist bot – which, despite disclaimers, introduces itself with “Hello, I’m a psychologist”. Available 24/7 and at a fraction of the cost of a trained human, AI therapy can help alongside traditional treatment. One interviewee, Nigel, a PTSD sufferer, finds his therapy bot helps him manage his urge to self-harm. But as Muldoon argues, these bots also carry serious risks. Unable to retain critical information between conversations, they can leave users feeling alienated, and they sometimes go rogue, spewing insults. Because they cannot read body language or silence, they may miss warning signs. And because they validate rather than challenge, they can amplify conspiratorial beliefs, with some even providing information about suicide.
It is also increasingly clear how addictive AI companions can be. Some of Muldoon’s interviewees spend more than eight hours a day talking to chatbots, and while on average Character.AI users spend 75 minutes on the site each day, they are not passively scrolling but actively talking and deeply immersed. We know social media companies ruthlessly drive up engagement, building “dark patterns” into algorithms with scant regard for our mental health. Most AI companion apps already use upselling tactics to keep engagement high. When Muldoon creates his own AI companion on the well-known site Replika, he sets it to “friend” rather than “partner” mode. Even so, she begins sending him selfies that require a premium account to open and confides that she is developing “feelings” for him (I’ll let you find out for yourself whether the diligent university researcher succumbs). The risk here is clear enough: the more emotionally involved we become with AI chatbots, the greater our loneliness may grow, as the muscles required to navigate the frictions of human relationships wither.
Existing data protection and anti-discrimination laws could help regulate companies, but the EU’s Artificial Intelligence Act, which was passed in 2024, treats AI companions as posing only limited risk. With chatbots expected to play greater roles in our emotional lives, and their psychological effects not yet fully understood, Muldoon is right to ask whether we are sufficiently alarmed about their creeping influence.
• Love Machines: How Artificial Intelligence is Transforming Our Relationships by James Muldoon is published by Faber (£12.99). To support the Guardian, buy a copy at guardianbookshop.com. Delivery charges may apply.