Editorial 

The Guardian view on granting legal rights to AI: humans should not give house-room to an ill-advised debate

Anthropomorphising tech helps Silicon Valley shares to soar, but our empathy should be directed to worthier causes
  
  

Alicia Vikander as Ava in Ex Machina. ‘Anthropomorphising AI may not be such a clever idea.’ Photograph: TCD/Prod.DB/Universal/DNA Films/Film4/Alamy

Most readers of Kazuo Ishiguro’s 2021 novel Klara and the Sun will have been moved by the portrait of its eponymous AI narrator. As a solar-powered “artificial friend”, bought as a companion and potential substitute for a sick teenage girl, Klara fulfils her duties with a loving loyalty that makes it impossible to think of her as a mere piece of tech.

Brilliant, thought-provoking fiction. But back in the real world, anthropomorphising AI may not be such a clever idea. During the summer, Anthropic, a leading tech company, announced that, in the interests of chatbot welfare, it was allowing its Claude Opus 4 model to end supposedly “distressing” conversations with users. More broadly, amid explosive growth in AI capabilities, speculation is emerging over whether future Klaras may even deserve to be accorded legal rights like human beings.

The premise of such discussions is both hypothetical and confused. In the synthetic text produced by large language models (LLMs), there is nothing resembling the human minds that created them in the first place. But discussion about the theoretical possibility of “sentient” AI also risks being a dangerous distraction.

Interviewed last week by the Guardian, one of the leading AI pioneers noted that advanced models are already showing signs of tending towards self-preservation in experimental settings. According to Prof Yoshua Bengio, “We need to make sure we can rely on technical and societal guardrails to control them, including the ability to shut them down if needed.” Tendencies to anthropomorphise, he added, were not conducive to good decision-making in areas such as this.

A sector that relies on shock and awe to drive a stock market boom will not care about that. This week in Las Vegas, Nvidia’s CEO, Jensen Huang, engaged in breathless public dialogue with two robots. Both cooed enthusiastically at Mr Huang’s plans for their enhanced future in an AI golden age to come.

This kind of Silicon Valley showmanship might delight investors, and keep Nvidia’s market capitalisation at about the $5tn mark. But it diverts attention from the serious job of protecting human freedoms and dignity from digital harm, and soberly working out what AI can safely and profitably deliver. The repulsive generation of fake sexualised images of women and underage girls by Elon Musk’s Grok offers yet another reminder of the urgency of that task.

In such a context, avant-garde talk about the future granting of “rights” to sentient AI seems somewhat beside the point. It is certainly true that as LLMs become ever more embedded in our everyday lives, there is sociological work to be done on how we interact with them. Evidence of people forming emotional attachments to AIs is a new dimension of experience, and needs to be reflected upon. But it is important to remember that, other than as human creations, Siri and Alexa don’t exist.

The opposite is the case, of course, when it comes to a girl driven to despair by algorithm-driven content on social media; or to Ukrainians subjected to the horrors unleashed by AI-enabled Russian drones. The digital revolution is undoubtedly transforming relationships between human beings and machines, for better and for worse. But to echo the title of a book by Friedrich Nietzsche – a philosopher much admired by some of the tech avant garde – the new problems it has created are “human, all too human”, and should be understood as such.

  • Do you have an opinion on the issues raised in this article? If you would like to submit a response of up to 300 words by email to be considered for publication in our letters section, please write to us.

 
