Arwa Mahdawi 

AI toys are suddenly everywhere – but I suggest you don’t give them to your children

Earlier this year my four-year-old tried out an AI soft toy for a few days. New research indicates I was right to be creeped out, writes Arwa Mahdawi
  
  

Grem, which uses OpenAI’s technology to have personalised conversations with your child. Photograph: Hannah Yoon/The Guardian

If you’re thinking about buying your kid a new-fangled AI-powered toy for the holidays, may I kindly suggest you don’t? I’m sure most Guardian readers would be horrified by the very idea anyway, but it’s going to be hard to avoid the things soon. The market is booming and, according to the MIT Technology Review, there are already more than 1,500 AI toy companies in China. With the likes of Mattel, which owns the Barbie brand, announcing a “strategic collaboration” with OpenAI, you can bet more of the uncanny objects will be in a department store near you soon.

Let me offer myself up as a cautionary tale for anyone who might be intrigued by the idea of a cuddly chatbot. Back in September I let my four-year-old use an AI-powered soft toy called Grem for a few days. Developed by a company called Curio in collaboration with the musician Grimes, it uses OpenAI’s technology to have personalised conversations and play interactive games with your child.

Before you question my parental judgment, I should explain that I didn’t get Grem because I wanted it. Rather, my editor asked if I wanted to try it out for a piece and I thought: sure, how bad could it really be? (I will not be taking further questions about my judgment at this time.)

After the novelty wore off (about 24 hours), my daughter lost interest in Grem. But that was more than enough time for me to get creeped out by the thing, which kept telling my daughter how much it loved her. Other AI toys have done far worse. Recent research by a network of consumer advocacy nonprofits called the Public Interest Research Group identified several popular toys (not Grem) that told kids where to find a knife or how to light a match. Some also reportedly gave inappropriate answers about sex and drugs. One toy offered descriptions of kinks and suggested bondage and role play as ways to enhance a relationship.

There is also evidence that this new and unregulated technology could harvest your personal data; it has been shown to “hallucinate” (give misleading or wrong information) and could contribute to or exacerbate symptoms of psychosis.

Grem has now been offloaded to a philosophy professor friend of mine, and I’ll be avoiding all future AI-enabled toys until some guardrails are developed. Which, let’s be honest, will probably be never. Best to keep developing technology away from developing brains.

• Arwa Mahdawi is a Guardian columnist

 
