Editorial 

The Guardian view on online child protection: the web needs more health and safety

Editorial: New evidence of Meta’s failures to prioritise safety should be a wake-up call to policymakers
  
  

‘The UK will have some of the strongest regulation in the world when the Online Safety Act comes fully into force. But that hasn’t happened yet.’ Photograph: Peter Byrne/PA

When the history of social media is written, there must be a chapter for the whistleblowers who have played a vital role in enabling public scrutiny. In 2021 Frances Haugen, who had worked at Facebook (now Meta), leaked internal research revealing Instagram’s harmful effects on children. Last year another former employee, Arturo Béjar, gave evidence to Congress about the sexual harassment his daughter had faced on the platform. The engineer’s work features in a lawsuit against the company brought by the attorney general of New Mexico, Raúl Torrez.

This follows last year’s investigation by the Guardian into online sex trafficking. It found that Meta is failing to report or even detect the extent of the online pimping of children. The company’s own documents show that about 100,000 children, mainly girls, are sexually harassed each day on Facebook and Instagram. Experts warn that job cuts at Meta and other businesses, particularly in moderation and safety teams, will make these problems even harder to manage. Last week Mr Béjar told the Guardian that Meta has refused to learn from the death of the British teenager Molly Russell, whose decision to take her own life in 2017 was influenced by suicide and self‑harm content she had viewed.

Mr Torrez accuses Meta of enabling adults to message and groom children. Other evidence recently made public includes complaints from advertisers about potentially illegal content. Meta denies the lawsuit’s claim that it is a marketplace for child predators. But given the volume of evidence of online harms of various sorts – harms which coexist with the technology’s benefits – it is hard to deny that the regulatory framework of the past two decades has been disastrously lax. Key examples include the decision in 1996, under section 230 of the Communications Decency Act, that online platforms should be free of the obligations of publishers – something that President Joe Biden has suggested should change – and the choice of 13 as the default minimum age for holding accounts. A separate problem in the US is that the law prevents police from investigating AI-generated tips about abusive content until the companies have reviewed them. As Beeban Kidron, the British campaigner for online safety, has often pointed out, this is an artificial environment that has not been engineered safely. AI-powered deepfakes are now adding new risks.

Thanks in part to a civil society lobby including the 5Rights Foundation, the NSPCC and Ian Russell, Molly’s father, the UK will have some of the strongest regulation in the world when the Online Safety Act comes fully into force. But that hasn’t happened yet, and Labour believes the law needs tightening further. Meanwhile, the all‑party parliamentary group on commercial sexual exploitation has called for tougher age verification and the creation of a new offence of supplying pornography online to children – an area in which the Spanish government is also considering legislation.

As in many areas of human activity (such as roads and the use of drugs), there are trade-offs between freedom and safety. But as whistleblowers and other critics point out, the extent to which children’s wellbeing has been neglected by internet businesses is unacceptable – particularly as evidence grows of the links between online and offline sexual abuse. Mr Béjar should be listened to. The vast wealth of Meta, Twitter/X, Google and the other internet giants should increase, not lessen, the responsibilities to society placed upon them.

 
