Mark Sweney 

TikTok to strengthen age-verification technology across EU

Move comes as calls for Australia-style social media ban for under-16s grow around world

TikTok’s new system analyses profile information, posted videos and behavioural signals to predict whether an account may belong to an under-13 user. Photograph: Dado Ruvić/Reuters

TikTok will begin to roll out new age-verification technology across the EU in the coming weeks, as calls grow for an Australia-style social media ban for under-16s in countries including the UK.

ByteDance-owned TikTok, and other major platforms popular with young people such as YouTube, are coming under increasing pressure to better identify and remove accounts belonging to children.

The system, which has been quietly piloted in the EU over the past year, analyses profile information, posted videos and behavioural signals to predict whether an account may belong to a user under the age of 13.

As well as analysing information the account holder provides about themselves, the technology looks at behaviour such as the videos a user publishes, and “other on-platform behaviour”.

TikTok said accounts flagged by the system would be reviewed by specialist moderators rather than face an automatic ban, and might then be removed.

It also said users would have an opportunity to appeal against the removal of an account if an error had been made.

Options for age identification offered by TikTok during the appeal process include facial age estimation by the verification company Yoti, credit card authorisation or government-approved identification.

The European pilot led to the removal of thousands of accounts.

The rollout of the system comes as European authorities scrutinise how platforms verify users’ ages under data protection rules.

TikTok said its system complied with data and privacy laws. “The prediction of the likelihood that someone is under the age of 13 is not used for purposes other than to decide whether to send an account to human moderators and to monitor and improve the technology,” the company said in a blogpost.

“By adopting this approach, we are able to deliver safety for teens in a privacy-preserving manner. We take our responsibility to protect our community, and teens in particular, incredibly seriously.”

It said other features to protect younger users on its service included not allowing under-16s to direct-message, while under-18s have a 60-minute screen-time limit and do not receive notifications “after bed time”.

Meta, the parent company of Facebook and Instagram, also uses Yoti to verify users’ ages on Facebook.

In December, Australia implemented a social media ban for people under the age of 16. On Thursday, the country’s eSafety commissioner revealed that more than 4.7m accounts had been removed across 10 platforms – including YouTube, TikTok, Instagram, Snap and Facebook – since the ban was implemented on 10 December.

Earlier this week, Keir Starmer told MPs he was open to a social media ban for young people in the UK after becoming concerned about the amount of time children and teenagers were spending on their smartphones.

The prime minister told Labour MPs he had become alarmed at reports of five-year-olds spending hours in front of screens each day, as well as increasingly worried about the damage social media was doing to under-16s.

Starmer has previously opposed banning social media for children, believing such a move would be difficult to police and could push teenagers towards the dark web.

Earlier this month, Ellen Roome, whose 14-year-old son Jools Sweeney died after an online challenge went wrong, called for more rights for parents to access social media accounts of their children if they die.

The European parliament is pushing for age limits on social media, while Denmark wants to ban social media for those under 15.

TikTok told Reuters the new technology was built specifically to comply with the EU’s regulatory requirements. The company has worked with Ireland’s Data Protection Commission, its lead EU privacy regulator, while developing the system.

In 2023, a Guardian investigation found that moderators were being told to allow under-13s to stay on the platform if they claimed their parents were overseeing their accounts.
