Kari Paul 

Reversals of content policies at Alphabet, Meta and X threaten democracy, warn experts

Media watchdog says layoffs affecting moderation at top social media firms create ‘toxic environment’ as 2024 elections approach
  
  

A close-up of a phone showing X, Facebook, TikTok, Telegram, Instagram, YouTube and Messenger apps in Brussels, Belgium, on 30 November 2023. Photograph: Jonathan Raa/NurPhoto/Shutterstock

As the 2024 elections approach, experts warn that top social media firms have rolled back vital safety policies and laid off moderation staff, creating “a toxic online environment” that is vulnerable to exploitation and threatens democracy.

A new study from the non-profit media watchdog Free Press documented 17 major platform policies affecting online content integrity that have been rolled back in the past year at Alphabet, Meta and Twitter/X. It also cited more than 40,000 layoffs at these companies as a threat to the health and safety of their platforms.

“The deluge of fake, hateful and violent content on social media platforms is about to go from very bad to worse in 2024 – in ways that will throw our democracy and lives into further chaos,” said Nora Benavidez, senior counsel at Free Press and author of the report. “Big tech executives like Elon Musk and Mark Zuckerberg have made reckless decisions to maximize their profits and minimize their accountability.”

Meta and Alphabet did not immediately respond to requests for comment. An email to X received an automated response.

The study cited a series of “startling” reversals of policies meant to safeguard elections, many of which were instituted after the January 6 Capitol riot that followed the 2020 presidential election.

Among them: YouTube in June reversed its election integrity policy, once again allowing content contesting the validity of the 2020 election to remain on the platform; X did the same in 2022. Meta, X and YouTube reinstated Donald Trump’s accounts despite his outsized role in supporting and fueling the January 6 insurrection online. Meta has failed to enforce its own policies requiring the labeling of political advertisements, and X announced in August that, in a reversal of its own policy, it would again allow political advertisements.

“Allowing these advertisements, given the horrifying state of content moderation and the lack of workers to review content, makes the platforms ripe for abuse and vulnerable to bad actors who will push the boundaries of what’s acceptable,” Benavidez said.

The report also questions how the huge number of layoffs in 2023 will affect the spread of hate speech and misinformation online. Meta laid off more than 20,000 people in 2023, including a team that had built a fact-checking tool to detect and eliminate misinformation. It also reportedly slashed approximately 200 content moderators from its ranks in early January and eliminated more than 100 positions related to trust, integrity and responsibility later in the year.

The “greatest failure” among the platforms examined in the past year, Benavidez said, has been Twitter – which Elon Musk rebranded as X after gutting its staff. The company laid off roughly 7,000 people, or 82% of its staff, after Musk took over the platform. Those layoffs included the entire trust and safety team as well as contractors in charge of content moderation, cuts that experts said quickly led to a spike in misinformation and hate speech.

“Twitter’s erosion has been both rapid and really pervasive across all teams, user experience and back-end functions,” Benavidez said.

Alphabet, YouTube’s parent company, laid off approximately 12,600 people in 2023 – but with “almost no transparency” about which teams were affected, the report said, it is unclear how moderation will be affected. Notably, the study said, TikTok was the only major platform that did not roll back any meaningful election integrity policies in the past year.

The Free Press study called on social media firms to reinstate policies combating misinformation and issued a set of recommendations to be implemented by February 2024. It noted that such policies are often announced just weeks or months before elections, when it is already too late.

Those recommendations include that social media firms reinvest in trust and safety teams, reinstate eliminated policies, and develop stronger transparency and disclosure policies. The report also demanded that politicians codify reforms that minimize the data companies collect and retain, and that ban algorithmic discrimination.

“The seeds of lies and of disenfranchisement are planted long before these interventions from platforms get implemented,” Benavidez said. “We are running out of time.”

 
