We tend to think of censorship as the direct suppression of speech. We conjure images of mouths taped shut, courts seizing books and films, and journalists or activists thrown in jail to silence their voices. But what if, in a digital era governed by invisible yet highly consequential algorithms, censorship no longer revolved around the ability to speak, but rather around the visibility of content, its effective “reach”?
The launch of TikTok’s new US-specific algorithm underscores the urgency of this risk. This week, control over the platform’s operations has shifted to the TikTok USDS joint venture led by a consortium of investors that includes US big tech firms such as cloud-computing company Oracle, with the Chinese parent company ByteDance retaining a 19.9% stake. This arrangement is presented as a means of complying with US legislation introduced under former president Joe Biden, with the aim of protecting user data and preventing political interference from China. Yet many of TikTok’s 200 million US-based users now fear that Donald Trump and his allies may use algorithmic control to do precisely what China was accused of doing: interfering with political discussion by suppressing voices critical of Trump and his international allies.
Over recent days, US TikTok users have reported a number of suspicious malfunctions: videos on controversial topics, such as the killing of Alex Pretti by a federal agent, remaining stuck under review; newly posted videos registering surprisingly low view counts; and claims that it is impossible to post messages containing keywords such as “Epstein”. Citing these complaints, California’s governor, Gavin Newsom, has called for a review of the TikTok algorithm to determine whether it complies with state law. App store data, meanwhile, shows that many users are deleting the app and downloading alternatives.
While acknowledging disruptions, TikTok USDS has vehemently denied that they are politically motivated, blaming them on a power outage at an Oracle datacentre that triggered cascading system failures. It will, of course, take time to assess the effects of these changes comprehensively. The algorithm is proprietary, meaning insights into its workings can only be obtained through forms of “reverse engineering”, for instance by observing which types of content perform better or worse. As things stand, however, users have very good reasons to be worried.
Trump has long sought influence over the platform, which in recent years has become a key conduit for political propaganda, and which Trump himself credited with helping secure his surprise 2024 election victory. Many of the investors, moreover, have strong links to Trump and the global right. Oracle’s co-founder and chairman, Larry Ellison, is known for his staunch support of Trump. With the president’s blessing, his son David Ellison’s Skydance Media has merged with Paramount and taken control of CBS News (potentially extending that influence to Warner Bros and CNN in the near future).
The CEO of the new TikTok joint venture, Adam Presser, is on record as saying that references to Zionism should be considered an instance of hate speech. The Israeli prime minister, Benjamin Netanyahu, has made no secret of his delight at the new arrangement, reflecting concern that videos shared on the platform have made western youth more sensitive to the suffering of Palestinians.
There are many ways in which the new algorithm could shape the platform’s content visibility and hence its overall “political climate”. We may indeed see changes in moderation, with certain content and accounts effectively restricted: the award-winning Palestinian journalist Bisan Owda has said she was permanently banned from the app as of Wednesday this week. It is likely, however, that the most consequential changes will lie in how the algorithm serves content to users.
The new algorithm will be retrained on US rather than global data. This opens the door to new biases, with the potential to reinforce conservative views and sideline minority ones, while cutting US debates off from those unfolding in the rest of the world. The weights assigned to different ranking parameters can also have profound consequences for what users see. As Facebook’s 2018 adoption of its “meaningful social interactions” framework showed (it down-ranked public and news content while heavily weighting angry reactions), changes to a feed algorithm can have major consequences.
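The mechanism is easy to illustrate. The toy sketch below is purely hypothetical: the signal names, weights and posts are invented for illustration and bear no relation to TikTok’s proprietary system. It simply shows how re-weighting a single engagement signal, without banning anything, reorders which content rises to the top of a feed.

```python
# Illustrative only: a toy feed-ranking function. All signals,
# weights and posts here are invented; this is not TikTok's algorithm.

def rank_feed(posts, weights):
    """Score each post as a weighted sum of its engagement signals,
    then return posts ordered from most to least visible."""
    def score(post):
        return sum(weights.get(signal, 0.0) * value
                   for signal, value in post["signals"].items())
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "news_clip", "signals": {"likes": 120, "angry": 5, "shares": 40}},
    {"id": "rage_bait", "signals": {"likes": 30, "angry": 90, "shares": 10}},
]

# With equal weights the news clip scores 165 vs 130 and ranks first...
neutral = {"likes": 1.0, "angry": 1.0, "shares": 1.0}
# ...but boost the weight on angry reactions and the rage bait
# jumps to the top (490 vs 185), with nothing ever "censored".
angry_boosted = {"likes": 1.0, "angry": 5.0, "shares": 1.0}

print([p["id"] for p in rank_feed(posts, neutral)])        # ['news_clip', 'rage_bait']
print([p["id"] for p in rank_feed(posts, angry_boosted)])  # ['rage_bait', 'news_clip']
```

The point of the sketch is that no post is removed in either configuration; only the ordering changes, which is precisely why this kind of intervention is so hard to detect from the outside.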
As scholars Kai Riemer and Sandra Peter have pointed out, the way in which algorithms “interfere with free speech on the audience side” highlights the need to reconsider the way we think about public debate in the algorithmic era. It’s not what we can or cannot say that matters; rather, it’s whether what we say can get any visibility at all, and whether it is able to move against the political climate imposed by those controlling platform algorithms.
While rightwing billionaires often proclaim themselves free speech champions, the reality is that, as Larry Ellison’s diversified media portfolio shows, they have established a pervasive grip over both traditional and online media, effectively curtailing the public’s freedom of speech in ways that are often invisible and hence all the more insidious. Unless we reclaim control over both our media and our social media, we may soon find ourselves in a society dominated by a handful of information sources and pervasive algorithmic throttling, without even fully realising that what we are experiencing is censorship in a new guise.
Paolo Gerbaudo is a senior researcher at the faculty of political science and sociology of Complutense University in Madrid and the author of The Great Recoil