Priya Bharadia and Aisha Down 

Millions creating deepfake nudes on Telegram as AI tools drive global wave of digital abuse

Analysis finds at least 150 channels on messaging app that are distributing AI-generated images and video
  
  

In a number of instances, the investigation showed that while one Telegram channel had been shut down, another with a near-identical name remained active. Photograph: Jaque Silva/SOPA Images/REX/Shutterstock

Millions of people around the world are creating and sharing deepfake nudes on the secure messaging app Telegram, a Guardian analysis has shown, as the spread of advanced AI tools industrialises the online abuse of women.

The Guardian has identified at least 150 Telegram channels – large encrypted group chats popular for the privacy they offer – that appear to have users in many countries, from the UK to Brazil, China to Nigeria, Russia to India. Some of them offer “nudified” photos or videos for a fee: users can upload a photo of any woman, and AI will produce a video of that woman performing sexual acts. Many more offer a feed of images – of celebrities, social media influencers and ordinary women – made nude or made to perform sexual acts by AI. Followers also use the channels to share tips on available deepfake tools.

While there have long been Telegram channels dedicated to distributing non-consensual nude images of women, the widespread availability of AI tools means anyone can instantly become the subject of graphic sexual content viewable by millions.

On a Russian-language Telegram channel advertising deepfake “blogger leaks” and “celebrity leaks”, a post about an AI nudification Telegram bot promised “a neural network that doesn’t know the word ‘no’”.

“Choose positions, shapes and locations. Do everything with her that you can’t do in real life,” it said.

On a Chinese-language Telegram channel with nearly 25,000 subscribers, men shared videos of their “first loves” or their “girlfriend’s best friend”, made to strip using AI.

A web of Telegram channels targeted at Nigerian users disseminates deepfakes alongside hundreds of stolen nudes and intimate images.

Telegram is a secure messaging app that allows users to create groups or channels to broadcast content to unlimited contacts. Under the platform’s terms of service, users cannot post “illegal pornographic content” on “publicly viewable” channels and bots, or “engage in activities that are recognised as illegal in the majority of countries.”

A review of data from Telemetr.io, an independent analytics service that maintains an index of such channels, indicates that Telegram has shut down a number of nudification channels.

Telegram told the Guardian that deepfake pornography and the tools to create it are explicitly forbidden by its terms of service, adding that “such content is routinely removed whenever discovered. Moderators empowered with custom AI tools proactively monitor public parts of the platform and accept reports in order to remove content that breaches our terms of service, including encouraging the creation of deepfake pornography.”

In its statement, Telegram said it removed more than 952,000 pieces of offending material in 2025.

In recent weeks, the use of AI tools to create sexualised deepfakes and humiliate women has exploded into public discourse, after Grok, the generative AI chatbot on Elon Musk’s social media platform X, was asked to create thousands of images of women in bikinis or minimal clothing, without consent.

The resulting outrage led Musk’s artificial intelligence company, xAI, to announce it would stop allowing Grok to edit pictures of real people into bikinis. The UK’s media regulator, Ofcom, also announced an investigation into X.

But there is a reservoir of forums, websites and apps, including Telegram, that gives millions of people easy access to graphic, non-consensual content – and lets them generate and share this content on demand, without the knowledge of the women who are being violated by it. A report released on Tuesday by the Tech Transparency Project found that dozens of nudification apps are available in the Google Play Store and the Apple App Store, and that collectively these have had 705m downloads.

An Apple spokesperson said the company had removed 28 of the 47 nudification apps identified by the Tech Transparency Project in its investigation, while a Google spokesperson said “most of the apps” on its service had been suspended, and that an investigation was ongoing.

Telegram channels are a mainstay of a broader internet ecosystem devoted to creating and disseminating non-consensual intimate images, said Anne Craanen, a researcher focused on gender-based violence at the London-based Institute for Strategic Dialogue.

They allow users to evade the controls of larger platforms such as Google, and to share tips on how to bypass safeguards that prevent AI models from generating this content. But the “dissemination and celebration of this material is another part”, she said. “That circulating it with other men and boasting about it, and that celebration aspect, is also really important. It really shows the misogynistic undertones of it. They’re trying to punish women or silence women.”

Last year, Meta shut down an Italian Facebook group in which men shared intimate images of their partners and of unsuspecting women. Before it was removed, the group, Mia Moglie (meaning “my wife”), had approximately 32,000 members.

However, the investigative newsletter Indicator found that Meta had failed to stop the flow of advertisements for AI nudification tools on its platforms, identifying at least 4,431 nudifier ads since 4 December last year, though some appeared to be scams. A Meta spokesperson said the company removes ads that violate its policies.

AI tools have intensified a global rise in online violence against women, allowing almost anyone to make and share abusive images. In many jurisdictions, including much of the global south, few legal routes exist to hold perpetrators accountable. Less than 40% of countries have laws protecting women and girls from cyber-harassment or cyberstalking, according to 2024 World Bank data. The UN estimates that 1.8 billion women and girls still lack legal protection from online harassment and other forms of technology-facilitated abuse.

Lack of regulation is just one reason that women and girls in low-income countries are particularly vulnerable, say campaigners. Issues such as poor digital literacy and poverty can heighten risks. Ugochi Ihe, an associate at TechHer, a Nigeria-based organisation that encourages women and girls to learn and work with technology, said she had come across cases where women borrowing money from loan apps had fallen victim to blackmail from “unscrupulous men using AI. Every day it’s getting more creative with abuse”.

The real-life consequences of digital abuse are devastating, including mental health difficulties, isolation and loss of work.

“These things are bound to destroy a young girl’s life,” said Mercy Mutemi, a Kenya-based lawyer representing four victims of deepfake abuse. Some of her clients have been denied jobs and subjected to disciplinary hearings at school, she said, all because of deepfake images circulated without their consent.

Ihe said her organisation had handled complaints from women who were ostracised by their families after being threatened with nude and intimate images obtained from Telegram channels.

“Once it has gone out, there’s no reclaiming your dignity, your identity,” she said. “Even if the perpetrator comes to say, ‘Oh, that was a deepfake,’ you cannot tell the amount of people that have seen it. The reputational damage is unrecoverable.”

 
