Charles Arthur, technology editor 

Taking down Isis material from Twitter or YouTube not as clear cut as it seems

Google et al happy to comply with law, but systems aren't set up to prevent those videos, hashtags and accounts getting online
  
  

Isis recruitment video
Taking videos and other material offline has become a game of 'whack-a-mole': no sooner have they been removed from one part of a site than they pop up on another. Photograph: YouTube/PA

It sounds simple enough: Google's subsidiary YouTube should take down videos from extremist groups such as the Islamic State of Iraq and the Levant (Isis), and Twitter should block hashtags and accounts instigated by the same groups. In both cases such speech is an incitement to violence, and hence illegal under British law.

Having started as places for people to upload dating videos (YouTube) or let friends know what they are doing (Twitter), both networks have been thrown into the complex world of geopolitics, mixed with arguments over freedom of speech. The fact that Isis fighters and would-be jihadists are digital natives who have grown up with cameraphones and internet access means that social networks are the first, rather than the last, place they look to spread their message.

Google and Twitter are happy to comply with the law. Their problem, though, is that their systems are not set up to stop those videos, hashtags and accounts getting online – so taking them offline has become a game of "whack-a-mole", where no sooner have they been removed from one part of the site than they pop up at others. That reappearance is a version of the "Streisand effect", named after the US singer and actor Barbra Streisand, whose attempt to suppress photographs of her beachfront home that had been posted on the internet only resulted in many more websites reposting the images.

Similarly, as fast as the Isis video and its ilk are taken down, they are put up again by newly created accounts.

Google says that worldwide, 100 hours of video is uploaded to YouTube every minute. It has a fast-track system – of which the Home Office is a member – for removing videos or even entire "channels" from YouTube; but each request must be reviewed by a human.

Google says there is a round-the-clock reviewing team, and that it acts "quickly", but does not say whether that means days, hours or minutes. It also makes an exception for news or education: if a news organisation reposted the Isis video on YouTube, Google would leave it be. On Twitter, high-level groups can queue-jump to report accounts which extol violence or make credible threats; Twitter can kill an account within minutes – but again, each report needs human examination.
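As a rough illustration only – not Google's or Twitter's actual system, and every name in it is invented – a "queue-jumping" process with a final human decision might look something like this sketch:

```python
import heapq
import itertools
from dataclasses import dataclass, field

# Hypothetical priority levels: trusted-flagger reports jump the ordinary
# queue, but nothing is removed without a human reviewer's decision.
TRUSTED_PRIORITY = 0
ORDINARY_PRIORITY = 1

_counter = itertools.count()  # tie-breaker so equal priorities keep arrival order


@dataclass(order=True)
class Report:
    priority: int
    arrival: int
    video_id: str = field(compare=False)
    reason: str = field(compare=False)


class ReviewQueue:
    def __init__(self) -> None:
        self._heap: list[Report] = []

    def submit(self, video_id: str, reason: str, trusted_flagger: bool = False) -> None:
        priority = TRUSTED_PRIORITY if trusted_flagger else ORDINARY_PRIORITY
        heapq.heappush(self._heap, Report(priority, next(_counter), video_id, reason))

    def next_for_human_review(self):
        # A reviewer pulls the highest-priority report; the decision itself
        # (remove, restrict, or leave up under a news/education exception)
        # stays with the person, not the queue.
        return heapq.heappop(self._heap) if self._heap else None


if __name__ == "__main__":
    queue = ReviewQueue()
    queue.submit("vid-001", "user flag: graphic content")
    queue.submit("vid-002", "trusted flagger: incitement to violence", trusted_flagger=True)
    print(queue.next_for_human_review())  # the trusted-flagger report is reviewed first
```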

Twitter, however, has to contend not only with accounts – which may have only a handful of followers, and are limited by the 140-character format – but also with hashtags, whose ebbs and flows are a sort of rolling zeitgeist of the network. Twitter will not block tweets using a particular hashtag, but the Guardian understands that it is possible to prevent a hashtag from appearing in the "trending" list which many people refer to. Furthermore, because every link that appears in every tweet is shortened using the company's proprietary "t.co" link shortener – introduced initially to block tweets leading to spam and malware – Twitter can also, in theory, block anyone following a link to an external site.
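The principle is simple enough to sketch: a shortener sits between the tweet and the destination, so it can check every click against a blocklist. The sketch below is purely illustrative – the domain list and warning URL are invented, and this is not Twitter's real implementation:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of destination domains; a real service would resolve
# redirect chains and consult live threat feeds rather than a hard-coded set.
BLOCKED_DOMAINS = {"example-extremist-site.test"}


def resolve_short_link(destination_url: str) -> str:
    """Decide what a t.co-style shortener could serve when a link is clicked."""
    host = urlparse(destination_url).hostname or ""
    if host in BLOCKED_DOMAINS or any(host.endswith("." + d) for d in BLOCKED_DOMAINS):
        # Instead of redirecting, the shortener interposes a warning page.
        return "https://shortener.example/blocked-warning"
    return destination_url


if __name__ == "__main__":
    print(resolve_short_link("https://example-extremist-site.test/video"))
    print(resolve_short_link("https://www.theguardian.com/technology"))
```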

But as one source points out, the calls to block the use of social media are not always echoed by the intelligence agencies.

"When we get into these situations, there are lots of people who want us to take these accounts down or block them. But most of the time the government intelligence and military want us to keep them up, because that's how they track them."Yet Google does have a system that could prevent re-uploads of videos. Its content ID system is able to identify copyrighted audio in any video: it creates an audio fingerprint that can identify even small pieces of a song. Similarly, Amazon last week demonstrated its Firefly system for its Fire Phone smartphone that can identify copyrighted video – such as Game of Thrones – when seen on a screen. The technology exists to spot specific content.

Content ID is used to spot people uploading copyrighted songs, for example, and to ensure the copyright holder gets some recompense. Could Content ID be used on the Isis video? Apparently so – but Google is reluctant to do so because of the "news/education" exceptions. However, that wouldn't prevent it from flagging such content and running it through pre-moderation.

So why doesn't Google do that? "We just don't," says a source. Even so, given the success the record industry has had in getting Content ID adopted, the Home Office might find it a fruitful discussion.

But if you can't find Isis's would-be recruitment video on YouTube, you could find all three minutes and 10 seconds of it (preceded by a 30-second pre-roll advert) in the middle of a Daily Mail story – a story explaining that there was "outrage" that the film was still available on YouTube.

 
