Alex Hern 

Far-right groups’ coded language makes threats hard to spot

Sick jokes and deadly intentions can often be difficult to tell apart for those policing online culture
  
  

Tributes to victims of the mosque attacks near a police line outside Masjid Al Noor in Christchurch, New Zealand, on Saturday. Photograph: Jorge Silva/Reuters

The irony-laden vocabulary of the far-right online communities that spawned the terror attack in Christchurch on Friday makes it “extremely difficult” to distinguish a sick joke from a deadly serious threat, according to experts on the international far right and online information warfare.

References to “shitposting”, YouTube stars and the 17th-century Battle of Vienna are hallmarks of “that online culture where everything can be a joke and extremist content can be a parody and deadly serious all on the same page,” said Ben Nimmo, a researcher at the Atlantic Council. “Distinguishing between the two is extremely difficult. You have these communities who routinely practise extreme rhetoric as a joke, so it’s very easy to fit in if you’re a real extremist.”

That confusion can lead observers to underplay the risk from such communities, make it harder to secure convictions for crimes such as hate speech, and even allow obvious red flags to go unnoticed until it is too late.

“People will be asking why people didn’t flag this up, but it all sounds like that,” Nimmo said. “The problem is that’s the way that community speaks. You can’t just point to the comments they’re saying and say that should be a warning light. There are plenty of people who post like that and are not going to pick up a weapon and start massacring people.”

It also leads to situations where mainstream observers unknowingly aid terrorists by spreading propaganda without recognising it for what it is.

Shortly before launching a terrorist attack that killed 49 Muslim worshippers in Christchurch on Friday, the alleged attacker posted to the political subforum of 8Chan, a far-right message board set up in 2013. Describing the forthcoming attack as “a real life effort post”, he shared a link to a 74-page manifesto and a Facebook live stream.

Both were initially shared by mainstream publications, with the Daily Mail embedding a copy of the manifesto and the Mirror sharing a lengthy edited version of the live stream.

“The way we always have to look at manifestos like this: it’s a PR document, a propaganda document that’s meant to be analysed, exposed, read and thought about,” said Patrik Hermansson, a researcher at Hope Not Hate. “The more confusing it is, the more it might be spread.”

Mentioning YouTube stars in video footage of attacks has the same goal. In Christchurch, the Facebook live stream opens with a shout-out to a popular video-gaming star, who has himself flirted with far-right iconography, although he has not condoned violence. “He’s one of the biggest YouTube accounts in the world, who has a lot of followers on his side. There’s a large potential audience there,” Hermansson said. “It’s also a way to force [the YouTube star] to acknowledge him and to get attention.”

Even when the action falls short of violence, the coded language popular among online communities such as 8Chan and Stormfront can pose problems for law enforcement. “It changes quickly, so it requires you to follow it quite closely,” notes Hermansson. For those who do, however, the lack of originality makes it easy to cut through the irony.

“They don’t come up with these things themselves,” Hermansson says. General digital culture concepts such as “copypasta” – large chunks of text cut and pasted to continue a running joke – are just as prevalent in the online far right as in many other niche internet communities.

But, for outsiders, distinguishing the jokes from the serious statements remains hard. “What is hate speech? What can our justice system handle? They might not use the N-word, they might use super-coded language instead. Even parents might not understand that their own children are using this coded language. It’s difficult for everyone.”

And then there’s the simple desire to “troll” – say or do extreme things and revel in the reaction. “Outrage is exciting, and they feel like they have influence,” Hermansson says. “That is how they have influence.”

But Hermansson cautions that, even if it can be hard to spot a potential terrorist hiding in plain sight among a hundred ironic racists, the situation is not necessarily worse than in the recent past.

“In Nazi groups, people sit down around a table and joke about things as well, and talk in terms of race war and blood baths.

“It’s definitely been made more extreme, and an even bigger problem, because more people express these views. That’s what the online world does, it lowers the barriers.

“But a person like this 20, 30 years ago wouldn’t say anything anywhere. Yet we had far-right terrorism then as well.

“Yes, now we have a bit more information, there’s a lot and it’s hard to figure out what’s important. But a few decades ago, we would have had none. They might have written a manifesto and sent it off to a newspaper – but it would arrive after their attacks.

“So now we have this issue [of] could we have stopped it? But, before, we definitely could not have.”

 
