As pressure mounts on terrorist groups such as Isis and al-Qaeda, they are pushed to find alternative ways to recruit jihadists – most of them young people from all over the world – and they have found a haven in internet and social media platforms.
In a report released in November 2017, New York University's Stern Center for Business and Human Rights estimated that Isis generated 200,000 social media messages every day throughout the year.
Social media giants had to confront this issue to protect millions of users, especially teenagers, who are more vulnerable to the dangerous material broadcast by these groups. What they came up with was the Global Internet Forum to Counter Terrorism (GIFCT).
The GIFCT is a coalition of tech firms (Google's YouTube, Facebook, Microsoft and Twitter), established in June 2017, with the aim of disrupting terrorist exploitation of their services by creating a database of "digital fingerprints" of content previously identified as extremist material. The coalition also provides smaller tech companies, which may not have the resources to tackle the spread of harmful material on their platforms, with technical help to confront this rampant danger.
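The hash-sharing idea can be illustrated with a minimal sketch. Everything below is hypothetical: a plain SHA-256 digest stands in for the "digital fingerprint", whereas the consortium's actual database uses perceptual hashes that also match re-encoded or slightly altered copies.

```python
import hashlib

# Hypothetical shared database of "digital fingerprints" contributed
# by member companies for content already identified as extremist.
shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    # Simplification: a cryptographic hash only matches byte-identical
    # copies; real systems use perceptual hashing to survive re-encoding.
    return hashlib.sha256(content).hexdigest()

def register_extremist_content(content: bytes) -> None:
    """One member company adds a fingerprint to the shared database."""
    shared_hash_db.add(fingerprint(content))

def matches_known_content(upload: bytes) -> bool:
    """Any member checks a new upload against the shared fingerprints;
    only hashes are exchanged, never the flagged files themselves."""
    return fingerprint(upload) in shared_hash_db

register_extremist_content(b"previously identified propaganda clip")
print(matches_known_content(b"previously identified propaganda clip"))  # True
print(matches_known_content(b"an unrelated upload"))                    # False
```

Sharing fingerprints rather than files is what lets smaller companies benefit: they can block a known item the moment it is uploaded, without ever having hosted or reviewed the original.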
Other services with access to the hashing database include Ask.fm, Cloudinary, Instagram, Justpaste.it, LinkedIn and Oath, as well as other companies that remain unnamed for fear of negative publicity, according to the BBC.
Facebook is armed with a technique called "counterspeech," which relies on image matching and language analysis to identify terror content before it is posted, according to Monika Bickert, Facebook's head of global policy management.
Facebook claimed in 2017 that 99% of all Isis and al-Qaeda-related content was removed before users got the chance to flag it, while 83% of the remainder was identified and removed within an hour.
However, figuring that technology alone might not be enough, Facebook expanded its counterterrorism team from 150 to 200 people in June 2017 and engineered specialised techniques to find and remove old content, according to Politico.
YouTube plays a significant role in promoting terrorism: it serves as the terrorists' platform to show off their triumphs, with footage of bombings, stabbings and other propaganda videos about the supposed welfare of their so-called "caliphate" used to lure in more prey. Some of the Isis-related content on YouTube is deeply disturbing, yet it can still be watched by children and minors, potentially scarring them for life.
YouTube therefore had to step up to prevent this material from circulating on screens around the world. Google's Jigsaw research group developed what it calls the "Redirect Method," which sends anti-terror messages to people likely to seek out extremist content. It works by identifying users who are searching for extremist content – based on their search history – and then serving them ads that counter Isis propaganda.
In 2017, YouTube's algorithms, in concert with human reviewers, were able to remove hateful content faster than before, according to testimony before Congress from Juniper Downs, YouTube's head of public policy.
Between June and December 2017, YouTube staff reviewed nearly two million videos for violent extremism; more than 98% of such material was flagged automatically, and more than 50% of the videos removed had fewer than 10 views, according to a BBC report.
In April 2018, Twitter announced that it had removed more than 270,000 terrorism-promoting accounts worldwide in the second half of 2017, 75% of which were suspended before they sent their first tweet. Only 0.2% of the accounts were flagged at police request.
According to the company's own report, 93% were detected by tools developed by Twitter engineers, which rely on US and EU lists of terrorist organisations as well as research from academics and experts to identify terrorists.
Between 2015 and 2017, Twitter said, it suspended more than 1.2 million accounts in its fight to stop the spread of extremist propaganda.
Academic and social research has focused on the impact of social media on the expansion of radical groups. One study in particular, by terrorism expert Dr Solahudin of the University of Indonesia, revealed worrisome results, according to the South China Morning Post.
Based on interviews with 75 convicted terrorists in Indonesia in 2017, he concluded that 85% of the convicted radicals took only a year to work up to their first attack after being exposed to Isis-related content on social media.
“Before the widespread use of social media, it would take between five and 10 years for a newly radicalised individual to take part in a terror attack,” Solahudin added.
Too little has been done
In another development, the European Commission's president threatened social media giants with hefty fines if they failed to remove extremist content within an hour, the BBC reported.
“An hour was a decisive time window," EU Commission President Jean-Claude Juncker said during his annual State of the Union address in September 2018.
G7 security ministers also raised their concerns during a Toronto meeting with social media companies in April 2018. They urged firms to improve automated methods of deleting suspect material, according to Reuters.
Three strikes law
In a 2017 report in The Times, British ministers proposed a "three strikes" law for people caught streaming terrorist content. The aim is to close a loophole that allows people to watch gruesome or inflammatory propaganda without fear of prosecution: in the UK, streaming terror videos is not an offence, only downloading them is.
On January 28th, The Washington Post shed light on a new method Isis uses to get around internet censorship. Called "ZeroNet" and created in 2015, it depends on building what is known as a "peer-to-peer" network. Unlike typical websites, sites on ZeroNet are decentralised: rather than living on a central server, they are hosted across many computers. A site is created by installing the network's software on an initial host computer. Then, when visitors (or "seeders," in peer-to-peer parlance) arrive at that site, if they also have ZeroNet installed, they begin hosting the website on their own computers. These seeders can remain anonymous if they connect to these sites via the Tor network, which encrypts traffic and hides its source.
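What makes such a network hard to censor is that no seeder has to trust any other: every copy of a site can be verified against its address. The sketch below is a deliberate simplification – ZeroNet actually derives site addresses from Bitcoin-style public keys and has the owner sign the content, whereas this hypothetical version uses a bare content hash – but it shows the same self-verifying idea.

```python
import hashlib

def site_address(content: bytes) -> str:
    """Derive an address from the site's content (simplification:
    ZeroNet really uses public-key addresses plus signatures)."""
    return hashlib.sha256(content).hexdigest()

# The initial host publishes the site and shares its address.
original = b"<html>a decentralised site</html>"
address = site_address(original)

def seeder_accepts(copy: bytes, expected_address: str) -> bool:
    """A seeder re-hosts a downloaded copy only if it matches the
    published address, so no central server is needed to vouch for it."""
    return site_address(copy) == expected_address

print(seeder_accepts(original, address))                  # True
print(seeder_accepts(b"<html>tampered</html>", address))  # False
```

Because any peer can check a copy independently, taking down the original host achieves little: every seeder already holds a verifiably authentic copy, which is precisely why the method frustrates censors.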