This Nonprofit Could Help Fight Terrorism, but Critics Are Torn
The Counter Extremism Project is hoping to combat terrorism with technology that's already being used to block graphic content from social networks. But the strategy has some critics questioning whether the benefits are worth the cost to free speech.
Social networks, if nothing else, are a good testing ground for the limits of free speech.
Run by private companies, these services use algorithms to keep certain kinds of content from reaching users. Generally, these mechanisms block copyrighted material or graphic content, but thanks to the work of a nonprofit group, extremist propaganda could soon run into similar roadblocks on YouTube and Facebook.
The Counter Extremism Project (CEP), a nonpartisan group that says it was “formed to combat the growing threat from extremist ideology,” recently announced the launch of new tools to fight such content. The technology, based on previous work with Microsoft to fight online child exploitation, essentially gives extremist propaganda—whether distributed as an image, video, or audio—a unique signature so that the content can be removed with an algorithm rather than by hand.
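In broad strokes, signature-based filtering works like this: known bad content is fingerprinted once, and every new upload is fingerprinted and checked against that list, so removal no longer requires a human to review each copy. The sketch below is a minimal illustration of that idea, not CEP's actual system; it uses a plain cryptographic hash, which only catches byte-identical copies, whereas production tools of this kind (such as the PhotoDNA work Farid did with Microsoft) use robust perceptual hashes that survive re-encoding, cropping, and compression. All function names here are hypothetical.

```python
import hashlib

# Hypothetical blocklist of signatures for known flagged content.
BLOCKED_SIGNATURES = set()

def signature(data: bytes) -> str:
    """Derive a unique signature for a piece of content.

    Real systems use perceptual hashes that tolerate re-encoding;
    SHA-256 here only matches exact byte-for-byte copies.
    """
    return hashlib.sha256(data).hexdigest()

def register_blocked(data: bytes) -> None:
    """Fingerprint known-bad content and add it to the blocklist."""
    BLOCKED_SIGNATURES.add(signature(data))

def should_block(data: bytes) -> bool:
    """Check an upload against the blocklist -- no per-copy human review."""
    return signature(data) in BLOCKED_SIGNATURES

register_blocked(b"bytes of a known propaganda video")
print(should_block(b"bytes of a known propaganda video"))  # True
print(should_block(b"an unrelated upload"))                # False
```

The design choice that matters is the hash function: an exact hash is trivial to evade by changing a single byte, which is why the underlying matching technology, rather than the blocklist idea itself, is the hard part.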
CEP Senior Advisor Dr. Hany Farid, a Dartmouth College computer science professor and a prominent figure in digital forensics, argues that the approach could help stymie the rise of extremist ideology, such as that espoused by the Islamic State.
“If we seize this opportunity and have partners across the social media spectrum willing to fight the extremist threat by deploying this technology, extremists will find internet and social media platforms far less available for their recruiting, fundraising, propagandizing, and calls to violence,” Farid said in a news release. “It is no longer a matter of not having the technological ability to fight online extremism, it is a matter of the industry and private-sector partners having the will to take action.”
For example, CEP argues that YouTube videos created by the cleric Anwar al-Awlaki helped inspire multiple terror attacks, including the 2013 Boston Marathon bombing and the 2015 San Bernardino attack.
Is It Censorship?
The CEP tool has not been deployed on any social networks, though Reuters reported last month that YouTube, Facebook, and Twitter have been looking at it. Meanwhile, CloudFlare, a content delivery and web infrastructure company that has also been considering the technology, suggested that even if the tool were in use, the networks would be unlikely to discuss it.
“There’s no upside in these companies talking about it,” CloudFlare CEO Matthew Prince told the news service. “Why would they brag about censorship?”
In comments to Deutsche Welle, CEP Chief Executive Mark Wallace, a former U.S. ambassador to the United Nations, noted that to prevent misuse, the organization would share the technology sparingly.
“We are going to license that technology in a very limited fashion so that it can only be used in the child pornography context and in the extremism context,” Wallace said. “We will not allow it to be used and have the code used by anyone who doesn’t agree to the terms of that license.”
Wallace said political officials have been more receptive to the technology. Recently, President Barack Obama noted that Orlando shooter Omar Mateen appeared to be “inspired by various extremist information that was disseminated over the internet.”
Social networks have a strong incentive to use the technology, Wallace said.
“Let’s be serious: I can’t imagine any social media company would want its platform to be used by [the Islamic State] or other groups as a platform for horrible, hateful, violent videos or audio or pictures in order to cause terrorist attacks,” he said.