A month after the horrific attack in Christchurch, which was live-streamed on Facebook, New Zealand PM Jacinda Ardern said: “It’s critical that technology platforms like Facebook are not perverted as a tool for terrorism, and instead become part of a global solution to countering extremism.”
We wholeheartedly agree. Neo-Nazi and other far-right material, alongside Islamist and far-left content, spreads swiftly on Facebook, with the potential to reach thousands of people in a matter of hours. Facebook is not alone: social media platforms have been used by extremists to radicalise and inspire acts of terrorism across the world. Exposure to online extremism is not the sole cause of radicalisation, but in combination with other risk factors, it can weaponise a latent disposition towards terrorist violence.
Preventing online extremism has become a priority for policy-makers in Europe. In the U.K., the Home Office and the Department for Digital, Culture, Media and Sport (DCMS) have proposed regulating internet platforms in the Online Harms White Paper, which considers a wide range of harms, including extremism and terrorism.
We offer several recommendations. First, a clear definition of extremist content can prevent uncertainty and over-blocking, and help ensure content is judged consistently by human moderators. Once human moderators have determined something is extremist content, platforms should use hashing technology to screen out known extremist content at the point of upload. One example of such technology is the Counter Extremism Project’s eGlyph – a tool developed by Hany Farid, a Professor of Computer Science at the University of California, Berkeley and member of the Counter Extremism Project’s advisory board.
eGlyph is based on ‘robust hashing’ technology, capable of swiftly comparing uploaded content to a database of known extremist images, videos, and audio files, thereby disrupting the spread of such content. We have made this ground-breaking technology available at no cost to organisations wishing to combat online violent extremism.
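The idea behind robust (perceptual) hashing is that near-duplicate content should produce near-identical fingerprints, so a re-encoded or slightly altered copy of a known extremist file can still be matched against a database of hashes. eGlyph's actual algorithm is not public; the sketch below illustrates the general principle with a toy "average hash" on an 8x8 grayscale grid and a Hamming-distance match against known hashes (the function names and the threshold value are illustrative assumptions, not eGlyph's design):

```python
# Toy illustration of perceptual ("robust") hashing, the general technique
# behind tools like eGlyph. NOT the real algorithm, which is proprietary.

def average_hash(pixels):
    """pixels: 8x8 grid of 0-255 grayscale values -> 64-bit integer hash.
    Each bit records whether a pixel is above the image's mean brightness,
    so uniform changes (e.g. brightening) leave the hash unchanged."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    h = 0
    for p in flat:
        h = (h << 1) | (1 if p >= mean else 0)
    return h

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known(upload_hash, known_hashes, threshold=5):
    """True if the upload is within `threshold` bits of any known hash.
    The threshold trades off false positives against evasion by small edits."""
    return any(hamming(upload_hash, k) <= threshold for k in known_hashes)

# A known image, a brightened copy, and its negative (an unrelated image).
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
brightened = [[p + 10 for p in row] for row in original]
inverted = [[252 - p for p in row] for row in original]

database = [average_hash(original)]
print(matches_known(average_hash(brightened), database))  # True: near-copy
print(matches_known(average_hash(inverted), database))    # False: different
```

A cryptographic hash such as SHA-256 would fail here, because changing even one pixel produces a completely different digest; perceptual hashing is what lets platforms catch re-uploads that have been cropped, recompressed, or recoloured.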