Microsoft, Google, Facebook and Twitter will cooperate to detect and remove extremist content on their platforms via an information-sharing initiative. They agreed to create a shared database of unique digital fingerprints for images and videos promoting terrorism. After one company has detected and deleted such content, the others will be able to use the resulting “hash” to do the same on their own networks.
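The mechanics of such a shared database can be sketched in a few lines. The sketch below is purely illustrative, not the companies' actual system: it uses SHA-256 as a simplified stand-in for the media fingerprints (real systems use perceptual hashes, not exact checksums), and all function names are hypothetical.

```python
import hashlib

shared_hashes = set()  # the industry-shared "hash database" (simplified)

def fingerprint(content: bytes) -> str:
    """Return a fingerprint for a piece of media.

    SHA-256 is a stand-in here; a production system would use a
    perceptual hash that survives re-encoding and resizing.
    """
    return hashlib.sha256(content).hexdigest()

def report_removal(content: bytes) -> None:
    """Called by the company that first detects and removes the content."""
    shared_hashes.add(fingerprint(content))

def is_flagged(content: bytes) -> bool:
    """Other companies check uploads against the shared database."""
    return fingerprint(content) in shared_hashes

# Company A removes a video and shares its hash:
report_removal(b"...extremist video bytes...")
# Company B can now detect the same file on its own network:
assert is_flagged(b"...extremist video bytes...")
assert not is_flagged(b"unrelated content")
```

The key property is that only the fingerprint is shared, not the media itself, so each company can match content without redistributing it.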


The precise technical details of tackling terrorist content have yet to be worked out, but the approach resembles the one adopted to tackle child sexual abuse imagery. Microsoft, Google, Facebook and Twitter use the National Center for Missing and Exploited Children’s PhotoDNA technology to identify images of child sexual abuse. Those images are categorized centrally by law enforcement, and the tech firms are legally obliged to remove them. A sister program for extremist content was proposed earlier in 2016, intended to proactively flag extremist photos, videos and audio clips as they appear online.
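What makes PhotoDNA-style matching different from an ordinary checksum is tolerance to small edits such as resizing or re-encoding. PhotoDNA itself is proprietary, but the general idea can be illustrated with a toy difference hash (dHash) over a small grayscale grid; everything below is an illustrative assumption, not the real algorithm.

```python
def dhash(pixels):
    """pixels: 8 rows x 9 columns of grayscale values -> 64-bit int.

    Each bit records whether a pixel is brighter than its right
    neighbour, so small brightness changes flip only a few bits.
    """
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Two near-identical "images" (one pixel slightly brightened) should
# produce hashes that differ in only a few bits, so a distance
# threshold can still match them.
img = [[(r * 9 + c) * 3 for c in range(9)] for r in range(8)]
near = [row[:] for row in img]
near[0][0] = near[0][0] + 4

assert hamming(dhash(img), dhash(near)) <= 2
```

A matching service would then flag an upload whenever its hash lies within a small Hamming distance of any entry in the shared database, rather than requiring an exact match.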

The main problem is that there is no impartial body to monitor the database: complete transparency is needed over how content makes it into the hashing database, and someone has to keep it up to date. Otherwise, the tech firms themselves would be the only ones doing so.

The tech giants have been under pressure from governments all over the world over the spread of extremist propaganda on the Internet from terror networks like Isis. Earlier in 2016, top White House officials met with the tech giants’ representatives to explore ways to tackle terrorism, but it seems that the latest initiative was not the result of that meeting. In any case, all the companies agreed there was no place for content that promotes or supports terrorism on their networks.