“Terrorism has no place on Facebook,” the social media company’s lead policy maker for counterterrorism, Brian Fishman, told audience members at the International Institute for Counter-Terrorism’s 17th annual conference on Monday.
The conference, which took place at the Interdisciplinary Center (IDC) Herzliya, is one of the largest events in the field today, counting more than 1,000 top decision-makers and defense, intelligence and police officials from over 60 countries among its participants.
Fishman, who is also a researcher on counterterrorism at the New America Foundation and a fellow at the Combating Terrorism Center at West Point, spoke about the challenges Facebook faces in quickly scanning and removing terror-related content from its site, and what the company is doing to improve this process. Facebook employs 150 people – including lawyers and policy experts – whose primary responsibility is to address and remove terror-related content on the site.
Before posts get to them, they are filtered by Facebook’s computers, which are programmed to scan content for buzzwords associated with terrorism. Only then are the posts passed on to the team, a strategy Fishman described as “using humans to do what humans do best, and computers for what computers do best.”
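The article does not describe how Facebook’s scanning actually works; as a purely illustrative sketch of the machine-first, human-second division of labor Fishman describes, a keyword pre-filter might look like the following. The keyword list, posts, and function names here are all hypothetical, not anything disclosed by the company.

```python
# Hypothetical sketch: machines flag candidate posts by keyword,
# and only the flagged subset is passed on to human reviewers.
TERROR_KEYWORDS = {"attack plan", "join the cause"}  # invented example terms

def needs_human_review(post: str) -> bool:
    """Return True if the post contains any watched keyword (case-insensitive)."""
    text = post.lower()
    return any(keyword in text for keyword in TERROR_KEYWORDS)

posts = ["Lovely weather today", "Here is the attack plan"]
flagged_for_review = [p for p in posts if needs_human_review(p)]
```

In this toy version, only the second post would reach the review team; real systems are far more sophisticated, but the pipeline shape — cheap automated triage, then expensive human judgment — is the same.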
Years ago, social media companies began having informal conversations about how to address terror-related content on their sites. In recent years, these informal conversations have been formalized. Companies now share “hashes,” or digital fingerprints of files, to alert other platforms to troublesome users or content through a large collective called the Shared Industry Hash Database. The system helps ensure that the sites remain in the loop with one another.
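The idea behind a shared hash database can be sketched in a few lines. Note this is a simplified illustration: it uses a cryptographic hash (SHA-256) as the “fingerprint,” whereas platforms in practice often use perceptual hashes that also match slightly altered copies of an image or video; the database contents below are invented.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Compute a hex digest serving as a 'digital fingerprint' of a file."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical shared set of fingerprints of known terror-related files,
# contributed by participating platforms.
shared_hash_db = {fingerprint(b"known extremist video bytes")}

def is_flagged(upload: bytes) -> bool:
    """Check an uploaded file against the shared fingerprint database."""
    return fingerprint(upload) in shared_hash_db
```

Because each platform only shares the fingerprint rather than the file itself, a match lets one site learn that another has already removed identical content without the offending material being redistributed.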
In addition, Facebook announced last month the first meeting of the Global Internet Forum for Counter-Terrorism, which brings together Facebook, Microsoft, Twitter and YouTube in an initiative to collectively combat terrorism online.
The forum, along with the hash database, helps prevent what Fishman called “recidivists” – users whose content is flagged and removed, but who pop up again under a fake account. “None of this is perfect,” Fishman added. “But we’re trying to get better.”
While the computers are able to quickly scan posts, the system still relies on users to report content that appears offensive or illegal. Two years ago, 20,000 Israelis collectively sued Facebook for what they claimed was the social media company’s leniency in allowing incitement on its platform, and for not removing terrorism-related comments. The lawsuit came amidst the “stabbing intifada” in which social media was a commonly used tool to incite Palestinians to commit acts of terrorism.
The Institute for Counter Terrorism is part of the IDC in Herzliya.
Yonah Jeremy Bob contributed to this report.