How Facebook fights terrorism

People are skeptical about the role of technology companies in the fight against terrorism on the Internet.
By Monika Bickert, Director of Global Policy Management, and Brian Fishman, Counterterrorism Policy Manager at Facebook

In the wake of recent terror attacks, people have questioned the role of technology companies in fighting terrorism online. We want to answer those questions directly. We agree with those who say that social media should not be a place where terrorists can amplify their messages and spread their ideas. We want to be very clear about how seriously and decisively we are addressing this issue, because keeping our community safe on Facebook is critical to our mission.

In this post we will describe our work, including behind-the-scenes efforts such as how we use artificial intelligence to keep terrorist content off Facebook, something we have not talked about publicly before. We will also discuss the people who do this work, some of whom have devoted their entire careers to combating terrorism, and the ways we collaborate with partners outside our company.

Our stance is simple: there is no place for terrorism on Facebook. We remove terrorists and posts that support terrorism as soon as we become aware of them. When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny. In the rare cases where we uncover evidence of imminent harm, we promptly inform the authorities. Although academic research finds that the radicalization and recruitment of members of groups such as ISIS (Daesh) and al-Qaeda happens primarily offline, we know that the Internet plays a role as well, and we do not want Facebook to be used for any terrorist activity whatsoever.

We believe that technology and Facebook can be part of the solution.

We have been cautious about discussing this work, in part because we do not want to suggest there is an easy technical fix. Keeping people safe on a platform used by nearly two billion people every month, posting and commenting in more than 80 languages in every corner of the globe, is an enormous challenge, and there is much complex work still to be done. But we want to share what we are doing now and hear your feedback so that we can do better.

Artificial intelligence

We want to find extremist content immediately, before people in our community see it. Already, the majority of accounts we remove for terrorism are ones we find ourselves. But we know we can do better at stopping the spread of extremist material on Facebook by using technology, and specifically artificial intelligence. Although our use of AI against terrorism is fairly recent, it is already changing how we keep potential terrorist propaganda and accounts off Facebook. We are currently focusing our most cutting-edge techniques on combating content from ISIS, al-Qaeda, and their affiliates, and we expect to expand to other terrorist organizations in due course. We are always updating our technical solutions, but here is a sampling of our current efforts.

Matching images

When someone tries to upload a terrorist photo or video, our systems check whether the image matches a known terrorism photo or video. This means that if we previously removed a propaganda video from ISIS, we can work to prevent other accounts from uploading the same video to our site. In many cases, this means that terrorist content intended for upload simply never reaches the platform.
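
The matching step described above can be sketched as a lookup against a set of fingerprints of previously removed media. This is a minimal illustration with hypothetical names: it uses a cryptographic hash, which only catches byte-identical copies, whereas production systems use perceptual hashes that also match re-encoded or slightly altered files.

```python
import hashlib

# Hypothetical blocklist of fingerprints of previously removed media.
KNOWN_BAD_HASHES = set()

def fingerprint(data: bytes) -> str:
    """Return a hex digest used as the media fingerprint."""
    return hashlib.sha256(data).hexdigest()

def register_removed_media(data: bytes) -> None:
    """Record the fingerprint of media removed for policy violations."""
    KNOWN_BAD_HASHES.add(fingerprint(data))

def should_block_upload(data: bytes) -> bool:
    """Check an incoming upload against the known-bad set."""
    return fingerprint(data) in KNOWN_BAD_HASHES

# A previously removed propaganda video (stand-in bytes):
register_removed_media(b"propaganda-video-bytes")
# The same file re-uploaded from another account never reaches the platform:
assert should_block_upload(b"propaganda-video-bytes")
assert not should_block_upload(b"holiday-photo-bytes")
```

The design trade-off is recall: an exact hash lookup is cheap and has no false positives, but a single changed byte defeats it, which is why matching real media requires fuzzier, perceptual fingerprints.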

Understanding the language

We have also recently started experimenting with artificial intelligence to understand text that might be advocating for terrorism. We are currently analyzing text that we have already removed for praising or supporting terrorist organizations such as ISIS and al-Qaeda so that we can develop text-based signals that such content may be terrorist propaganda. That analysis feeds an algorithm that is still in the early stages of learning how to detect similar posts. Machine-learning algorithms work on a feedback loop, so they improve over time.
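
Facebook has not published its model, but the general idea of learning text-based signals from labeled examples can be sketched with a toy bag-of-words Naive Bayes classifier. Everything here, including the placeholder training phrases and labels, is illustrative, not the production system.

```python
import math
from collections import Counter, defaultdict

class TinyTextClassifier:
    """A bag-of-words Naive Bayes classifier, illustrative only."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.doc_counts = Counter()              # label -> number of examples
        self.vocab = set()

    def train(self, text: str, label: str) -> None:
        words = text.lower().split()
        self.doc_counts[label] += 1
        self.word_counts[label].update(words)
        self.vocab.update(words)

    def predict(self, text: str) -> str:
        words = text.lower().split()
        total_docs = sum(self.doc_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.doc_counts:
            # Log prior plus Laplace-smoothed log likelihood of each word.
            score = math.log(self.doc_counts[label] / total_docs)
            label_words = sum(self.word_counts[label].values())
            for w in words:
                score += math.log(
                    (self.word_counts[label][w] + 1)
                    / (label_words + len(self.vocab))
                )
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Sanitized placeholder data standing in for removed vs. allowed posts:
clf = TinyTextClassifier()
clf.train("praise the group join the fighters", "violating")
clf.train("support the fighters join now", "violating")
clf.train("news report about the attack", "benign")
clf.train("analysis of the conflict in the news", "benign")
print(clf.predict("join the fighters"))  # → violating
```

The feedback loop the post mentions corresponds to calling `train` on each newly reviewed example, so the word statistics, and therefore the predictions, keep improving.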

Removal of terrorist groups

We know from studies of terrorists that they tend to radicalize and operate in clusters. This offline trend is reflected online as well. So when we identify pages, groups, posts, or profiles that support terrorism, we use algorithms to fan out and identify related material that may also support terrorism. We rely on signals such as whether an account is friends with a large number of accounts that were disabled for terrorism, or whether an account shares attributes with a previously disabled account.
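
The fan-out step can be sketched as a breadth-first walk over a graph whose edges are shared signals. The signal names and data layout below are hypothetical; the point is only that one confirmed removal can surface a whole cluster.

```python
from collections import defaultdict, deque

def related_accounts(accounts, disabled):
    """Flag accounts linked to disabled accounts via shared signals.

    `accounts` maps an account id to a set of signals (a shared device,
    a friendship edge, a payment fingerprint, ...). Starting from the
    accounts already disabled for terrorism, walk the shared-signal
    graph breadth-first and return every account reachable from them.
    """
    by_signal = defaultdict(set)
    for acct, signals in accounts.items():
        for s in signals:
            by_signal[s].add(acct)

    seen = set(disabled)
    flagged = set()
    queue = deque(disabled)
    while queue:
        acct = queue.popleft()
        for s in accounts.get(acct, ()):
            for other in by_signal[s]:
                if other not in seen:
                    seen.add(other)
                    flagged.add(other)
                    queue.append(other)
    return flagged

accounts = {
    "acct_a": {"device_1"},
    "acct_b": {"device_1", "device_2"},  # shares a device with acct_a
    "acct_c": {"device_2"},              # linked only through acct_b
    "acct_d": {"device_9"},              # unrelated
}
print(sorted(related_accounts(accounts, disabled={"acct_a"})))
# → ['acct_b', 'acct_c']
```

Note that `acct_c` is flagged even though it shares nothing with the disabled account directly; the traversal follows transitive links, which is what makes cluster detection more powerful than pairwise comparison.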

Fake accounts

We have also gotten much faster at detecting new fake accounts created by repeat offenders. Through this work, we have been able to dramatically reduce the time that such accounts stay on Facebook. This work is never finished, though, because it is adversarial: terrorists are constantly evolving their methods too. We are constantly identifying new ways that terrorist actors try to circumvent our systems, and we update our tactics accordingly.
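
One way to think about catching repeat offenders is to score how much a new signup overlaps with previously disabled accounts. The sketch below uses Jaccard similarity over hypothetical signal sets; the real signals and scoring are not public.

```python
def signal_overlap(new_signals, disabled_account_signals):
    """Return (best score, best match) comparing a new signup's signals
    against each previously disabled account, using Jaccard similarity:
    |intersection| / |union| of the two signal sets."""
    best_score, best_match = 0.0, None
    for acct, signals in disabled_account_signals.items():
        union = new_signals | signals
        if not union:
            continue
        score = len(new_signals & signals) / len(union)
        if score > best_score:
            best_score, best_match = score, acct
    return best_score, best_match

# Hypothetical signals for accounts previously disabled for terrorism:
disabled = {
    "old_acct_1": {"device_1", "email_hash_a", "ip_block_x"},
    "old_acct_2": {"device_7", "email_hash_b"},
}
score, match = signal_overlap({"device_1", "ip_block_x", "email_hash_c"}, disabled)
print(round(score, 2), match)  # → 0.5 old_acct_1
```

A threshold on this score would decide whether the new account is queued for human review, which fits the post's point that automation narrows the search but people make the final call.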

Collaboration across platforms

Because we will not allow terrorists a foothold in any Facebook app, we have begun work on systems that let us take action against terrorist accounts across all our platforms, including WhatsApp and Instagram. Given the limited data some of our apps collect, the ability to share data across the whole Facebook family is essential to our efforts to keep all of our platforms safe.

Human experiences

Artificial intelligence cannot catch everything. Figuring out what supports terrorism and what does not is not always straightforward, and algorithms are not yet as good as people at understanding this kind of context. A photo of an armed man waving an ISIS flag might be propaganda or a recruiting effort, or it might appear in a news story. Some of the most effective criticism of extremist groups like ISIS repurposes the very material and ideas those groups promote. To understand nuanced cases like these, we need human expertise.

Reports and Reviews

Our community of Facebook users helps us by reporting accounts or content that may violate our policies, including the portion that may relate to terrorism. Our community operations teams around the world, which we are growing by 3,000 people over the next year, work 24 hours a day and in dozens of languages to review these reports and determine the context. This can be incredibly difficult work, and we support these reviewers with onsite counseling and resiliency training.

Specializing in issues of extremism and safety

In the past year we have also significantly grown our team of counterterrorism specialists. More than 150 people at Facebook are exclusively or primarily focused on countering terrorism as their core responsibility. This group includes academic experts on counterterrorism, former prosecutors, former law enforcement agents and analysts, and engineers. Within this specialist team alone, we speak nearly 30 languages.

Real threats on the ground

We increasingly use artificial intelligence to identify and remove terrorist content, but computers are not good at recognizing what constitutes a credible, real-world threat that merits escalation to law enforcement. We also have a global team that responds within minutes to emergency requests from law enforcement.

Partnership with others

Keeping terrorism off Facebook on its own is not enough, because terrorists can jump from platform to platform. That is why partnerships with others, including other companies, civil society, researchers, and governments, are so important.

Industry cooperation

In order to more quickly identify and slow the spread of terrorist content online, we joined with Microsoft, Twitter, and YouTube six months ago to announce a shared industry database of "hashes," unique digital fingerprints, for images and videos that are produced by or in support of terrorist organizations. This collaboration has already proved fruitful, and we hope to add more partners in the future. We are grateful to our partners for helping keep Facebook a safe place.
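
Conceptually, the shared database lets each member platform contribute hashes of content it has removed and check uploads against the pooled set. The class and hash values below are hypothetical, not the consortium's actual API.

```python
class SharedHashDatabase:
    """Toy model of an industry-shared database of media hashes.

    Each member platform contributes hashes of terrorist content it has
    removed; any member can then check an upload against the pooled set.
    """

    def __init__(self):
        self._entries = {}  # media hash -> set of contributing platforms

    def contribute(self, platform: str, media_hash: str) -> None:
        """Record that a platform removed media with this hash."""
        self._entries.setdefault(media_hash, set()).add(platform)

    def lookup(self, media_hash: str) -> set:
        """Return the set of platforms that have flagged this hash."""
        return self._entries.get(media_hash, set())

db = SharedHashDatabase()
db.contribute("facebook", "hash_of_removed_video")  # placeholder hash value
db.contribute("youtube", "hash_of_removed_video")   # another member saw it too
print(sorted(db.lookup("hash_of_removed_video")))   # → ['facebook', 'youtube']
print(db.lookup("unknown_hash"))                    # → set()
```

Sharing only hashes, rather than the media itself, is the key design choice: members can block known content without redistributing it.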


Governments and inter-governmental agencies also have a key role to play in convening and providing expertise that companies cannot develop or acquire independently. We have learned much through briefings from agencies in different countries about ISIS and al-Qaeda propaganda mechanisms. We have also participated in and benefited from efforts to support industry collaboration led by organizations such as the EU Internet Forum, the Global Coalition Against Daesh, and the UK Home Office.


We know that terrorists sometimes use encrypted messaging to communicate. Encryption technology has many legitimate uses, from protecting our online banking to keeping our photos safe. It is also essential for journalists, NGO workers, human rights campaigners, and others who need to know that their messages will remain secure. Because of the way end-to-end encryption (E2EE) works, we cannot read the contents of individually encrypted messages, but we do provide the information we can in response to valid law enforcement requests, consistent with applicable law and our policies.

Counterspeech training

We also believe that challenging extremist narratives online is a valuable part of the response to extremism in the real world. Counterspeech comes in many forms, but at its core it consists of efforts to prevent people from pursuing a hateful, violent, extremist way of life, or to convince them to abandon such a life and reject its ideologies. But counterspeech is only effective when it comes from credible voices. So we have partnered with non-governmental organizations and community groups to empower and amplify the voices that matter most.

Partner Programs

We support several major counterspeech programs. For example, last year we worked with the Institute for Strategic Dialogue to launch the Online Civil Courage Initiative, a project that has engaged more than 100 anti-hate and anti-extremism organizations across Europe. We have also worked with Affinis Labs to host hackathons in places like Manila, Dhaka, and Jakarta, where community leaders joined forces with technology entrepreneurs to develop innovative solutions to push back against extremism and hate online. And finally, we sponsor a global counterspeech student competition, P2P: Facebook Global Digital Challenge. In less than two years, the challenge has reached more than 56 million people worldwide through more than 500 anti-hate and anti-extremism campaigns created by more than 5,500 university students in 68 countries.

Our commitment

We want Facebook to be a hostile place for terrorists. The challenge for online communities is the same as it is for communities in the real world: getting better at spotting the early signals before it is too late. We are absolutely committed to keeping terrorism off our platform, and we will continue to share more about this work as our efforts evolve in the future.

