There is a fine line between free speech and hate speech, so what happens when hate speech is posted online? In a blog post on Tuesday, Facebook shared how it defines and enforces guidelines to keep hate speech off its social media platform.
Globally, Facebook deletes around 288,000 posts a month after they are reported as hate speech. As part of its Hard Questions series, Facebook is sharing how it handles such content, and to do that, it first has to define what, exactly, hate speech is.
Facebook says there is a difference between disagreement over politics or religion and hate speech. “Our current definition of hate speech is anything that directly attacks people based on what are known as their ‘protected characteristics’ — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease,” said Richard Allan, Facebook’s vice president of public policy for Europe, the Middle East, and Africa.
The social media platform says flagging and removing hate speech is often difficult because the line between it and free speech varies across cultures and countries. For example, some posts that are protected speech in the U.S. could result in a police raid in Germany.
When a post is flagged, Facebook considers its context, including how identical words can carry different meanings in different regions of the world. Intent is also a consideration: for example, several offensive terms that cause posts to be flagged do not result in removal when the user is referring to themselves.
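To illustrate why this kind of review cannot be reduced to simple keyword matching, consider a minimal, hypothetical sketch in Python. The term lists, region table, and self-reference check below are all invented for illustration and are not Facebook's actual rules or systems.

```python
# Hypothetical sketch: why keyword matching alone cannot decide hate speech.
# All term lists, regions, and rules here are invented for illustration;
# they do not reflect Facebook's actual moderation logic.

FLAGGED_TERMS = {"slur_a", "slur_b"}  # placeholder stand-ins for offensive terms

# The same word can be innocuous in one region and offensive in another.
REGIONAL_TERMS = {
    "region_x": {"slur_c"},  # offensive only in this hypothetical region
}

SELF_REFERENCE_MARKERS = {"i am", "i'm", "we are", "call me"}

def review(post: str, region: str) -> str:
    """Return a rough triage decision for a reported post."""
    text = post.lower()
    terms = FLAGGED_TERMS | REGIONAL_TERMS.get(region, set())
    if not any(term in text for term in terms):
        return "no flagged terms: leave up"
    # Intent matters: a term used self-referentially is often not an attack.
    if any(marker in text for marker in SELF_REFERENCE_MARKERS):
        return "flagged term, but self-referential: escalate to human review"
    return "flagged term directed at others: remove pending review"

if __name__ == "__main__":
    print(review("I'm reclaiming slur_a for myself", region="region_y"))
    print(review("Those people are slur_a", region="region_y"))
```

Even this toy version makes the underlying problem visible: the same string can warrant removal in one post, region, or speaker context and be acceptable in another, which is why human reviewers remain central to the process.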
Facebook says it is working on an artificial intelligence solution, but that technology is still a long way off. Community reporting remains one of the primary ways the platform identifies hateful content, and the company will add 3,000 people to its review team before the end of the year. Facebook's discussion comes on the heels of its announced partnership with Microsoft, Twitter, and YouTube to fight terrorism online, and the inside look at the process of flagging hate speech follows the alleged leak of the company's content-removal guidelines last year.
“If we fail to remove content that you report because you think it is hate speech, it feels like we’re not living up to the values in our Community Standards. When we remove something you posted and believe is a reasonable political view, it can feel like censorship,” Allan said. “We know how strongly people feel when we make such mistakes, and we’re constantly working to improve our processes and explain things more fully.”
The in-depth look at the policy is part of Facebook's Hard Questions series, which asks users to share input and suggestions for improvement at hardquestions@fb.com.