Targeted ads on Facebook allow businesses to reach specific audiences, but now the platform is apologizing after a nonprofit organization demonstrated how the feature could be used to target users who described themselves with phrases like “Jew hater” and other user-generated slurs. Facebook disabled the targeting option entirely last week after ProPublica shared a screenshot showing a potential ad aimed specifically at users with antisemitic views, and the network is now tightening its advertising standards and adding more human oversight as a result.
“Hate has no place on Facebook – and as a Jew, as a mother, and as a human being, I know the damage that can come from hate,” Sheryl Sandberg, Facebook’s chief operating officer, wrote in a post on Wednesday, September 20. “The fact that hateful terms were even offered as options was totally inappropriate and a fail on our part. We removed them, and when that was not totally effective, we disabled that targeting section in our ad systems.”
Targeted advertising lets businesses choose their audience; a wedding photographer, for example, can show an ad only to newly engaged couples living in a particular region. The problem is that some of the targeting data comes from Facebook profile fields, and users can type whatever they want into those fields.
In the example ProPublica shared, some 2,274 people had listed “Jew hater” as their field of study in the education section of their profiles. Because enough users manually typed that phrase into the field, it surfaced as a selectable category when advertisers searched for demographics to target.
ProPublica said the audience for that single term was too small for Facebook to approve an ad, but its reporters found several other categories by searching “Jew h,” including “how to burn Jews.” According to the organization, typing “Hitler” even prompted the auto-suggest feature to offer the category “Hitler did nothing wrong.”
Facebook says that targeting ads based on race, ethnicity, and national origin, as well as factors like religion, sexual orientation, and disability, has always been against its advertising policies, but the platform is now clarifying those guidelines and stepping up enforcement of violations. More human oversight will also be added to the process, on top of the artificial intelligence software Facebook began testing earlier this year.
The social network says it has already reinstated 5,000 targeting terms that meet its community standards and will continue to manually review any new targeting options generated by user-entered phrases. Finally, Facebook says it is creating a program that lets users report abusive ads.
This isn’t the first time a study by ProPublica, an organization that aims “to expose abuses of power and betrayals of the public trust by the government, business, and other institutions,” has resulted in policy changes. A year ago, the same organization bought an ad that violated housing discrimination laws by excluding specific ethnicities from seeing it. Facebook later removed the ability to exclude ethnicities in housing, credit, and employment ads.