
Study: Facebook is skimping on moderation, and it’s harming the public


A new report from the New York University Stern Center for Business and Human Rights alleges that Facebook and other social media companies (Twitter and YouTube are also mentioned specifically) are outsourcing too much of their moderation to third-party companies, resulting in a workforce of moderators who are treated as “second-class citizens,” doing psychologically damaging work without adequate counseling or care.

Most disturbingly, the report points out how a lax attitude toward moderation has led to “Other harms — in some cases, lethal in nature … as a result of Facebook’s failure to ensure adequate moderation for non-Western countries that are in varying degrees of turmoil. In these countries, the platform, and/or its affiliated messaging service WhatsApp, have become important means of communication and advocacy but also vehicles to incite hatred and in some instances, violence.”


The report makes a number of suggestions for how social media platforms can improve moderation, the most dramatic of which is bringing moderators on as full-time employees with salaries and appropriate benefits (including proper medical care).

In response to the study, a Facebook spokesperson said content moderators “make our platforms safer and we’re grateful for the important work that they do.”

“Our safety and security efforts are never finished, so we’re always working to do better and to provide more support for content reviewers around the world,” the spokesperson said. Facebook did not address the specific recommendations of the study.

YouTube said it has hired 10,000 people across the globe to moderate content, calling moderation a critical part of its enforcement system. Moderators there are not allowed to review content for more than five hours per day and are offered training and wellness events like yoga classes and mindfulness sessions, according to YouTube.

A Twitter spokesperson told Digital Trends: “Twitter has made great strides to support teams engaged in content moderation, which is a pivotal part of our service. We continue to invest in a combination of our global teams and use of machine learning and automation so we can appropriately scale our work to support the public conversation.”

Digital Trends spoke to the author of the report, Paul M. Barrett, the deputy director of the NYU Stern Center for Business and Human Rights, to discuss his findings and their implications for the future of social media. (This interview has been edited for clarity.)

Digital Trends: To start, how did you get involved in this project?

Paul Barrett: I decided to take on this project because we’ve looked at the issue of outsourcing in a number of industries, chiefly in the apparel industry, and what the consequences of that use of outsourcing are. And I thought it would be interesting to assess that in connection with the social media industry, where I think the use of outsourcing is less well understood. And I’m interested in content moderation, because to a greater degree than I think most people really imagine, content moderation is really one of the central corporate functions of a business like Facebook. And therefore, it makes it somewhat anomalous or curious that Facebook and its peers hold this activity at arm’s length and marginalize it.

Is it well known that these companies rely mostly on outsourced moderators? Are they open about that or do they try to avoid revealing it?

I think they are somewhere between open and secretive about it. When Mark Zuckerberg has talked over the last couple of years about the vast expansion of human resources devoted to content moderation, he tends not to mention the fact that the overwhelming majority of these people are working for other companies, third-party vendors that Facebook contracts with.

So they don’t go out of their way to emphasize it. If you can get them to sit down and talk about it on the record, they will, of course, concede that, yes, it is outsourced, but they really don’t want to get into the details. They don’t want to give specific numbers. And, generally speaking, I think it’s fair to say there’s a great deal of reluctance to talk about this.

And I think all of that is indicative of their discomfort with the fact that they’ve made this into a peripheral activity when they know that it’s actually central to keeping their business going.


How much oversight does Facebook exercise over these moderation facilities? Do they stay mainly hands-off?

Well, when I asked that question, which is a good and natural question, I got two answers. One, the third-party vendors — or, as they call them, the partners — direct the activity on the production floor, as it’s called. So you need to go to them if you’re going to seek out details of exactly how things are run. And two: “But we hold them to the highest standards and we have detailed contracts that have all kinds of requirements in them!”

In Facebook’s case, as of 2019, they are supposedly doing independent audits of this activity. When I asked for the results of the audits, they said they weren’t prepared to share them. So they play it both ways. It’s primarily the responsibility of the third-party contractors, but they hold them to the highest standards. I don’t know what to make of that dichotomy, beyond the fact that if they wanted to supervise this activity in a direct and straightforward way, they would bring more of it, or all of it, in-house.

Did you get the sense that their decisions about moderation were more motivated by making money or just not understanding the importance of moderation until it became a big public issue for them?

I think cost savings has been a major driving force behind the move to outsource this activity in the first place. In Facebook’s case, that was back circa 2009 to 2010, as the company’s growth was really taking off and the amount of moderation they had to do was just completely overwhelming. They had small in-house teams working on it, and rather than making the bold decision of “We’ve got to keep control over this, we’ve got to make sure quality is maintained, so we’re going to make this a function that we really deal with in-house, the same way we do with our engineering and product teams and then our marketing teams and so forth” … [they didn’t].

But I think there’s another factor sitting alongside cost, which makes it a somewhat more complicated proposition. And that is a psychological factor, that content moderation is just not seen as being one of the sort of elite aspects of Silicon Valley business culture. It’s not engineering. It’s not marketing. It’s not the devising of popular, new products. It’s this very nitty-gritty, at times debilitating activity that’s not, by the way, any kind of direct profit center … it’s a cost, not a revenue generator.

And for all those reasons, I think the people who run these companies can’t really see themselves anywhere close to this activity and are more comfortable holding it at a distance. And that’s very hard to pin down, and if you lay it out like that, people will say, “Oh, no, we understand how important it is.”

But I stand by that assessment that there’s just a difference, a qualitative difference in the kind of activity that content moderation requires as opposed to what these companies are generally eager to be involved in.

The problem is that content moderation is different from cooking up the nice lunches that are on offer at Facebook, or from providing security or janitorial services, which are activities that you really can say are not part of Facebook’s core competency. And it’s understandable that they bring in companies that specialize in those services. Doing content moderation the right way is part of the core business of Facebook.

I’m curious if you have any thoughts about whether the government should be involved in forcing changes on Facebook. Both Trump and Joe Biden have harped on Section 230 lately and want to strip Facebook of the protections it gets under that law.

Personally, I think that that approach is … in Biden’s case I would describe it as kind of a gimmicky response to understandable unease with the size and influence of these several social media giants. Now in Trump’s case, I think it’s something else. I think it’s very direct retaliation and an effort to sort of shut the companies down or harm them in a much more retaliatory sense. But I don’t think that getting rid of Section 230 and having social media companies be liable for everything that users put on the sites makes a lot of sense. And I think it would quickly snuff out some of the good aspects of Facebook, the way people can use it to communicate, to express themselves in a very ready fashion. I mean, if you got rid of Section 230, you’d end up with a much, much smaller site where communication moves much more slowly, because the site would have to preemptively check almost everything that went up.


You recommend that Facebook bring moderators in as full employees and double the number of moderators. Beyond that, is there anything Facebook should reconsider philosophically when it comes to moderation, such as its views on what constitutes violent content?

You’re putting your finger on an important distinction, which is the high-level policy decisions: How do we define hate speech, or do we mark the president’s latest post when he’s talking about voting practices or making seemingly incendiary remarks about shooting protesters? Those high-level policy decisions are and will be made by the senior-most people at the companies. Meanwhile, on a parallel track, the routine day-in, day-out activity of content moderation continues. So I think it’s important to draw that distinction and see that you need to continue to debate the big, big questions of the day (do we ever slow down, comment on, or, in extreme cases, maybe remove comments by the president of the country?) on one side, and on the other side is a set of issues that are less philosophical and more just operational. How do we treat people who are doing this work? Are they employees, or do we treat them as outsiders who we deal with only through an intermediary, and so forth?

Which platforms do you think have taken the best approach to moderation so far?

It’s hard to say. Historically — and I think still today, very recent events notwithstanding — Twitter has taken the most laissez-faire approach to moderation and the subset of moderation that is fact-checking. YouTube historically has had big problems with conspiracy theories like Pizzagate and QAnon and the activity around them. They seem to have gotten a bit of religion on those subjects and are being more aggressive about trying to take down some of those types of things. Facebook has more formalized procedures for these things and has a much more systematic fact-checking operation, even though it’s still very much inadequate to the task. So Facebook has done the most, but it’s not as if they’ve solved the problem.

Is there anything you’d like to close on?

I think an important aspect of all this is how the marginalization of content moderation has had a particular effect in the developing world and has contributed to the difficulties that Facebook, in particular, has had with its platform being misused and that misuse leading to real-world violence in countries like Myanmar, Sri Lanka, India, Indonesia and so forth.

I think it’s important to connect those things to the inadequate attention that the company has paid to those countries historically. It’s linked to the inadequate attention that they’ve paid to content moderation. And I think if it were a function that was part of Facebook proper and seen that way — and the stature and status of the activity and the individuals were raised — that those kinds of problems would be much less common. Now, they have made some progress in that regard. They have added content moderators in some of those countries, but I think they still have a long way to go.

Will Nicol
Former Digital Trends Contributor