Sarah Katz’s eyes darted around the Palo Alto, Calif., coffee shop.
“Am I OK to speak here?” she said, not wanting to offend anyone within earshot with what she was about to describe. “I don’t want to, like, bother people.”
Katz is a 27-year-old self-described former “spam analyst” who worked on contract with Facebook in 2016. She spent her days scanning flagged content, deciding whether posts met Facebook’s standards and should be kept as is on the platform or were so disturbing that they should be deleted.
“Primarily pornography, sometimes bestiality, child pornography,” she said, as she described the worst of the up to 8,000 posts she scanned every day.
Some stuck with her.
“There was a girl around 12 and a little boy, like nine, and they were standing facing each other and they didn’t have pants on. And there was someone off-camera who spoke another language,” she said.
“So he’s probably just telling them what to do. So that was disturbing.”
Katz was part of one of the fastest-growing entry-level jobs in Silicon Valley: content reviewer. Twitter, YouTube and Facebook are all fighting to rid their sites of ever-growing amounts of toxic content.
Facebook began as a site for university students, but has grown into the largest social media platform in the world. With that growth comes huge challenges, said James Mitchell, director of risk and response at Facebook headquarters in Menlo Park, Calif.
“One of the big changes we saw was how the content became substantially more global in nature, and we began seeing substantially more types of abuse on the platform and substantially greater volumes. And we really had to grow and scale our teams to be able to combat that,” he said.
“The world is changing around you, and the way people are using the product is changing,” he added.
“So that means you always have this evolving process of trying to figure out the best ways to keep the platform safe.”
Consider this gamut of troubling content:
- A United Nations report found Facebook “substantively contributed to the level of acrimony and dissension and conflict” during the Rohingya crisis in Myanmar.
- The immediate aftermath of Philando Castile’s shooting by a Minnesota police officer was broadcast on Facebook Live by his girlfriend.
- Student survivors of the Parkland shooting, such as David Hogg, were portrayed as “crisis actors” in fake posts.
- Alek Minassian, the suspect in the Toronto van attack in April that killed 10 pedestrians and injured 16, allegedly posted about an “Incel Rebellion” before the incident. Facebook later shut down his account.
- The Russian propaganda group Internet Research Agency was accused of using trolls on the platform to influence the U.S. election.
While artificial intelligence can catch many of the posts created by fake accounts, humans are still key to making tricky ethical decisions.
Facebook had 4,500 people doing the job last year; 7,500 do it now. The company plans to grow the team responsible for safety and security to 20,000 this year, many of whom will be content reviewers.
Much of the work is contracted out to third-party partners that are staffing up in places such as India and the Philippines.
Facebook reviewers work around the world and in many languages; the idea is to have people who understand local cultural differences and norms. The Asia-Pacific region is the largest source of new Facebook users.
A new documentary, called The Cleaners, shows the toll the work takes on reviewers at a third-party company in Manila. One reviewer said he had watched “hundreds of beheadings.” Another said she’d go home thinking about pornography after seeing so much of it at work.
It’s unclear what kind of support these outsourced workers get, though Facebook said all employees who are reviewing content get “wellness breaks,” training videos and psychological help.
“We try to ensure that everybody gets and has resources for psychological counselling,” said Mitchell. “We think about the wellness of people that are working on these issues.
“The reality is they know there is value that they’re adding for people on the site. They know they are preventing bad actions from happening to people. If one of the things you do is review live videos for suicide and self harm, you actually have the ability to potentially save a life.”
But Mitchell wouldn’t say how many of the staffers doing the work are hired through third-party partners, or where they are based.
“They’re hiding the debate,” said The Cleaners filmmaker Moritz Riesewieck at a recent Toronto screening.
“They’re hiding the dilemma they are facing in building these platforms, and not being responsible for what goes on these platforms.”
Sarah Roberts, a UCLA assistant professor who is writing a book on the topic, said this is the “unpleasant underbelly” of the social media platform.
“I mean, we are talking about billions of posts per day when it comes to Facebook. We’re talking about 400 hours of video content per minute, 24/7,” she said.
“So this amount is vast. But really, even 20,000 workers — I mean, how can they reasonably adjudicate a platform of billions of users?”
In the first quarter of 2018, Facebook pulled down 21 million pieces of content containing adult nudity or pornography and 3.5 million depicting graphic violence, the majority of which were flagged by artificial intelligence.
For hate speech, technology doesn’t quite do the trick: 2.5 million pieces were pulled down in the same period, mostly by human reviewers.
“From the perspective of content reviewers, we have always played that policeman role, and so the dynamic nature of content that’s being shared on our platform will continue to create challenges for us,” said Mitchell.
“The other big wildcard is just the way the world continues to evolve. So much of what we do is dependent on what people are sharing, and that’s changing every few months.”