Facebook, YouTube, Twitter send home thousands of human moderators

Facebook, Google and Twitter logos are seen in this combination photo from Reuters files. REUTERS/File Photos

SAN FRANCISCO – Early this month, most Facebook employees packed up and readied to work from home as the novel coronavirus spread around the world. Despite a company-wide mandate, however, the social networking giant had not figured out how to conduct its most sensitive work remotely: removing pornography, terrorism, hate speech and other unwanted content from across its site.

The people who do that sensitive work – nearly 15,000 contractors at 20 sites globally – continued to come to the office until March 16, when public pressure, internal protests and quarantine measures around the world pushed Facebook to take the drastic step of shuttering its moderation offices.

But Facebook’s decision to place that army of moderators on paid leave creates another challenge, forcing the company to police disinformation, medical hoaxes, Russian trolls and the general ugliness of the Internet without them.

While Facebook, YouTube, Twitter and other companies have long touted artificial intelligence and algorithms as the future of policing problematic content, they’ve more recently acknowledged that humans are the most important line of defense. Those contractors, who are paid a fraction of what full-time workers earn, spend hours a day reviewing material flagged as illegal or disturbing, removing posts that cross the line and often suffering psychological harm from the exposure.

Still, Facebook chief executive Mark Zuckerberg said on a media call Wednesday that the company will be forced during the pandemic to rely more heavily on artificial intelligence software to make those judgment calls. The company also will train full-time employees to devote “extra attention” to highly sensitive content, such as anything involving suicide, child exploitation and terrorism. Users should expect more mistakes while Facebook triages the process, he said, in part because only a fraction of the human moderators will be involved and because software makes blunter decisions than humans.

Zuckerberg acknowledged that the decision could result in “false positives,” including removal of content that should not be taken down.

The shift will “create a trade-off against some other types of content that may not have as imminent physical risks for people,” he said. Still, he said he hopes to train more people as quickly as possible because he is “personally quite worried that the isolation from people being at home could potentially lead to more depression or mental health issues, and we want to make sure that we are ahead of that in supporting our community.”

Zuckerberg’s admission reflects the complex choices and trade-offs Silicon Valley giants are making in the face of a mounting global health crisis. The companies can protect workers and comply with local stay-home orders. But that choice could jeopardize the safety of the billions of users around the world, many of whom are quarantined at home, on the Internet all day and exposed to more potentially disturbing material than before.

YouTube also announced temporary plans last week to rely more heavily on automated systems so it could reduce the number of people in its offices, a change the company warned could mean a slower appeals process for video creators and more unreviewed content being barred from search or its homepage. The same day, Twitter said it would do the same. Because its automated systems may make mistakes, Twitter said it will not permanently suspend accounts during this period, and it is also triaging reports to prioritize potentially more harmful violations.

Facebook, YouTube, Twitter and other social media companies have faced significant challenges in policing content, from the live video posted during the Christchurch, New Zealand, shootings last year to disinformation campaigns by Russian trolls during the 2016 presidential election. The decision to send workers home comes during a presidential election year, when foreign and domestic users are actively trying to shape public debate using disinformation that might be spotted only by a human eye.

That pressure is heightened as disinformation regarding the novel coronavirus surges. On Facebook-owned WhatsApp, chat groups are spreading unverified information about flights, hotels and schools in connection with the virus, as well as misinformation about potential government crackdowns and how the disease is spreading. On Facebook, a fake letter circulated about an outbreak in Los Angeles, and posts pushing fake cures and false claims that the U.S. government created the coronavirus were pervasive.

It’s not just social media: Some consumers are getting fake texts to their phones warning of a nationwide lockdown.

After the 2016 presidential election, Facebook hired thousands of third-party moderators in the Philippines, India, Dublin and the United States to police the site and shore up its reputation. The moderators, who work for outsourcing companies such as Accenture and Cognizant, are contractors and typically receive less pay and fewer benefits than Facebook employees.

The decision to send the humans home and rely more on technology to police the sites concerned researchers.

“They haven’t made enough leaps and bounds in artificial intelligence to take away the best tool we have: human intelligence to do the discernment,” said Mary Gray, senior principal researcher at Microsoft Research and co-author of “Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass.”

“This is a mess.”

“We’re focused on keeping people safe and informed while making sure they can share information and connect with one another during this crisis,” said Facebook spokesman Drew Pusateri. “But as we said in a recent update on our content review, we anticipate that some mistakes will occur as we adjust to a modified workforce with a heavier reliance on automation.”

Facebook is making the change in large part because it considers the work too sensitive for third-party moderators to do from home; among other reasons, it involves reviewing people’s private Facebook accounts. The company also acknowledges that it is a traumatic job and that workers would receive less support at home. Typically, moderators work in call centers where every movement, from breaks to keystrokes to judgments over content, is heavily managed and monitored.

The company says that may change as the coronavirus emergency evolves.

Facebook offered more clarity on its plans in a company blog post late Thursday. When users report content for violating policy, they will see a message explaining that fewer reviewers are available and that Facebook is prioritizing content that poses the greatest potential harm.

“This means some reports will not be reviewed as quickly as they used to be and we will not get to some reports at all,” according to the blog post. Reducing the workforce also will alter the appeals process for users who believe their content was removed in error. People can still report that they disagree with Facebook’s decision.

Facebook will “monitor that feedback to improve our accuracy, but we likely won’t review content a second time,” the company said.

About 95% of posts involving adult nudity, terrorism, child exploitation, suicide and self-harm are taken down by algorithms before Facebook users have a chance to report them, according to the company’s latest community standards enforcement report.

But for more nuanced categories of speech, the company’s systems are often less effective. Artificial intelligence catches 16% of posts involving bullying and harassment on Facebook, meaning more than 80% of those posts are reported to the company by users. Artificial intelligence catches about 80% of hate speech.

Those numbers have led company officials to realize human judgment still is necessary to monitor more sensitive areas of speech, such as racism and political disinformation.

“I think there will always be people” making judgment calls over content, Zuckerberg said in a Washington Post interview last year.

Already there are signs of potential problems. Early last week, legitimate articles with accurate information about the virus were being removed from Facebook. Zuckerberg said it was caused by a bug in the company’s spam detection system that was unrelated to its triaging of content moderation during the pandemic. “The system is fixed, those posts are back up, and hopefully we won’t have that issue again anytime soon,” he said on the media call last week.

As Facebook moves to a more technology-driven response to policing content, it will prove a major test for the industry, said Jeff Kosseff, a cybersecurity professor and author of “The Twenty-Six Words That Created the Internet.”

“It will tell us a lot about the state of automated moderation,” Kosseff said. “We don’t really know what exactly tech companies are doing and how effective it is,” although the companies have become more transparent.

UCLA professor Sarah Roberts, author of “Behind the Screen: Content Moderation in the Shadows of Social Media,” said Facebook’s hand may have been forced. In Manila, the Philippine capital, where Facebook indirectly employs thousands of content moderators, the government enacted a citywide quarantine.

Regardless of Facebook’s motivation, Roberts said the experience will reveal how much human reviewers impact our collective well-being and experience of the Internet. It may even shift the Silicon Valley ideology that places a primacy on problem-solving through engineering.

“We actually might not be able to code our way out of coronavirus,” she said.
