Watch and Earn: Facebook Seeks Viewers Who Have a Critical Eye


By Patricio Chile

Algorithms and artificial intelligence can only do so much to patrol the internet for offensive content. Nothing replaces a human being when it comes to making judgment calls about what’s appropriate and safe to share with the public, sources familiar with the process told Bloomberg BNA.

Facebook was confronted with another tragedy May 13 after a man in Tennessee set himself ablaze on Facebook Live. This came less than a month after footage of an elderly Cleveland man’s murder was posted on the platform.

Following the Cleveland incident, the company announced it would hire 3,000 people worldwide to monitor reports from users and remove content that violates its terms of service. They have a lot to monitor: Facebook videos drew about 7.9 million views in March 2017 alone, according to Bloomberg data, roughly a million more than a year earlier. Views spiked to about 10.2 million during the Thanksgiving and Christmas holiday season, the data show.

Human moderators are necessary because “all the headscratching and quandaries will be in gray areas,” Ken Martin, the former senior vice president of program practices at CBS, told Bloomberg BNA May 15.

Hemanshu Nigam, founder and chief executive officer of SSP Blue, a cybersecurity company based in Santa Monica, Calif., agrees.

“Technology cannot process information like the human mind, so you still need humans,” Nigam told Bloomberg BNA May 15. “It’s going to be a combination of artificial intelligence, algorithm technology and human moderation.”

The more content there is online, the more “bad actors” there will be, Nigam, a former security chief at MySpace, said.

Complaints Reach Millions Per Week

Facebook’s new hires will be joining the 4,500 current members of the company’s community operations teams, a spokeswoman told Bloomberg BNA in an email May 15. The company declined to comment further on the hiring process but said it uses a combination of user reporting, human review and automation to enforce the site’s content standards.

The human moderators review millions of reports every week in more than 40 languages, the spokeswoman said. Moderators are available around the clock, and they cover all time zones, she said.

Twitter Inc. and Snap Inc., the company behind Snapchat, have also come under fire for hosting content depicting violent acts and hate speech.

Community guidelines for Twitter and its live streaming app Periscope ban all threats and promotion of violence, Liz Kelley, a spokeswoman for Twitter, told Bloomberg BNA in a May 16 email. When users report violations, the reports go to a human moderator, but nothing is removed until a violation is confirmed, she said.

“Any content that’s removed is done by a member of our teams who are available 24/7,” Kelley said.

Snap didn’t immediately respond to Bloomberg BNA’s requests for comment on how it responds to content violations.

Policing the Internet

The amount of content is growing rapidly, and moderators may find themselves outmatched, Shontavia Johnson, director of the Intellectual Property Law Center at Drake University, told Bloomberg BNA.

People have different opinions about what is inappropriate, she said. A gun in a video may or may not be inappropriate based on the context, she said.

“Maybe there are internal standards, but human judgment is fluid,” Johnson said.

Standards are also hard to pin down because technology is evolving, and new platforms may require a different set of rules in the future, she said. But keeping users, moderators and tech companies engaged will help ensure progress on the matter, she said.

“We have a unique opportunity to learn from this stuff,” she said.

Nigam, whose company provides content moderators for tech companies, said moderators have to be well trained to handle the amount and type of content that is out there.

Moderators are hired from diverse backgrounds—from online commerce to computer engineering—and are screened to ensure they’re knowledgeable about various cultural norms and global events, he said.

“Different people bring different perspectives,” he said.

“You want the broadest spectrum of experience possible,” Martin said when asked about hiring people to maintain standards. A curious person who “keeps up to date and knows what’s going on in the world” is ideal for these positions, he said.

Martin said the standards department at CBS frequently hired from within the company—staffers who had worked as line editors or program coordinators and were knowledgeable about company and industry standards for permissible content.

Viewing Violence Takes a Toll

Providing support for these workers is also important. Moderators at SSP Blue are offered daily and weekly meetings where they can express their feelings about the content they view, Nigam said.

Facebook is hoping to foster a similar community standard, with moderators doing more than simply removing content. When a girl posted a Facebook video threatening suicide in April, Facebook staff kept the video up and contacted law enforcement, who were able to save her life, Facebook CEO Mark Zuckerberg said during the company’s earnings call May 3.

“So a lot of what we’re trying to do is not just about taking the content down but also helping people when they’re in need on the platform, and we take that very, very seriously,” Zuckerberg said.

To contact the reporter on this story: Patricio Chile in Washington at pchile@bna.com

To contact the editors responsible for this story: Peggy Aulino at maulino@bna.com; Terence Hyland at thyland@bna.com; Christopher Opfer at copfer@bna.com

Copyright © 2017 The Bureau of National Affairs, Inc. All Rights Reserved.
