Terrorism Bill Puts Social Media Companies in Tough Spot


By Jessica DaSilva

Jan. 5 — Social media websites do not have the technological ability to comply with a bill that would require technology companies to report on “actual knowledge of terrorist activity,” according to industry analysts.

The bill, introduced by Sen. Dianne Feinstein (D-Calif.) in December 2015, spans a little more than two pages. It states that it is designed to mirror the existing law governing reporting requirements for electronic communication service providers in regard to child pornography—a comprehensive statute that far exceeds the detail of Feinstein's bill, these analysts say.

No Tech Solution

Feinstein's bill is “technically and legally very different,” said Emma Llanso, director of the free expression project at the Center for Democracy and Technology.

Chief among the differences, Llanso said, is that a purely technological solution exists to bring companies into compliance with the reporting requirements for stored child pornography.

Law enforcement agencies share sophisticated photo data called “hash algorithms” with companies that are required to report, Llanso explained. The companies then use the algorithms for ongoing searches of their users' content for image matches, she said.

Those image matches can include facial features, body characteristics, or a person's size, explained David LeDuc, senior director of public policy for the Software & Information Industry Association. The SIIA is a trade association representing digital companies affected by Feinstein's bill.
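For illustration only, the reporting regime Llanso and LeDuc describe reduces, at its simplest, to comparing a digest of uploaded content against a shared list of known digests. The Python sketch below is a minimal, hypothetical version: it uses an ordinary cryptographic hash and placeholder values, whereas production systems rely on perceptual hashing that tolerates resizing and re-encoding, which an exact-match digest does not.

    # A minimal, hypothetical sketch of hash-based matching against a shared
    # set of known-image digests. Real reporting pipelines use perceptual
    # hashing and are far more sophisticated; the values here are placeholders.
    import hashlib
    from pathlib import Path

    # Hypothetical digests shared by law enforcement (placeholders, not real data).
    KNOWN_HASHES = {"3a1f9c0e..."}

    def sha256_of_file(path: Path) -> str:
        """Compute a SHA-256 digest of a file's bytes, read in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def matches_known_content(path: Path) -> bool:
        """Return True if the file's digest appears in the shared set."""
        return sha256_of_file(path) in KNOWN_HASHES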

No such technological answer exists for Feinstein's bill, both Llanso and LeDuc said.

What the bill is asking these companies to do is a “fine-grained assessment of political views,” Llanso said.

Current Moderation

Most of these companies already engage in content moderation to enforce compliance with their terms of service, Llanso said. Those terms typically bar users from engaging in hate speech, promoting violence, or making violent threats. As a result, companies already regularly review content for terrorist activity, she said.

LeDuc said that reviewing speech for terms-of-service compliance is not done with technology but involves “many human beings and a lot of continuous monitoring by actual people.”

“Given the challenges of wide ranges of languages, slang, and dialects, it's simply not practical that there's any technological approach to monitor posts rising to the level of terrorist activities,” he said. “With respect to speech, sarcasm or irony or things that are satirical or mocking could be brought in under the guise of terrorism.”

For that reason, Llanso said she is concerned about the lack of details in the bill, which does not address consequences for companies' failures to report, consequences for users who are reported, how the government will treat false threats, and more.

“With no definition of ‘terrorist activity,' how do you find it?” Llanso asked.

No Additional Reporting

Tom Mentzer, communications director for Feinstein, told Bloomberg BNA in an e-mail that the bill does not require companies to monitor speech, so any technological concern “isn't an issue.”

“Companies already monitor their networks and routinely remove posts, delete accounts, etc. when they come across terrorism-related material,” Mentzer wrote. “They do this because they have rules that prohibit this type of material on their networks. That's what they would be required to report. If they come across it, they have to report it.”

Asked whether social media companies currently fail to report actual knowledge of terrorist activities, Mentzer linked to a press release on Feinstein's website listing two examples of failures to report.

“Prior to his death, Syrian-based terrorist Junaid Hussain contacted and radicalized individuals and incited attempted terrorist attacks in the United States and the United Kingdom,” the release states. “He regularly used social media sites like Twitter and switched among multiple user accounts to continue posting after Twitter shut down individual accounts.”

The release also mentions one of the San Bernardino, Calif., shooters' Facebook posts announcing her allegiance to the Islamic State. Facebook removed the account but did not turn over the information to authorities, the press release states.

Potential Liability?

LeDuc said part of the problem with the bill is that companies see it as creating liability for those that fail in their efforts to comply.

Llanso said this fear of liability could have two consequences: large companies will try to comply and will need to dedicate more resources to do so, while small companies without those resources may become willfully blind so they never reach the intent element of the statute.

Llanso added that even if companies used keyword filtering or automated processing to “narrow down the fire hose” of information, they would still collect an overwhelming amount of data for private companies to sift through.
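As a rough illustration of the kind of automated narrowing Llanso mentions, the hypothetical Python sketch below flags posts containing listed keywords for later human review; the terms and post format are invented for the example, and, as she notes, every match would still require a person to judge context.

    # A hypothetical sketch of keyword filtering used only to narrow the stream
    # of posts; everything it flags still goes to human reviewers, because a
    # keyword match cannot distinguish sarcasm or news commentary from a threat.
    FLAG_TERMS = {"attack", "bomb"}  # illustrative placeholder terms

    def flag_for_human_review(posts: list[str]) -> list[str]:
        """Return only the posts containing at least one flagged term."""
        flagged = []
        for post in posts:
            words = {word.strip(".,!?\"'").lower() for word in post.split()}
            if words & FLAG_TERMS:
                flagged.append(post)
        return flagged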

Additionally, creating those algorithms presents a new risk of biased searches that focus on language Muslim users might use, she said. That in itself could allow domestic terrorists, such as white supremacist groups, to evade scrutiny.

Yet any initial screening would still result in human review of whatever data it produced, Llanso said.

“Speech with any hint of nuance or context dependence will need humans involved” to determine whether specific content is “an isolated statement from a user or an actual terrorist plot,” Llanso said.

Qualified Investigators?

Furthermore, LeDuc said that employees of private Internet communication companies are not qualified to suss out actual terrorist threats. National security professionals are better equipped to investigate speech, and companies do not have access to such well-trained individuals, he said.

“This is essentially putting companies in the position of being judges on free speech,” LeDuc said. “At the end of the day, it's fundamentally a limit on free speech.”

The discussion around the bill largely lacks the data necessary to understand how terrorists use social media, said Jeff Kosseff, an assistant professor in the U.S. Naval Academy's Cyber Science Department who wrote an article analyzing the bill. Without that data, it will be difficult to determine what kind of regulation is within Congress's power, said Kosseff, who clarified he was not speaking on behalf of the Naval Academy.

Llanso compared the risks of a government and private company partnership to the U.K.'s Internet Referral Units currently under fire from the European Union. Those units have law enforcement officers reviewing speech on companies' websites and reporting terms-of-service violations when the officers believe individuals are engaging in terrorist activities.

This circumvents the protection of fundamental rights by allowing the government to recommend speech for censorship, she explained.

“It calls into question the democratic process and the rule of law system promoting fundamental rights when you see democratic governments moderating speech,” she said.

To contact the reporter on this story: Jessica DaSilva at jdasilva@bna.com

To contact the editor responsible for this story: C. Reilly Larson at rlarson@bna.com