INSIGHT: The Communications Decency Act Safe Harbor: Pendulum Swings In 2018

By Christopher A. Cole and Gabriel M. Ramsey, Crowell & Moring LLP

The Communications Decency Act of 1996 (CDA), 47 U.S.C. Section 230, provides a safe harbor to internet service providers and platforms, exempting them from liability based on the speech and content of their users. The intent of the safe harbor was to protect service providers who assumed an editorial role with regard to user speech or content and who would otherwise have been treated as publishers, legally responsible for libel and other torts committed by their users. Under Section 230, courts are precluded from “entertaining claims that would place a computer service provider in a publisher’s role.” Zeran v. America Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997). The CDA protects interactive computer service providers from liability arising out of such speech or content, even if it is defamatory or otherwise tortious. Section 230 defines an “interactive computer service” as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server.” This covers a wide variety of online services, including online platforms, internet service providers and social media sites.

One view of Section 230 is that it promotes an open internet and protects internet service providers from the immense and often subjective task of responding to demands to alter the display of their users’ unfettered ideas and opinions. The safe harbor has always been a double-edged sword. For platforms, the task of monitoring user content and responding to demands to remove defamatory or otherwise unlawful third-party content could easily become so onerous as to be infeasible. On the other hand, for parties aggrieved by unlawful postings, it can be immensely difficult to persuade an unresponsive platform to remove even obviously wrongful or demonstrably illegal content or speech.

Over the course of 20 years, the law regarding CDA immunity became relatively settled. However, 2018 has brought some of the most notable developments in years. A number of controversial rulings and legislative changes, cutting in several directions, have disrupted the legal landscape; three key examples are discussed below. Internet platforms claiming immunity and aggrieved parties asserting platform liability alike will have to consider this recent evolution.

Hassell v. Bird: CDA Section 230 Trumps Aiding and Abetting Principles

In early July 2018, the California Supreme Court ruled in Hassell v. Bird that Yelp, Inc. could not be compelled by a court to remove defamatory third-party user reviews that had been posted to its site. Hassell v. Bird, 5 Cal. 5th 522, 2018 BL 234452 (2018). In a close 4-3 decision, the Court held that CDA Section 230 barred enforcement of the trial court’s order requiring Yelp to remove defamatory content posted by Bird, a user of its site, reasoning that forcing Yelp to remove even defamatory content “could interfere with and undermine the viability of an online platform.” This decision represents a strengthening of Section 230. But it is controversial, as the only role Yelp was being asked to play was the administrative step of removing content that a court had already determined to be defamatory.

Background
Dawn Hassell sued her former client Ava Bird for defamation after Bird gave Hassell a one-star review on Yelp, complaining that Hassell was incompetent and had wrongly withdrawn from her personal injury matter. After the suit was filed, Bird posted further comments on Yelp, stating that she was being threatened into deleting her review. Hassell’s informal requests to Yelp to remove Bird’s comments were unsuccessful. Hassell did not sue Yelp; she sued only Bird.

Bird defaulted, resulting in a judgment of defamation. The trial judge then issued an order not only against Bird but also, by its terms, against any third-party publisher of Bird’s defamatory statements, including Yelp. The order directed Yelp to remove the defamatory reviews and enjoined it from publishing any future reviews from Bird’s accounts.

Yelp filed a motion to set aside the judgment, asserting that it was immune from liability under Section 230. Yelp’s arguments were rejected by both the trial court and the California Court of Appeal. The rationale of both courts was that, because the order had been rendered against Bird, Yelp was bound as a third party through whom Bird, the originator of the defamatory speech, was acting. Yelp sought and was granted review by the California Supreme Court to resolve whether the CDA immunity that ordinarily extends to claims for injunctive relief alleged directly against an interactive service provider also applies to an injunction that binds a nonparty.

Immunity Under the CDA Extends to Implicated Website Owners Even When the Website Is Not a Named Party
If Hassell had sued Yelp directly for the defamatory content, the case would have been dismissed under Section 230 because Yelp had no part in the creation of the content. Instead, Hassell properly proceeded against Bird, the party who engaged in the defamatory speech, apparently assuming that any judgment and associated injunction against Bird could later be enforced against Yelp. The California Supreme Court held that Yelp did not have to comply with the order or assist in the removal of the adjudged defamatory content, finding that CDA Section 230 immunity applied even though Yelp was not a party. The Court appeared to find that ordering a third-party platform owner to assist in removing defamatory content would undermine the purpose of the CDA. The Court wrote, “…we must decide whether plaintiffs’ litigation strategy allows them to accomplish indirectly what Congress has clearly forbidden them to achieve directly. We believe the answer is no.” The Court found that a platform’s simply continuing to publish a user’s defamatory content after an order is obtained against the user does not constitute “aiding and abetting” under California law, and that the CDA safe harbor continues to apply even after issuance of the order.

Advocates of an open internet will tout this decision as a win for both service providers and internet users more broadly. Indeed, the dozen amicus briefs filed in the case signal its importance, and the ruling underscores the continued force of the CDA. However, unanswered questions remain, including whether the result would be different where an online platform is the only party technically or practically capable of removing defamatory content subject to a court order. Moreover, website operators are not immune from liability based on their own actions and speech, or their own contributions to defamatory speech. Thus, the circumstances under which an online platform is actively involved enough in a user’s violation of a court order to subject the platform to aiding-and-abetting liability are not wholly defined by the decision. How those lines are drawn will continue to be refined on the facts of individual cases.

National Fair Housing Alliance et al. v. Facebook: What Does It Mean to “Create” Content?

On that score, a 2018 case against Facebook, National Fair Housing Alliance et al. v. Facebook, revisits the tricky and constantly evolving question of when a platform owner acts as a co-creator of allegedly unlawful content or speech. National Fair Housing Alliance et al. v. Facebook, Case No. 1:18-cv-02689-JGK (S.D.N.Y.). As online, mobile and artificial intelligence technologies continue to develop, the line between user and creator becomes much more ambiguous.

Background
In April 2018, housing advocacy groups filed a lawsuit against Facebook, alleging that the company violates the Fair Housing Act by “creating” content for landlords and real estate brokers in the context of ad “targeting” based on sex, family status, disability, national origin, and other protected characteristics. In particular, Facebook is alleged to have created “a pre-populated list of demographics, behaviors, and interests from which housing advertisers select in order to exclude certain home seekers from ever seeing their ads.” Id., Complaint. The plaintiffs allege that, because of this ad targeting, individuals with protected characteristics are effectively excluded from housing opportunities, as they never see the advertisements or listings for certain rental or home sales.

When Is A Platform A Content Creator?
The plaintiffs’ theory is that Facebook itself collects information regarding users and their behavior on the platform, including information concerning attributes that are protected under the Fair Housing Act. Plaintiffs further allege that Facebook’s algorithms automatically use this information to create categories of users that are “pre-populated” by Facebook, and that advertisers are then provided a tool enabling them to “include” or “exclude” particular users, in a manner that withholds housing advertisements from users based on protected characteristics (such as sex or disability).

The obvious question is whether Facebook is actually the “creator” of the allegedly discriminatory content when the advertiser is the party affirmatively choosing the characteristics that determine where the advertisement is directed. Prior authority, Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, established that where a housing website required subscribers to answer specific questions and describe in their user profiles their preferences for roommates based on protected characteristics, the website could not claim Section 230 immunity. Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 2008 BL 69206 (9th Cir. 2008). The court observed that “[t]he CDA does not grant immunity for inducing third parties to express illegal preferences. Roommate’s own acts—posting the questionnaire and requiring answers to it—are entirely its doing and thus section 230 of the CDA does not apply to them.” Id., 521 F.3d at 1165, 2008 BL 69206 at 4. At the other end of the spectrum, in Chicago Lawyers’ Comm. for Civil Rights Under Law, Inc. v. Craigslist, Inc., Section 230 immunity was found to apply to the Craigslist website because there was no effort to categorize or classify users or to review or promote postings. Chi. Lawyers’ Comm. for Civil Rights Under Law, Inc. v. Craigslist, Inc., 519 F.3d 666, 2008 BL 53043 (7th Cir. 2008).

However, there are factual differences between the Roommates.com case and Facebook’s ad targeting tools. Facebook’s targeting options are available to all advertisers for any type of ad, and it is the third-party advertiser alone who chooses whether to apply those tools to a particular ad, including in a way that results in publication of a discriminatory housing ad. In other words, the ad targeting tools are general-purpose and arguably neutral, and Facebook arguably plays no part in deploying them in particular housing advertisements, or in housing advertisements at all; only the advertiser does. This is precisely what Facebook argued in a recent motion to dismiss on CDA Section 230 immunity grounds, and there is some support for the position in cases holding that CDA immunity applies to “neutral tools” that can equally be used for lawful and unlawful purposes.
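To illustrate the “neutral tools” concept, consider the following minimal sketch of a general-purpose include/exclude audience filter. This is a hypothetical example only; the function, attribute names, and user records are invented for illustration and do not reflect Facebook’s actual ad-targeting system or data model. The point is simply that the same tool behaves identically for any kind of ad, and any exclusionary effect flows from the criteria the advertiser selects.

    # Hypothetical sketch of a general-purpose include/exclude audience filter.
    # Names and data are invented for illustration; this is not Facebook's
    # ad-targeting code or data model.

    def build_audience(users, include=None, exclude=None):
        """Return users matching all 'include' attributes and none of the 'exclude' attributes."""
        include = include or {}
        exclude = exclude or {}
        return [
            u for u in users
            if all(u.get(k) == v for k, v in include.items())
            and not any(u.get(k) == v for k, v in exclude.items())
        ]

    users = [
        {"id": 1, "interest": "gardening", "has_children": True},
        {"id": 2, "interest": "gardening", "has_children": False},
    ]

    # The same neutral tool can serve an ordinary retail ad...
    retail_audience = build_audience(users, include={"interest": "gardening"})

    # ...or, solely because of the advertiser's chosen criteria, exclude a
    # protected group, which would be problematic in a housing ad.
    housing_audience = build_audience(users, exclude={"has_children": True})

    print(retail_audience)   # both users
    print(housing_audience)  # only user 2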

Shortly thereafter, however, the Department of Justice and the Department of Housing and Urban Development filed a statement of interest opposing Facebook’s motion. The government argues that Facebook’s development of ad targeting categories and their general availability (including for housing ads) are sufficient to vitiate Section 230 immunity, much as in the Roommates.com case. Almost immediately after this filing, Facebook announced that this fall it plans to remove over 5,000 ad targeting options “to help prevent misuse.” Facebook stated that it planned to do so notwithstanding that “these options have been used in legitimate ways to reach people interested in a certain product or service.” While the goal is obviously of critical importance, the difficulty of crafting a technical solution accentuates the risk of being either overinclusive or underinclusive in managing access to content.

The dispute remains ongoing, and it will be important to watch how CDA immunity unfolds in the context of the architecture of ad targeting tools, and whether Facebook’s changes will satisfy the plaintiffs and the government. The case could have far-reaching implications for a wide variety of websites and advertising business models.

Congress Steps In: Policy-Based Legislative Exclusions From CDA Section 230 Safe Harbor

The upside of incremental judicial evolution on particular sets of facts, regarding whether a website or online platform is a passive conduit or a creator, is that the law of CDA Section 230 can adapt to new factual and technical realities as they arise. However, the government has taken an increasingly active role in pushing back on CDA immunity, as the recent Facebook case shows. Even more dramatically, in the spring of 2018, Congress passed and the President signed an amendment to Section 230, carving out an exception that allows enforcement against online service providers that knowingly host third-party content promoting or facilitating sex trafficking through their websites or platforms. The provision, H.R. 1865, the “Allow States and Victims to Fight Online Sex Trafficking Act of 2017,” is commonly known as “FOSTA.” Section 230 advocates argued that the bill would spell the end of Section 230 as we know it, but, given the sex-trafficking context and the obvious effort to deploy the new tool against publishers of prostitution and sex-trafficking content, they likely picked the wrong hill on which to die.

Background
Congress intended FOSTA to create a category of responsibility for website owners over content related to sex trafficking on their sites or platforms. The law was spurred by a very public debate over Backpage.com, an online “classifieds” website alleged to be harboring, in a concentrated manner, organizers of the illegal sex trade. A number of sex-trafficking victims had brought civil actions against Backpage concerning its hosting of third-party postings, and the website had succeeded in dismissing those suits by claiming that the content was wholly that of its users and that CDA Section 230 therefore immunized the website itself.

FOSTA and the Scope of Obligations to Police Sex-Trafficking Activity
FOSTA’s preamble states: “[CDA Section 230] was never intended to provide legal protection to websites that unlawfully promote and facilitate prostitution and websites that facilitate traffickers in advertising the sale of unlawful sex acts with sex-trafficking victims.” The law removes online platforms’ immunity with respect to criminal charges under anti-sex-trafficking statutes and civil suits brought by victims against online service providers.

The policy imperative is clear, and the principle of the exception seems straightforward, at least superficially: when websites or other service providers obtain “knowledge” of content promoting or facilitating sex-trafficking activity on their platforms, they must act to remove that content. The implementation details, however, may be considerably more difficult. For large websites or platforms with millions of users, it may be extremely difficult to build architectures that manage the risk of such activity. There are also potentially subjective judgments and matters of interpretation, leaving great ambiguity as to what content actually facilitates illegal sex-trafficking activity and when the online operator has “knowledge.”

One possible outcome is that sites and services will overreact, removing entire categories of functionality, such as “personals” sections or similar messaging contexts; Craigslist, for example, removed its personals sections entirely in response to the law. Human review of content would be expensive and still subject to mistakes and ambiguity. Other online services might use automated tools in an attempt to predict and remove such content, but excluding too little or too much content is a risk with that approach as well. Moreover, given the knowledge requirement, some service providers may choose not to moderate content at all, to avoid arguments that they were aware of the illegal activity. That would seem contrary to the intent of the statute in the first place.

The Electronic Frontier Foundation recently filed suit to block enforcement of FOSTA, arguing that it forces compliance steps that necessarily censor lawful opinions or content. For example, EFF asserts that the law could be interpreted broadly enough to create liability for speech by advocates attempting to assist sex workers. EFF’s challenges are based on the First and Fifth Amendments, asserting that undefined terms such as “facilitate,” “promote,” “contribute to sex-trafficking,” “assisting,” or “supporting” are vague and ambiguous, posing serious constitutional concerns.

It remains to be seen whether this exception to CDA immunity will endure, be pared back, or be invalidated entirely. One thing the statute and the associated debate make clear, however, is that even with well-defined policy goals, it is the architectural and practical realities that govern how CDA Section 230 immunity will apply. Regardless of what happens with FOSTA or future policy-based exceptions, the courts will continue to play a very important role in defining CDA immunity on the facts of particular cases.

The Future
For advocates who represent parties seeking to remove unlawful content from social media platforms or other large interactive sites, Section 230 can prove a maddening obstacle. Even if a platform is receptive to voluntary removal (and almost all spell out the types of complaints and inquiries to which they will respond), the responsiveness of many platforms leaves much to be desired. The immunity conferred by Section 230 has, in some instances, fostered a sense of arrogance among platforms. In the worst cases, “bulletproof” hosting companies may become havens for cybercrime and other forms of criminal activity. The Hassell case illustrates the problem: despite a clear finding of defamation, the platform itself was affirmatively unresponsive.

At the same time, the increasing prevalence of algorithms that can automatically weed out objectionable and illegal content has lessened concerns about the unmanageable human review burden that once spurred CDA Section 230’s creation. Adding to this is the swirling controversy over political bots and election interference, which the major social media platforms are attempting to defuse.

Given these factors, we may well see continuing legislative efforts to pare back Section 230 immunity. If the platforms wish to avoid the restoration of causes of action that had been precluded by Section 230, they would do well to tend to the obvious, unremedied abuses.

-------

Author Information

Christopher A. Cole is the co-chair of Crowell & Moring’s Advertising & Product Risk Management (APRM) Group, a team that provides interdisciplinary solutions to companies facing challenging competitive and regulatory issues. Based in Washington, DC, Chris focuses on false advertising, unfair competition, reputation, brand disparagement, and intellectual property.

Gabriel M. Ramsey is a partner in Crowell & Moring’s San Francisco office. Gabe is a member of the Litigation, Intellectual Property, and Privacy & Cybersecurity groups, and focuses his practice on complex litigation involving intellectual property and cybersecurity.
