You might be asking: WTF is CDA?

This American law is one of the most powerful forces shaping the Internet as we know it today. The heart of Section 230 of the Communications Decency Act (“CDA 230”) is a concise twenty-six words:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Recently, the Internet has been suffering from an “infodemic,” a crisis of misinformation and disinformation spreading even faster than the COVID-19 pandemic itself. This interactive explainer will help you understand how CDA 230 influences platform accountability, misinformation, and disinformation on the Internet today.

This website is a project from the Berkman Klein Assembly student fellowship.

Let's find out where you stand on CDA 230

Based on your answers to the next six hypothetical cases, we’ll tell you if you’re a CDA 230 Absolutist, Reformer, or Abolitionist.

Note: All of the posts used in this explainer have been fabricated for educational purposes and should be considered only in the context of answering these questions. The user identities and handles are fictional, and the posts’ content is not based on, nor intended to imply, any real-world developments.

The Absolutist
The Reformer
The Abolitionist

Moderation as censorship?
Note: This post has been fabricated.

Hypothetical Case

  • The Truth Tribune, a conservative online magazine, tweeted to its 500,000 followers a link to this article by an unlicensed physician promoting <span data-tooltip="Coronavirus parties are gatherings of individuals who intend to intentionally infect themselves with COVID-19.">coronavirus parties</span>.
  • Public health authorities, including the Centers for Disease Control and Prevention, have <span data-tooltip="<a href='https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/social-distancing.html'>See more at the CDC</a>">issued guidance</span> that such social gatherings are dangerous.
  • Twitter took down the tweet because it violated its <span data-tooltip="<a href='https://blog.twitter.com/en_us/topics/company/2020/An-update-on-our-continuity-strategy-during-COVID-19.html'>See more about Twitter’s misinformation policies</a>">misinformation policies</span>.
  • The Truth Tribune sues Twitter for censorship based on political views.

Question 1 of 6

If you were in charge at Twitter, what would you do about this tweet?
So, what should the law say about this?

Your answer indicates that you support CDA 230 immunity for platforms: platforms should be free to moderate the content on their websites without fear of being sued by users whose posts they remove.

Here’s some background:

Section 230 of the Communications Decency Act of 1996 (“CDA 230”) was designed to give platforms the flexibility and protection to moderate the content on their sites.

Prior to the enactment of CDA 230, Internet <span data-tooltip="All-encompassing for website, information intermediary, etc.">platforms</span>, such as Twitter in the example above, faced a difficult tradeoff when deciding whether to remove questionable posts by <span data-tooltip="Speaker or poster of content on a platform, individual or organization">users</span>. If a platform removed some undesirable content posted by its users, the law treated it as a "publisher" and held it liable for any objectionable content that remained on the site. The drafters of CDA 230 thought this created the wrong incentives because it discouraged platforms from cleaning up their websites.

CDA 230 was intended to realign incentives by immunizing platforms from lawsuits both when they voluntarily removed objectionable content and when objectionable posts remained up that they had not gotten around to removing. Its drafters intended to offer this protection to “Good Samaritans,” platforms that screened and removed objectionable content in good faith. <span data-tooltip="Read the <a href='https://www.whitehouse.gov/presidential-actions/executive-order-preventing-online-censorship/'>Executive Order on Preventing Online Censorship</a>">Recent proposals</span> for reform would restore liability for platforms that edit content in a manner that demonstrates political bias.

  • Before CDA 230 was enacted, the case governing platform liability was Stratton Oakmont Inc. v. Prodigy Services Co. (N.Y. Sup. Ct. 1995). The plaintiff, a securities firm, sued Prodigy, an early online service provider that hosted a financial bulletin board, for defamation after one of Prodigy’s users posted defamatory comments about the firm on the board. Prodigy used software to filter out profanity on its bulletin boards, but it did not catch the defamatory content. The trial court reasoned that because Prodigy filtered some messages on its board, it was a publisher of the defamatory content, and not merely a distributor, or passive conduit. Legislators responded to this decision with Section 230 of the Communications Decency Act. They worried that online service providers would respond to the Prodigy decision by not filtering out profanity—or worse, pornography—at all.
  • Domen v. Vimeo (S.D.N.Y. 2020) – Plaintiffs were a religious non-profit organization and an individual who claimed he formerly identified as homosexual but now identified as heterosexual based on his Christian faith. Plaintiffs had published videos on Vimeo’s video hosting platform; Vimeo flagged their account and videos for review for violating its policy against promoting sexual orientation change efforts. Plaintiffs believed their account was deleted to censor them from speaking about their preferred sexual orientation and religious beliefs, because Vimeo had not removed other similar videos relating to sexual orientation. Plaintiffs alleged that Vimeo censored them based on discriminatory beliefs and violated their right to freedom of speech. The case was dismissed under <span data-tooltip='"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."'>Section 230(c)(1)</span> for the actions Vimeo took as a “publisher” and under <span data-tooltip='"No provider or user of an interactive computer service shall be held liable on account of...any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."'>Section 230(c)(2)</span> for the actions Vimeo took to restrict access to content it found “objectionable.”
  • See also Federal Agency of News LLC v. Facebook (2020 WL 137154) (N.D. Cal. 2020); Sikhs for Justice v. Facebook (144 F. Supp. 3d 1088) (N.D. Cal. 2015).

Your answer indicates that you believe CDA 230 immunity should be revoked for platforms that selectively remove content from their websites. You could believe that platforms must exercise their editorial discretion free from political bias.

Your answer indicates that you oppose CDA 230 immunity for platforms. You could think that platforms owe their audiences an obligation not to spread harmful content.

In the example above, we asked you to consider what should happen when a platform removes content it classifies as misinformation, restricting a user’s speech.

In the examples that follow, consider what happens when a platform does nothing.

Who is a publisher? (Yelp)
Note: This post has been fabricated.

Hypothetical Case

  • In March, Governor Gavin Newsom issued a <span data-tooltip="<a href='https://www.wired.com/story/whats-shelter-place-order-whos-affected/'>See more about shelter-in-place orders</a>">“shelter-in-place” order</span> to ensure that Californians adhered to strict social distancing guidelines during the COVID-19 pandemic. The order was legally enforceable with criminal penalties.
  • Yelp wants to help businesses connect with consumers during the lockdown. In order to remain listed on the site, restaurants have to fill out a form indicating their lockdown hours.
  • Soup King fills out Yelp’s form. However, Soup King later changes its mind about staying open during the pandemic and does not update Yelp.
  • Casey, a San Francisco resident, sees Soup King is open for takeout on Yelp and decides to head over to place a to-go order. On the way, the police stop Casey and ask what their “essential errand” is. When Casey says they are visiting Soup King, the police tell them it is closed and issue Casey a ticket for violating the shelter-in-place order.
  • Casey sues Yelp for spreading the false information that Soup King was open during the lockdown, which resulted in the ticket.

Question 2 of 6

If you were in charge at Yelp, what would you do?
So, what should the law say about this?

Your answer indicates that you support CDA 230 immunity for platforms.

According to this view, <span data-tooltip="Person who was harmed by content on platform">individuals</span> should not be able to go after platforms for content posted by users online. You might agree that Casey should sue Soup King for posting inaccurate hours online, but not Yelp.

Here’s some background:

CDA 230 immunizes platforms from being treated as “publishers” or “speakers” of information provided by another “information content provider.” But should platforms always be immune for content posted on their sites? What if they specifically ask or encourage users to post certain information, like their hours during coronavirus lockdowns? Courts have rarely concluded that platforms are responsible for content they solicit or curate on their websites. Even when websites prompt users to enter certain information to access the site, courts generally do not treat platforms as “publishers” or “speakers” of that user-provided content.

  • Though courts generally find that platforms are not publishers of the content shared by their users, a platform can direct a user’s speech to the point that it should be considered a publisher of that speech. We examine a case on either side of the line below.
  • Doe v. SexSearch.com (N.D. Ohio 2007) – The plaintiff was arrested for having sex with a minor he met on SexSearch.com. The minor’s profile stated that she was of age, when she was actually 14 years old. SexSearch stated in its terms and conditions that it would "review, verify and approve" website profiles. Citing a similar case in which the plaintiff, a minor who had lied about her age to use MySpace, was sexually assaulted by a user she met there (Doe v. MySpace (W.D. Tex. 2007)), the court dismissed the case. It concluded that SexSearch was not a “publisher” of the content that resulted in the plaintiff’s arrest.
  • Fair Housing Council of San Fernando Valley v. Roommates.com, LLC (9th Cir. 2008) – Roommates.com required third parties to fill out questionnaires that induced respondents to express preferences in violation of fair housing laws. The court held that these forms were not protected by Section 230(c)(1). Instead, Roommates.com could be liable as an “information content provider” because it co-developed profile content. (“Roommate is “responsible” at least “in part” for each subscriber's profile page, because every such page is a collaborative effort between Roommate and the subscriber.”)
  • Roommates.com might have come out differently from SexSearch.com because Roommates created, and required users to answer, questions that themselves constituted unlawful discrimination. SexSearch.com, by contrast, merely failed to monitor user-posted content that was not in and of itself unlawful.

Your answer indicates that you believe CDA 230 immunity should be revoked for some of the content on platforms’ websites. If platforms take a role in producing content, for instance by prompting users to disclose certain information, then <span data-tooltip="Person who was harmed by content on platform">individuals</span> should be able to hold the platform accountable for the effects of that misinformation.

Your answer indicates that you oppose CDA 230 immunity for platforms. According to this view, platforms should actively monitor and correct misinformation on their websites. You might believe that Yelp claimed to verify businesses’ hours of operation during the lockdown and misled <span data-tooltip="Person who was harmed by content on platform">individuals</span>, such as Casey.

Defamation by high-profile individuals
Note: This post has been fabricated.

Hypothetical Case

  • During the pandemic, <span data-tooltip="<a href='https://www.nytimes.com/2020/04/01/us/politics/coronavirus-fauci-security.html'>Read more about actual threats to Dr. Fauci.</a>">a cybermob sought to discredit Dr. Anthony Fauci</span>, the United States’s leading infectious disease expert on President Trump’s COVID-19 task force.
  • The cybermob spread rumors that Dr. Fauci was having a homosexual affair in the workplace. Senator Smith, an elected member of Congress, amplified these rumors via his Twitter account.
  • Twitter users reported the tweets about Dr. Fauci’s affair as false; because the rumors were fabricated, the tweets were defamatory.
  • Twitter refused to take down Senator Smith’s tweet because of <span data-tooltip='<a href="https://blog.twitter.com/en_us/topics/compaworlny/2018/world-leaders-and-twitter.html">See more about Twitter’s guidelines</a>'>its guidelines on tweets by world leaders</span>. Twitter thinks that this information is newsworthy to Senator Smith’s constituents.
  • Dr. Fauci sues Twitter and Senator Smith for defamation.

Question 3 of 6

If you were in charge at Twitter, what would you do about Senator Smith’s tweet?
So, what should the law say about this?

Your answer indicates that you support CDA 230 immunity for platforms.

Platforms should be encouraged to craft their own content moderation policies. CDA 230 immunity allows them to do this by protecting them from individuals’ lawsuits when they get it wrong.

Here’s some background:

Platforms value CDA 230 immunity because it protects them from the kinds of lawsuits typically brought against “publishers,” such as defamation claims. If the New York Times published Senator Smith’s claims about Dr. Fauci, Dr. Fauci would have to prove that the newspaper knew <span data-tooltip="<a href='https://www.law.cornell.edu/supremecourt/text/376/254'>See New York Times v. Sullivan (1964)</a>">the statement was false or published it recklessly without investigating its accuracy</span>. Even though proving “actual malice” is a high bar for defamation plaintiffs, the damages awarded for defamation can be high enough to put Internet companies out of business.

Instead of thoroughly checking content on their websites the way news organizations do, Internet platforms such as Twitter leave up unverified rumors, arguing that people deserve to “see and debate” controversial tweets by world leaders. However, this policy might be contributing to the infodemic: <span data-tooltip="<a href='https://www.theguardian.com/media/2020/apr/08/influencers-being-key-distributors-of-coronavirus-fake-news?CMP=share_btn_tw'>Read more about this study here</a>">studies</span> show that influencers with large followings spread misinformation about COVID-19 faster than it can be fact-checked.

  • Enigma Software Group USA, LLC v. Malwarebytes, Inc. (9th Cir. 2019) – The provider of software that helped Internet users filter unwanted content from their computers sued a competitor, alleging violations of New York state law and the Lanham Act’s false advertising provision. The plaintiff alleged that the competitor had configured its own software to block users from accessing the plaintiff’s software in order to divert the plaintiff’s customers. The court held that Section 230(c), the “Good Samaritan” provision, does not immunize blocking and filtering decisions driven by anticompetitive animus, and thus that the defendant did not enjoy CDA 230 immunity.
  • Zeran v. AOL (4th Cir. 1997) – A third party posted the plaintiff’s phone number on AOL message boards and told others to contact the plaintiff for tasteless t-shirts and paraphernalia about the Oklahoma City bombing. The plaintiff notified AOL that its website was hosting defamatory content, but the court declined to hold AOL liable for hosting it.
  • Blumenthal v. Drudge (D.D.C. 1998) – AOL paid Matt Drudge to make his online column available to AOL customers, and the column included an article that defamed the plaintiffs. The plaintiffs sued both AOL, for hosting the content, and Drudge, for publishing it. The court held that CDA 230 was meant to immunize web hosts from exactly this kind of liability, since AOL had no hand in creating the offensive content.

Your answer indicates that you believe CDA 230 immunity should sometimes protect platforms.

Platforms should be encouraged to craft their own content moderation policies. Sometimes, they may leave up defamatory posts to serve other policy objectives, like supporting the free speech of public figures.

Your answer indicates that you oppose CDA 230 immunity for platforms.

According to this view, platforms should affirmatively monitor their websites for harmful content. You might believe that Twitter has an obligation to individuals to fact-check the information that it spreads.

Tort Claim
Note: This post has been fabricated.

Hypothetical Case

  • Amazon is an online marketplace with over 353 million products. One such product is “silver solution,” sold by a third-party manufacturer, COVIDCURES, Inc. Celebrities and politicians have touted colloidal silver as a cure for COVID-19, despite the lack of scientific evidence.
  • Regulators, such as the Federal Trade Commission and Food & Drug Administration, have threatened to go after companies selling “scam” COVID-19 treatments.
  • Amazon’s customers reported COVIDCURES to Amazon for selling fake medicine.
  • The Federal Trade Commission sues COVIDCURES and Amazon for misleading marketing to sell <span data-tooltip='The Federal Trade Commission Act prohibits "unfair or deceptive act[s] or practice[s] in or affecting commerce." 15 U.S.C. § 45(a)(1) (1914)).<p><a href="https://www.ftc.gov/system/files/documents/public_statements/410531/831014deceptionstmt.pdf">Read the FTC Policy Statement on Deception</a>'>fake cures</span> for COVID-19.
  • Taylor, an individual worried about contracting COVID-19, buys the silver solution from COVIDCURES on Amazon. Because of colloidal silver’s side effects for patients taking medication for thyroid disorders, Taylor dies. Taylor’s family sues Amazon and COVIDCURES under <span data-tooltip='California Consumer Legal Remedies Act (Cal. Civil Code § 1750)<ul><li>Declares as unlawful "methods of competition and unfair or deceptive acts or practices undertaken by any person in a transaction intended to result or which results in the sale or lease of goods or services to any consumer"<li>Consumers can seek actual damages for violations of the Act</ul>'>California consumer protection law</span>.

Question 4 of 6

If you were in charge at Amazon, what would you do about COVIDCURES?
So, what should the law say about this?

Your answer indicates that you support CDA 230 immunity for platforms.

You might think that Amazon is a massive platform and does not have the bandwidth to do quality control for every product sold by third parties on its website.

Here’s some background:

CDA 230 shields platforms from civil suits brought by a wide range of parties. This could include individuals harmed by the website’s activities, as well as public authorities such as the Federal Trade Commission and state attorneys general, who otherwise possess the legal authority to enforce consumer protection laws. One question scholars and commentators have raised is: who should have the power to hold platforms legally responsible for the activity of users on their websites—individuals, the government, both, or neither? Proponents of CDA 230 argue that if any individual harmed by a post can sue a website, platforms will “<span data-tooltip="Fair Housing Council v. Roommates.com (9th Cir. 2008), <a href='http://cdn.ca9.uscourts.gov/datastore/opinions/2008/04/02/0456916.pdf'>opinion available</a>.">face death by ten thousand duck-bites.</span>” It is simply too expensive for Internet companies to defend themselves against so many lawsuits, and the cost might bankrupt an up-and-coming startup competitor, <span data-tooltip="For more on CDA 230’s competitive effects, see <a href='https://law.yale.edu/sites/default/files/area/center/isp/documents/new_controversies_in_intermediary_liability_law.pdf'>Professor Eric Goldman’s essay</a>">entrenching big tech</span>. Opponents counter that CDA 230 immunity was meant for “publishers” and “speakers,” and that what online marketplaces such as Amazon do in <span data-tooltip="For more on this idea, see <a href='https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3532691'>Professor Citron’s research</a>.">selling goods is far different from speech</span>.

  • Various state laws on unfair competition could hold platforms liable for failing to remove this sort of content. For example, the California Consumer Legal Remedies Act (Cal. Civil Code § 1750) declares as unlawful “methods of competition and unfair or deceptive acts or practices undertaken by any person in a transaction intended to result or which results in the sale or lease of goods or services to any consumer.” Consumers can seek actual damages for violations of the Act.
  • We don’t yet have the answer to what the law might say about this. The cases so far suggest that CDA 230 could shield platforms from liability under the FTC Act, but there are few decisions on the issue. This could be a result of the FTC’s enforcement strategy; the agency may choose to use its resources to pursue claims that would not be presumptively blocked by CDA 230.
  • Federal Trade Commission v. LeadClick Media, LLC (2d Cir. 2016) – The FTC claimed that the manager of a network of online advertisers and its parent company had used deceptive practices to market weight loss products: affiliates in the network operated fake news sites to market the products of LeadClick’s client, LeanSpa, LLC. LeadClick, found liable under the FTC Act, argued that CDA 230 precluded liability. The Second Circuit noted that “other circuits . . . have recognized 230 immunity as broad,” citing Almeida v. Amazon.com (11th Cir. 2006), which held that “[t]he majority of federal circuits have interpreted the CDA to establish broad federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.” The court assessed whether LeadClick met the criteria required for CDA 230 immunity, but concluded that it did not qualify because it was an “information content provider” of the deceptive content on the fake news sites.
  • Papataros v. Amazon.com, Inc. (D.N.J. 2019) – The plaintiff purchased a defective hoverboard from Amazon, which injured her son. Citing Oberdorf v. Amazon (3d Cir. 2019), the court held that CDA 230 protected Amazon from liability as a “speaker” and dismissed the plaintiff’s claims based on defective warnings or failure to warn. However, New Jersey law imposes strict liability on “sellers” of defective products, and the court ruled that Amazon could be liable as a “seller” of the defective product.
  • Gentry v. eBay, Inc. (Cal. Ct. App. 2002) – Plaintiffs purchased autographed sports memorabilia on eBay’s auction platform, but the memorabilia turned out to be forgeries. The sellers of these items had specifically planned to use eBay to sell fraudulent items. The court found that even though California state law regulates the sale of autographed sports memorabilia, these obligations could not be imposed on eBay because of CDA Section 230.

Your answer indicates that you believe CDA 230 should not always protect platforms.

You might believe platforms have an obligation to heed the warnings of public officials. If platforms face lawsuits for harmful products on their websites, they might voluntarily decide to clean up their websites more.

Your answer indicates that you oppose CDA 230 immunity for platforms.

You might believe that Amazon has an obligation to its customers to ensure no one is deceived by products it sells. You could think that platforms should only sell products that they can verify are safe.

Voter Suppression
Note: This post has been fabricated.

Hypothetical Case

  • States faced the choice of whether to delay their presidential primaries in light of COVID-19.
  • Russian agents want to suppress turnout in one of those states. They launch a disinformation campaign through the Twitter handle “ArizonaMediaCO,” falsely telling voters that Arizona’s primary has been delayed, even though the Governor decided to go ahead with the election during the pandemic.
  • An Arizona individual reports the misinformation to Twitter.
  • Twitter does not take down the tweet in a timely fashion.
  • Several Arizona voters see ArizonaMediaCO’s tweet on their timeline and stay at home instead of voting.
  • Voters who lost the chance to cast a ballot for their favored nominee sue Twitter under Arizona <span data-tooltip='<ol type="A"><li>It is unlawful for a person knowingly by force, threats, menaces, bribery or any corrupt means, either directly or indirectly:<ol><li>To attempt to influence an elector in casting his vote or to deter him from casting his vote.<li>To attempt to awe, restrain, hinder or disturb an elector in the free exercise of the right of suffrage.<li>To defraud an elector by deceiving and causing him to vote for a different person for an office or for a different measure than he intended or desired to vote for.</ol><li>A person who violates any provision of this section is guilty of a class 5 felony.</ol><br>Ariz. Rev. Stat. Ann. § 16-1006'>election law</span>.

Question 5 of 6

If you were in charge at Twitter, what would you do about ArizonaMediaCO’s tweet?
So, what should the law say about this?

Your answer indicates that you support CDA 230 immunity for platforms.

You might believe that Twitter has millions of users and can’t investigate every complaint. You could think that platforms should enable individuals to access all kinds of information or that users have the right to say what they want online.

Here’s some background:

We’re actually not sure whether this hypothetical voter lawsuit would succeed. In the only <span data-tooltip="<a href='https://law.justia.com/cases/federal/district-courts/arizona/azdce/2:2010cv01902/549648/29/'>See Arizona Green Party v. Bennett (D. Ariz. 2010)</a>">case</span> to discuss this provision of Arizona’s election law, the claim did not succeed. This tells us that even if CDA 230 immunity does not protect platforms from lawsuits, platforms might still not change their behavior, because other laws do not require them to do so.

Laws can be used as a sword or a shield. CDA 230 is a strong shield against many types of lawsuits, but in order to prevail in court, parties also need a strong sword. The Arizona voter suppression law in this case may be one example of a weak "sword." This is why previous reforms to CDA 230, such as SESTA (the “Stop Enabling Sex Traffickers Act”) and FOSTA (the “Fight Online Sex Trafficking Act”), both created an exception to CDA 230 immunity and amended other laws to make it easier to bring lawsuits against online sex trafficking.

  • See The Lawyers’ Committee for Civil Rights, Deceptive Election Practice and Voter Intimidation (2012), <a href="https://lawyerscommittee.org/wp-content/uploads/2015/07/DeceptivePracticesReportJuly2012FINALpdf.pdf">available on the Lawyers' Committee website</a>.
  • FOSTA was intended to strengthen legal claims against individuals participating in online sex trafficking. Some scholars have argued that it has not achieved these goals. Emily J. Born writes, “Not only is [FOSTA] creating issues for sex workers and free speech, but it is not even accomplishing its law enforcement objective because it does not necessarily create liability for the type of behavior it was enacted in response to.” Emily J. Born, Too Far and Not Far Enough: Understanding the Impact of FOSTA, 94 NYU Law Review 1623 (2019).

Your answer indicates that you think CDA 230 immunity should not protect platforms in every instance.

You might believe that CDA 230 should not encourage platforms to flout election law. Because Arizona has a law against voter suppression, voters should be able to hold Twitter accountable if it does not comply.

Here’s some background:

We’re actually not sure whether this hypothetical voter lawsuit would even succeed. The only <span data-tooltip="<a href='https://law.justia.com/cases/federal/district-courts/arizona/azdce/2:2010cv01902/549648/29/'>See Arizona Green Party v. Bennett (D. Ariz. 2010)</a>">case</span> to discuss this provision of Arizona’s election law was unsuccessful. This tells us that even when CDA 230 immunity does not shield a platform from a lawsuit, the platform may still not change its behavior, because the other laws it is sued under may not actually hold it liable.

Laws can be used as a sword or a shield. CDA 230 is a strong shield against many types of lawsuits, but in order to prevail in court, parties also need a strong sword. The Arizona voter suppression law in this case may be one example of a weak "sword." This is why previous reforms to CDA 230, such as SESTA (“Stop Enabling Sex Traffickers Act”) and FOSTA (“Fight Online Sex Trafficking Act”), created an exception to CDA 230 immunity and amended other laws to make it easier to bring lawsuits to fight child pornography and sex trafficking.

  • See Lawyers’ Committee for Civil Rights Under Law, Deceptive Election Practices and Voter Intimidation (2012), <a href="https://lawyerscommittee.org/wp-content/uploads/2015/07/DeceptivePracticesReportJuly2012FINALpdf.pdf">available on the Lawyers' Committee website</a>.
  • FOSTA was intended to strengthen legal claims against individuals participating in online sex trafficking. Some scholars have argued that it has not achieved these goals. Emily J. Born writes, “Not only is [FOSTA] creating issues for sex workers and free speech, but it is not even accomplishing its law enforcement objective because it does not necessarily create liability for the type of behavior it was enacted in response to.” Emily J. Born, Too Far and Not Far Enough: Understanding the Impact of FOSTA, 94 NYU Law Review 1623 (2019).

Your answer indicates that you oppose CDA 230 immunity for platforms.

You might believe that platforms should ensure users are not misled by content posted on their websites. You could think that Twitter has a special obligation to monitor its website for disinformation that could influence an election.

Here’s some background:

We’re actually not sure whether this hypothetical voter lawsuit would even succeed. The only <span data-tooltip="<a href='https://law.justia.com/cases/federal/district-courts/arizona/azdce/2:2010cv01902/549648/29/'>See Arizona Green Party v. Bennett (D. Ariz. 2010)</a>">case</span> to discuss this provision of Arizona’s election law was unsuccessful. This tells us that even when CDA 230 immunity does not shield a platform from a lawsuit, the platform may still not change its behavior, because the other laws it is sued under may not actually hold it liable.

Laws can be used as a sword or a shield. CDA 230 is a strong shield against many types of lawsuits, but in order to prevail in court, parties also need a strong sword. The Arizona voter suppression law in this case may be one example of a weak "sword." This is why previous reforms to CDA 230, such as SESTA (“Stop Enabling Sex Traffickers Act”) and FOSTA (“Fight Online Sex Trafficking Act”), created an exception to CDA 230 immunity and amended other laws to make it easier to bring lawsuits to fight child pornography and sex trafficking.

  • See Lawyers’ Committee for Civil Rights Under Law, Deceptive Election Practices and Voter Intimidation (2012), <a href="https://lawyerscommittee.org/wp-content/uploads/2015/07/DeceptivePracticesReportJuly2012FINALpdf.pdf">available on the Lawyers' Committee website</a>.
  • FOSTA was intended to strengthen legal claims against individuals participating in online sex trafficking. Some scholars have argued that it has not achieved these goals. Emily J. Born writes, “Not only is [FOSTA] creating issues for sex workers and free speech, but it is not even accomplishing its law enforcement objective because it does not necessarily create liability for the type of behavior it was enacted in response to.” Emily J. Born, Too Far and Not Far Enough: Understanding the Impact of FOSTA, 94 NYU Law Review 1623 (2019).

Third-party misinformation and encryption

Third-party misinformation and encryption
Note: This post has been fabricated.

Hypothetical Case

  • A malicious user pretended to be the Singaporean government on WhatsApp and sent individuals false information about COVID-19. The user even put “green check marks” in the account names to trick individuals into thinking the accounts were verified.
  • Individuals who received the message notified WhatsApp that this account was spreading fake news about the Singaporean government’s response to COVID-19.
  • WhatsApp did not clamp down on the messages falsely purporting to come from the government. WhatsApp reiterated that <span data-tooltip="<a href='https://www.wsj.com/articles/facebooks-whatsapp-battles-coronavirus-misinformation-11586256870'>See the measures WhatsApp takes to fight misinformation here</a>">it is not technologically possible to monitor the content of messages on its platform because they are encrypted end-to-end</span>.
  • The Singaporean government sues WhatsApp (in the United States) to force it to disable the impersonating accounts.


6 / 6

Question

If you were in charge at WhatsApp, what would you do?
So, what should the law say about this?

Your answer indicates that you support CDA 230 immunity for platforms.

You might believe that WhatsApp should not violate its promise to users of an end-to-end encrypted communications service. You could think that WhatsApp might not be able to monitor the billions of messages it delivers each day.

Here’s some background:

The law currently <span data-tooltip='<a href="https://www.lawfareblog.com/herrick-v-grindr-why-section-230-communications-decency-act-must-be-fixed">See Carrie Goldberg’s ideas on Herrick v. Grindr</a>'>does not penalize platforms for defective design</span>. This means that if a platform makes certain design choices, such as end-to-end encryption, and that feature causes harm, the platform is not responsible. Lawmakers are currently debating whether CDA 230 immunity should preclude lawsuits against platforms over content posted in encrypted channels. Proposals to reform CDA 230 would grant immunity only if platforms take action to protect individuals from harmful content. Privacy advocates support CDA 230 as it exists because pushing platforms to take a more active role in supervising content that would otherwise receive little attention would <span data-tooltip='<a href="https://signal.org/blog/earn-it/">Learn why privacy advocates oppose the EARN IT Act</a>'>eliminate encrypted channels</span>, which in turn could increase government surveillance of private communications.

  • Herrick v. Grindr (2d Cir. 2019) – Plaintiff sued Grindr, a “hook-up” app, which his ex-boyfriend used to launch a harassment campaign, by creating fake profiles in the plaintiff’s name and directing users to contact the plaintiff at his home and work. Plaintiff sued Grindr for negligence (arguing that the app was defectively designed by lacking safety features to prevent this type of dangerous conduct), deceptive business practices, false advertising, and negligent misrepresentation. The court dismissed under CDA 230(c) because it confers “broad federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.”
  • See also Ben Ezra, Weinstein & Co. v. AOL (10th Cir. 2000) – The plaintiff, a computer software company, sued AOL for defamation, claiming that AOL published incorrect stock quote information (based on third-party providers). The court dismissed the plaintiff’s claim under CDA 230 because AOL did not create or develop any of the content such that it became an information content provider.
  • Is there a difference between leaving messages unencrypted and encrypting them but letting the platform hold a key that can decrypt them? The two are functionally the same, as explained in the EFF article <a href="https://www.eff.org/deeplinks/2019/09/carnegie-experts-should-know-defending-encryption-isnt-absolutist-position">“Defending Encryption Isn't an ‘Absolutist’ Position”</a> and illustrated in the short sketch below.
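
To make this more concrete, here is a minimal Python sketch of end-to-end encryption. It is an illustration only: it uses simple shared-key (symmetric) encryption from the third-party cryptography package, not WhatsApp’s actual Signal-based protocol, and the message text is invented for this hypothetical.

```python
# Minimal illustration (not WhatsApp's real protocol). Requires the
# third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet, InvalidToken

# The sender and recipient share a secret key; the platform never sees it.
shared_key = Fernet.generate_key()
sender = Fernet(shared_key)
recipient = Fernet(shared_key)

# A hypothetical impersonating message, encrypted end-to-end.
ciphertext = sender.encrypt(b"Official-sounding but false COVID-19 'guidance'")

# The platform only relays opaque ciphertext. Without the shared key it
# cannot read the message, so it cannot screen it for misinformation.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)  # wrong key
except InvalidToken:
    print("Platform cannot read the relayed message")

# The intended recipient, who holds the shared key, can read it.
print(recipient.decrypt(ciphertext).decode())

# "Key escrow": if the platform is also handed a copy of the shared key,
# every message becomes readable by the platform -- functionally the same
# as sending it unencrypted, which is the EFF article's point.
platform_with_escrowed_key = Fernet(shared_key)
print(platform_with_escrowed_key.decrypt(ciphertext).decode())
```

The sketch is only meant to make the policy tradeoff tangible: any rule that requires a platform to inspect message content also requires it to hold, or be able to obtain, the decryption key.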

Your answer indicates that you think CDA 230 immunity should be revoked for platforms in some cases.

You could believe that WhatsApp should try to preserve individuals’ privacy and should be allowed to encrypt messages. That said, you might want it to still be accountable when those messages cause harm.

Here’s some background:

The law currently <span data-tooltip='<a href="https://www.lawfareblog.com/herrick-v-grindr-why-section-230-communications-decency-act-must-be-fixed">See Carrie Goldberg’s ideas on Herrick v. Grindr</a>'>does not penalize platforms for defective design</span>. This means that if a platform makes certain design choices, such as end-to-end encryption, and that feature causes harm, the platform is not responsible. Lawmakers are currently debating whether CDA 230 immunity should preclude lawsuits against platforms over content posted in encrypted channels. Proposals to reform CDA 230 would grant immunity only if platforms take action to protect individuals from harmful content. Privacy advocates support CDA 230 as it exists because pushing platforms to take a more active role in supervising content that would otherwise receive little attention would <span data-tooltip='<a href="https://signal.org/blog/earn-it/">Learn why privacy advocates oppose the EARN IT Act</a>'>eliminate encrypted channels</span>, which in turn could increase government surveillance of private communications.

  • Herrick v. Grindr (2d Cir. 2019) – Plaintiff sued Grindr, a “hook-up” app, which his ex-boyfriend used to launch a harassment campaign, by creating fake profiles in the plaintiff’s name and directing users to contact the plaintiff at his home and work. Plaintiff sued Grindr for negligence (arguing that the app was defectively designed by lacking safety features to prevent this type of dangerous conduct), deceptive business practices, false advertising, and negligent misrepresentation. The court dismissed under CDA 230(c) because it confers “broad federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.”
  • See also Ben Ezra, Weinstein & Co. v. AOL (10th Cir. 2000) – The plaintiff, a computer software company, sued AOL for defamation, claiming that AOL published incorrect stock quote information (based on third-party providers). The court dismissed the plaintiff’s claim under CDA 230 because AOL did not create or develop any of the content such that it became an information content provider.
  • Is there a difference between leaving messages unencrypted and encrypting them but letting the platform hold a key that can decrypt them? The two are functionally the same, as explained in the EFF article <a href="https://www.eff.org/deeplinks/2019/09/carnegie-experts-should-know-defending-encryption-isnt-absolutist-position">“Defending Encryption Isn't an ‘Absolutist’ Position”</a>.

Your answer indicates that you oppose CDA 230 immunity for platforms.

You might think that WhatsApp should actively monitor messages related to the public health crisis. You could believe that preserving users’ privacy is less important than protecting individuals from health misinformation.

Here’s some background:

The law currently <span data-tooltip='<a href="https://www.lawfareblog.com/herrick-v-grindr-why-section-230-communications-decency-act-must-be-fixed">See Carrie Goldberg’s ideas on Herrick v. Grindr</a>'>does not penalize platforms for defective design</span>. This means that if a platform makes certain design choices, such as end-to-end encryption, and that feature causes harm, the platform is not responsible. Lawmakers are currently debating whether CDA 230 immunity should preclude lawsuits against platforms over content posted in encrypted channels. Proposals to reform CDA 230 would grant immunity only if platforms take action to protect individuals from harmful content. Privacy advocates support CDA 230 as it exists because pushing platforms to take a more active role in supervising content that would otherwise receive little attention would <span data-tooltip='<a href="https://signal.org/blog/earn-it/">Learn why privacy advocates oppose the EARN IT Act</a>'>eliminate encrypted channels</span>, which in turn could increase government surveillance of private communications.

  • Herrick v. Grindr (2d Cir. 2019) – Plaintiff sued Grindr, a “hook-up” app, which his ex-boyfriend used to launch a harassment campaign, by creating fake profiles in the plaintiff’s name and directing users to contact the plaintiff at his home and work. Plaintiff sued Grindr for negligence (arguing that the app was defectively designed by lacking safety features to prevent this type of dangerous conduct), deceptive business practices, false advertising, and negligent misrepresentation. The court dismissed under CDA 230(c) because it confers “broad federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.”
  • See also Ben Ezra, Weinstein & Co. v. AOL (10th Cir. 2000) – The plaintiff, a computer software company, sued AOL for defamation, claiming that AOL published incorrect stock quote information (based on third-party providers). The court dismissed the plaintiff’s claim under CDA 230 because AOL did not create or develop any of the content such that it became an information content provider.
  • Is there a difference between leaving messages unencrypted and encrypting them but letting the platform hold a key that can decrypt them? The two are functionally the same, as explained in the EFF article <a href="https://www.eff.org/deeplinks/2019/09/carnegie-experts-should-know-defending-encryption-isnt-absolutist-position">“Defending Encryption Isn't an ‘Absolutist’ Position”</a>.

Let's see your scorecard. How'd you do?

You're a CDA 230 Absolutist

You agree with CDA 230 as it stands. Many scholars agree that strong immunity for platforms is necessary for the Internet to flourish and innovate as it has since the CDA was passed. As the Internet becomes our primary medium for the exchange of ideas, First Amendment advocates worry that without CDA 230, platforms will censor more of the words and ideas expressed online.

You’re a CDA 230 Reformer

You might agree with CDA 230 but think courts have interpreted it too expansively. There is a wide variety of proposals for reforming CDA 230. Maybe platforms should bear some responsibility for spreading harmful misinformation, at least when it is brought to their attention and they fail to act. Or you might instead think that platforms censor content selectively and that CDA 230 should require them to protect more free speech on the Internet. We encourage you to learn more about the academic and legislative proposals to amend CDA 230.

You’re a CDA 230 Abolitionist

You think CDA 230 shields platforms too strongly, letting them avoid the real social harms that result from misinformation shared by their users. Protecting the free speech of platforms in and of itself was not the original intent of the CDA; rather, the law was meant to encourage proactive moderation by platforms. Since then, many courts have misconstrued this “Good Samaritan” provision as granting platforms blanket immunity. Whether in politics or public health, platforms play a vital role in modern life, and they should take a more active role in ensuring their users’ well-being.

About WTF is CDA

WTF is CDA is a Berkman Klein Center Assembly Student Fellowship project by Sanjana Parikh, Sam Clay, Madeline Salinas, Jess Eng, and Sahar Kazranian. We'd like to thank Jenny Fan for designing the WTF is CDA website. We are immensely grateful for the advice and support from Hilary Ross, Zenzele Best, Jonathan Zittrain, Danielle Citron, Tejas Narechania, Jeff Kosseff, Oumou Ly, and the 2020 Assembly Student Fellowship Cohort.

WTF is CDA inherits the Berkman Klein Center privacy policy.

Meet the team

Sanjana Parikh
Samuel Clay
Madeline Salinas
Jess Eng
Sahar Kazranian

Supported by