Is U.S. law responsible for disinformation and misinformation posted on social media?
Find out what you think.

Recently, the Internet has been suffering from an “infodemic,” a crisis of misinformation and disinformation spreading faster than the COVID-19 pandemic. How should social media platforms deal with disinformation shared on their websites? Should the law require them to do anything? One particular law, Section 230 of the Communications Decency Act (“CDA 230”), affects the answer to this question.

This interactive tool will allow you to explore how the debate around CDA 230 relates to the dissemination of misinformation and disinformation today. Our quiz will help you develop an opinion about how CDA 230 impacts the moderation behavior and decisions of platforms.

Last updated: July 7th, 2020

For a more comprehensive view of CDA 230, check out this explainer or this guide.

This website is a project from the Assembly Student Fellowship of the Berkman Klein Center for Internet & Society at Harvard University. This site is currently evolving, and we are actively seeking feedback; please submit any comments here. This material is complicated and involves precise language and subtle concepts; we are working to make it as accurate and accessible as we can as we hone our own understanding. And of course, nothing here should be construed as legal advice.

Note: All of the posts used in this tool have been fabricated for educational purposes and should be considered only within the context of answering these questions. The user identities and handles are fictional, and the posts’ content is not based on, nor intended to imply, any real-world developments.

Defamation by high-profile individuals
Note: This post has been fabricated.

Hypothetical Case

  • During the pandemic, <span data-tooltip="<a href='https://www.nytimes.com/2020/04/01/us/politics/coronavirus-fauci-security.html'>Read more about actual threats to Dr. Fauci.</a>">a cybermob sought to discredit Dr. Anthony Fauci</span>, the United States’s leading infectious disease expert on President Trump’s COVID-19 task force.
  • The cybermob spread rumors that Dr. Fauci was having a same-sex affair in the workplace. Senator Smith, an elected member of Congress, amplified these rumors via his Twitter account, and his tweet was retweeted 50 times.
  • Twitter users reported that the tweets about Dr. Fauci’s affair were false; false statements of fact that harm someone’s reputation can be defamatory.
  • Twitter refused to take down Senator Smith’s tweet because of <span data-tooltip='<a href="https://blog.twitter.com/en_us/topics/company/2018/world-leaders-and-twitter.html">See more about Twitter’s guidelines</a>'>its guidelines on tweets by world leaders</span>. Twitter maintained that this information was newsworthy to Senator Smith’s constituents.
  • After two months passed, Dr. Fauci sued Twitter and Senator Smith for defamation.

Question 1 of 6

If you were in charge at Twitter, what would you do about Senator Smith’s tweet?
So, what should the law say about this?

Your answer indicates that you support CDA 230 immunity for platforms.

Platforms should be encouraged to craft their own content moderation policies. CDA 230 immunity allows them to do this by protecting them from individuals’ lawsuits when they get it wrong.

Here’s some background:

Before CDA 230, Internet <span data-tooltip="All-encompassing for website, information intermediary, etc.">platforms</span>, such as Twitter in the example above, faced a difficult tradeoff when deciding whether to remove questionable posts by <span data-tooltip="Speaker or poster of content on a platform, individual or organization">users</span>. If a platform removed some undesirable content posted by its users, the law viewed the platform as a "publisher" and held it liable for content, like Senator Smith’s defamatory tweet, that remained on its website. The drafters of CDA 230 thought that treating platforms as publishers created the wrong incentives because it discouraged platforms from cleaning up their websites.

CDA 230 was intended to realign incentives by immunizing platforms from lawsuits whether they voluntarily removed objectionable content or left up objectionable posts they did not get around to removing. Its drafters intended to offer this protection to “Good Samaritans,” platforms that screened and removed objectionable content in good faith. While this protection offers broad legal immunity to platforms, it has several important exceptions: violations of intellectual property law, federal crimes, violations of the Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA), and violations of the Electronic Communications Privacy Act (ECPA).

Note that CDA 230 also protects any Twitter users who retweeted Senator Smith’s tweet. If Dr. Fauci sued those users, these lawsuits would also be rendered unsuccessful by CDA 230, because it protects not only platforms—<span data-tooltip="<a href='https://www.law.cornell.edu/uscode/text/47/230'>47 U.S.C. §230(c)(1)</a>">but also users of the platform</span>—from liability for information shared by other users of the website.

  • Before CDA 230 was enacted, the case governing platform liability was Stratton Oakmont, Inc. v. Prodigy Services Co. (N.Y. Sup. Ct. 1995). The plaintiff, a securities firm, sued Prodigy, an early online service provider that hosted a financial bulletin board, after one of Prodigy’s users posted defamatory comments about the firm on the board. Prodigy used software to filter out profanity on its bulletin boards, but the filter did not catch the defamatory content. The trial court reasoned that because Prodigy filtered some messages on its board, it was a publisher of the defamatory content, not merely a distributor or passive conduit. Legislators responded to this decision with Section 230 of the Communications Decency Act; they worried that online service providers would respond to the Prodigy decision by not filtering out profanity, or worse, pornography, at all.
  • Zeran v. AOL (4th Cir. 1997) – A third party posted the plaintiff’s phone number on AOL message boards, telling others to contact the plaintiff for tasteless t-shirts and paraphernalia about the Oklahoma City bombing. The plaintiff notified AOL that its website was hosting defamatory content. The court declined to hold AOL liable for hosting the defamatory content.
  • Blumenthal v. Drudge (D.D.C. 1998) – AOL paid Drudge to make his online column available to AOL customers, and the column included a defamatory article. The plaintiffs, who were defamed by Drudge’s article, sued both AOL, for hosting the content, and Drudge, for publishing it. The court held that CDA 230 was meant to immunize web hosts from exactly this kind of liability, since AOL had no hand in creating the offensive content.

Your answer indicates that you believe CDA 230 immunity should sometimes protect platforms.

Platforms should be encouraged to craft their own content moderation policies. Sometimes, they may leave up defamatory posts to serve other policy objectives, like supporting the free speech of public figures.

Here’s some background:

Before CDA 230, Internet <span data-tooltip="All-encompassing for website, information intermediary, etc.">platforms</span>, such as Twitter in the example above, faced a difficult tradeoff when deciding whether to remove questionable posts by <span data-tooltip="Speaker or poster of content on a platform, individual or organization">users</span>. If a platform removed some undesirable content posted by its users, the law viewed the platform as a "publisher" and held it liable for content, like Senator Smith’s defamatory tweet, that remained on its website. The drafters of CDA 230 thought that treating platforms as publishers created the wrong incentives because it discouraged platforms from cleaning up their websites.

CDA 230 was intended to realign incentives by immunizing platforms from lawsuits whether they voluntarily removed objectionable content or left up objectionable posts they did not get around to removing. Its drafters intended to offer this protection to “Good Samaritans,” platforms that screened and removed objectionable content in good faith. While this protection offers broad legal immunity to platforms, it has several important exceptions: violations of intellectual property law, federal crimes, violations of the Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA), and violations of the Electronic Communications Privacy Act (ECPA).

Note that CDA 230 also protects any Twitter users who retweeted Senator Smith’s tweet. If Dr. Fauci sued those users, these lawsuits would also be rendered unsuccessful by CDA 230, because it protects not only platforms—<span data-tooltip="<a href='https://www.law.cornell.edu/uscode/text/47/230'>47 U.S.C. §230(c)(1)</a>">but also users of the platform</span>—from liability for information shared by other users of the website.

  • Before CDA 230 was enacted, the case governing platform liability was Stratton Oakmont, Inc. v. Prodigy Services Co. (N.Y. Sup. Ct. 1995). The plaintiff, a securities firm, sued Prodigy, an early online service provider that hosted a financial bulletin board, after one of Prodigy’s users posted defamatory comments about the firm on the board. Prodigy used software to filter out profanity on its bulletin boards, but the filter did not catch the defamatory content. The trial court reasoned that because Prodigy filtered some messages on its board, it was a publisher of the defamatory content, not merely a distributor or passive conduit. Legislators responded to this decision with Section 230 of the Communications Decency Act; they worried that online service providers would respond to the Prodigy decision by not filtering out profanity, or worse, pornography, at all.
  • Zeran v. AOL (4th Cir. 1997) – A third party posted the plaintiff’s phone number on AOL message boards, telling others to contact the plaintiff for tasteless t-shirts and paraphernalia about the Oklahoma City bombing. The plaintiff notified AOL that its website was hosting defamatory content. The court declined to hold AOL liable for hosting the defamatory content.
  • Blumenthal v. Drudge (D.D.C. 1998) – AOL paid Drudge to make his online column available to AOL customers, and the column included a defamatory article. The plaintiffs, who were defamed by Drudge’s article, sued both AOL, for hosting the content, and Drudge, for publishing it. The court held that CDA 230 was meant to immunize web hosts from exactly this kind of liability, since AOL had no hand in creating the offensive content.

Your answer indicates that you oppose CDA 230 immunity for platforms.

According to this view, platforms should affirmatively monitor their websites for harmful content. You might believe that Twitter has an obligation to individuals to fact-check the information that it spreads.

Here’s some background:

Before CDA 230, Internet <span data-tooltip="All-encompassing for website, information intermediary, etc.">platforms</span>, such as Twitter in the example above, faced a difficult tradeoff when deciding whether to remove questionable posts by <span data-tooltip="Speaker or poster of content on a platform, individual or organization">users</span>. If a platform removed some undesirable content posted by its users, the law viewed the platform as a "publisher" and held it liable for content, like Senator Smith’s defamatory tweet, that remained on its website. The drafters of CDA 230 thought that treating platforms as publishers created the wrong incentives because it discouraged platforms from cleaning up their websites.

CDA 230 was intended to realign incentives by immunizing platforms from lawsuits whether they voluntarily removed objectionable content or left up objectionable posts they did not get around to removing. Its drafters intended to offer this protection to “Good Samaritans,” platforms that screened and removed objectionable content in good faith. While this protection offers broad legal immunity to platforms, it has several important exceptions: violations of intellectual property law, federal crimes, violations of the Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA), and violations of the Electronic Communications Privacy Act (ECPA).

Note that CDA 230 also protects any Twitter users who retweeted Senator Smith’s tweet. If Dr. Fauci sued those users, these lawsuits would also be rendered unsuccessful by CDA 230, because it protects not only platforms—<span data-tooltip="<a href='https://www.law.cornell.edu/uscode/text/47/230'>47 U.S.C. §230(c)(1)</a>">but also users of the platform</span>—from liability for information shared by other users of the website.

  • Before CDA 230 was enacted, the case governing platform liability was Stratton Oakmont, Inc. v. Prodigy Services Co. (N.Y. Sup. Ct. 1995). The plaintiff, a securities firm, sued Prodigy, an early online service provider that hosted a financial bulletin board, after one of Prodigy’s users posted defamatory comments about the firm on the board. Prodigy used software to filter out profanity on its bulletin boards, but the filter did not catch the defamatory content. The trial court reasoned that because Prodigy filtered some messages on its board, it was a publisher of the defamatory content, not merely a distributor or passive conduit. Legislators responded to this decision with Section 230 of the Communications Decency Act; they worried that online service providers would respond to the Prodigy decision by not filtering out profanity, or worse, pornography, at all.
  • Zeran v. AOL (4th Cir. 1997) – A third party posted the plaintiff’s phone number on AOL message boards, telling others to contact the plaintiff for tasteless t-shirts and paraphernalia about the Oklahoma City bombing. The plaintiff notified AOL that its website was hosting defamatory content. The court declined to hold AOL liable for hosting the defamatory content.
  • Blumenthal v. Drudge (D.D.C. 1998) – AOL paid Drudge to make his online column available to AOL customers, and the column included a defamatory article. The plaintiffs, who were defamed by Drudge’s article, sued both AOL, for hosting the content, and Drudge, for publishing it. The court held that CDA 230 was meant to immunize web hosts from exactly this kind of liability, since AOL had no hand in creating the offensive content.

In the previous example, we asked you to consider what should happen when a platform doesn’t remove a harmful tweet.

In the next example, consider what happens when a platform does take action.

Moderation as censorship?
Note: This post has been fabricated.

Hypothetical Case

  • The Truth Tribune, a conservative online magazine, tweeted to its 500,000 followers a link to an article by an unlicensed physician promoting <span data-tooltip="Coronavirus parties are gatherings where individuals intentionally expose themselves to COVID-19.">coronavirus parties</span>.
  • Public health authorities, including the Centers for Disease Control and Prevention, have <span data-tooltip="<a href='https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/social-distancing.html'>See more at the CDC</a>">issued guidance</span> that such social gatherings are dangerous.
  • Twitter took down the tweet because it violated its <span data-tooltip="<a href='https://blog.twitter.com/en_us/topics/company/2020/An-update-on-our-continuity-strategy-during-COVID-19.html'>See more about Twitter’s misinformation policies</a>">misinformation policies</span>.
  • The Truth Tribune sued Twitter, alleging censorship based on political views.

Question 2 of 6

If you were in charge at Twitter, what would you do about this tweet?
So, what should the law say about this?

Your answer indicates that you tend to support CDA 230 immunity for platforms; platforms should be free to moderate the content on their websites without being afraid of legal repercussions.

Both CDA 230 and the First Amendment currently protect websites from lawsuits by users whose posts they remove.


Here’s some background:

A website’s decision to remove content posted by users on its platform is not only protected by CDA 230 but also by the First Amendment. The First Amendment prohibits state actors from infringing private parties’ freedom of religion, speech, and assembly, and under current law, it affords private social media companies the right to control the “speech” shared on their websites. Under the First Amendment, the law can neither censor Twitter’s speech, nor can it compel Twitter to host speech with which it disagrees. Some argue, however, that popular websites should, like state actors, be subject to the requirements of the First Amendment because they are so ubiquitous that they serve a “public function.” <span data-tooltip="Read the <a href='https://www.whitehouse.gov/presidential-actions/executive-order-preventing-online-censorship/'>Executive Order on Preventing Online Censorship</a>">Recent proposals</span> for reform have called into question what constitutes “good faith” content moderation and attempted to restore liability for platforms that edit content in a manner that demonstrates political bias. But, so far, courts have rejected these interpretations of the law.

As a note, our hypothetical cases do not take into consideration liability for the <span data-tooltip="Read the New America <a href='https://www.newamerica.org/oti/reports/how-internet-platforms-are-combating-disinformation-and-misinformation-age-covid-19/'>report</a>">various content moderation practices</span> used by social media platforms today. Platforms often seek to influence the impact of posts by, for example, labeling them as disinformation, fact-checking them, or downranking them in the news feed.

  • Prager University v. Google (9th Cir. 2020) – The plaintiff was a nonprofit educational and media organization that created short educational videos and shared them on the Internet. It sued YouTube for demonetizing and placing restrictions on its content, claiming violations of the First Amendment, false advertising under the Lanham Act, and violations of other state laws. The Ninth Circuit upheld the dismissal of the plaintiff’s First Amendment claim, reasoning that the First Amendment did not apply to the defendant because it is a private entity, not a state actor. State actors include federal, state, and local governments as well as some private actors who perform “public functions” such as running an election or operating a company town, “and not much else”; YouTube did not meet this demanding standard. The Ninth Circuit also upheld the dismissal of the plaintiff’s claim under the Lanham Act, finding that none of YouTube’s statements were actionable because they did not constitute a “false or misleading representation of fact in commercial advertising or promotion that misrepresents the nature, characteristics, qualities, or geographic origin of his or her or another person’s goods, services, or commercial activities.”
  • See also Domen v. Vimeo (S.D.N.Y. 2020); Federal Agency of News LLC v. Facebook, 2020 WL 137154 (N.D. Cal. 2020); Sikhs for Justice v. Facebook, 144 F. Supp. 3d 1088 (N.D. Cal. 2015).

Your answer indicates that you believe CDA 230 immunity should be revoked for platforms that selectively remove content from their websites.

You might believe that platforms must exercise their editorial discretion free from political bias.


Here’s some background:

A website’s decision to remove content posted by users on its platform is not only protected by CDA 230 but also by the First Amendment. The First Amendment prohibits state actors from infringing private parties’ freedom of religion, speech, and assembly, and under current law, it affords private social media companies the right to control the “speech” shared on their websites. Under the First Amendment, the law can neither censor Twitter’s speech, nor can it compel Twitter to host speech with which it disagrees. Some argue, however, that popular websites should, like state actors, be subject to the requirements of the First Amendment because they are so ubiquitous that they serve a “public function.” <span data-tooltip="Read the <a href='https://www.whitehouse.gov/presidential-actions/executive-order-preventing-online-censorship/'>Executive Order on Preventing Online Censorship</a>">Recent proposals</span> for reform have called into question what constitutes “good faith” content moderation and attempted to restore liability for platforms that edit content in a manner that demonstrates political bias. But, so far, courts have rejected these interpretations of the law.

As a note, our hypothetical cases do not take into consideration liability for the <span data-tooltip="Read the New America <a href='https://www.newamerica.org/oti/reports/how-internet-platforms-are-combating-disinformation-and-misinformation-age-covid-19/'>report</a>">various content moderation practices</span> used by social media platforms today. Platforms often seek to influence the impact of posts by, for example, labeling them as disinformation, fact-checking them, or downranking them in the news feed.

  • Prager University v. Google (9th Cir. 2020) – The plaintiff was a nonprofit educational and media organization that created short educational videos and shared them on the Internet. It sued YouTube for demonetizing and placing restrictions on its content, claiming violations of the First Amendment, false advertising under the Lanham Act, and violations of other state laws. The Ninth Circuit upheld the dismissal of the plaintiff’s First Amendment claim, reasoning that the First Amendment did not apply to the defendant because it is a private entity, not a state actor. State actors include federal, state, and local governments as well as some private actors who perform “public functions” such as running an election or operating a company town, “and not much else”; YouTube did not meet this demanding standard. The Ninth Circuit also upheld the dismissal of the plaintiff’s claim under the Lanham Act, finding that none of YouTube’s statements were actionable because they did not constitute a “false or misleading representation of fact in commercial advertising or promotion that misrepresents the nature, characteristics, qualities, or geographic origin of his or her or another person’s goods, services, or commercial activities.”
  • See also Domen v. Vimeo (S.D.N.Y. 2020); Federal Agency of News LLC v. Facebook, 2020 WL 137154 (N.D. Cal. 2020); Sikhs for Justice v. Facebook, 144 F. Supp. 3d 1088 (N.D. Cal. 2015).

Your answer indicates that you oppose CDA 230 immunity for platforms.

You could think that platforms owe their audiences an obligation not to spread harmful content.

Here’s some background:

A website’s decision to remove content posted by users on its platform is not only protected by CDA 230 but also by the First Amendment. The First Amendment prohibits state actors from infringing private parties’ freedom of religion, speech, and assembly, and under current law, it affords private social media companies the right to control the “speech” shared on their websites. Under the First Amendment, the law can neither censor Twitter’s speech, nor can it compel Twitter to host speech with which it disagrees. Some argue, however, that popular websites should, like state actors, be subject to the requirements of the First Amendment because they are so ubiquitous that they serve a “public function.” <span data-tooltip="Read the <a href='https://www.whitehouse.gov/presidential-actions/executive-order-preventing-online-censorship/'>Executive Order on Preventing Online Censorship</a>">Recent proposals</span> for reform have called into question what constitutes “good faith” content moderation and attempted to restore liability for platforms that edit content in a manner that demonstrates political bias. But, so far, courts have rejected these interpretations of the law.

As a note, our hypothetical cases do not take into consideration liability for the <span data-tooltip="Read the New America <a href='https://www.newamerica.org/oti/reports/how-internet-platforms-are-combating-disinformation-and-misinformation-age-covid-19/'>report</a>">various content moderation practices</span> used by social media platforms today. Platforms often seek to influence the impact of posts by, for example, labeling them as disinformation, fact-checking them, or downranking them in the news feed.

  • Prager University v. Google (9th Cir. 2020) – The plaintiff was a nonprofit educational and media organization that created short educational videos and shared them on the Internet. It sued YouTube for demonetizing and placing restrictions on its content, claiming violations of the First Amendment, false advertising under the Lanham Act, and violations of other state laws. The Ninth Circuit upheld the dismissal of the plaintiff’s First Amendment claim, reasoning that the First Amendment did not apply to the defendant because it is a private entity, not a state actor. State actors include federal, state, and local governments as well as some private actors who perform “public functions” such as running an election or operating a company town, “and not much else”; YouTube did not meet this demanding standard. The Ninth Circuit also upheld the dismissal of the plaintiff’s claim under the Lanham Act, finding that none of YouTube’s statements were actionable because they did not constitute a “false or misleading representation of fact in commercial advertising or promotion that misrepresents the nature, characteristics, qualities, or geographic origin of his or her or another person’s goods, services, or commercial activities.”
  • See also Domen v. Vimeo (S.D.N.Y. 2020); Federal Agency of News LLC v. Facebook, 2020 WL 137154 (N.D. Cal. 2020); Sikhs for Justice v. Facebook, 144 F. Supp. 3d 1088 (N.D. Cal. 2015).

So, the law can’t necessarily require platforms to take any given speech down, or to keep it up. But lawmakers can shape the immunity that CDA 230 offers to encourage or discourage content moderation.

In the next example, consider whether reform to CDA 230 would actually prevent the following harm.

Voter Suppression
Note: This post has been fabricated.

Hypothetical Case

  • Russian agents wanted to interfere with the outcome of the 2020 election, as they had tried to do in 2016. They launched a disinformation campaign in the swing state of Arizona to suppress voter turnout. Using a handle that looked like a news organization, “ArizonaMediaandCO,” they maliciously spread false information telling Arizona voters that Election Day had been delayed due to the pandemic.
  • A Twitter user who recognized the tweet as disinformation reported it to Twitter on Election Day, minutes after it was posted.
  • Twitter did not investigate the tweet in time to take it down before the election, and it remained on the site.
  • Several hundred thousand Joe Biden supporters, fearing for their health and safety, stayed home. President Trump carried Arizona by less than a hundred thousand votes and, with Arizona’s 11 electoral votes, won reelection by a narrow margin.
  • The Biden campaign sued Twitter under Arizona <span data-tooltip='<ol type="A"><li>It is unlawful for a person knowingly by force, threats, menaces, bribery or any corrupt means, either directly or indirectly:<ol><li>To attempt to influence an elector in casting his vote or to deter him from casting his vote.<li>To attempt to awe, restrain, hinder or disturb an elector in the free exercise of the right of suffrage.<li>To defraud an elector by deceiving and causing him to vote for a different person for an office or for a different measure than he intended or desired to vote for.</ol><li>A person who violates any provision of this section is guilty of a class 5 felony.</ol><br>Ariz. Rev. Stat. Ann. § 16-1006'>election law</span>.

Question 3 of 6

If you were in charge at Twitter, what would you do about ArizonaMediaandCO’s tweet?
So, what should the law say about this?

Your answer indicates that you support CDA 230 immunity for platforms.

You might believe that Twitter has millions of users and can’t investigate every complaint. You could think that platforms should enable individuals to access all kinds of information or that users have the right to say what they want online.

Here’s some background:

We’re actually not sure whether this hypothetical lawsuit by the Biden campaign would even succeed. In the only <span data-tooltip="<a href='https://law.justia.com/cases/federal/district-courts/arizona/azdce/2:2010cv01902/549648/29/'>See Arizona Green Party v. Bennett (D. Ariz. 2010)</a>">case</span> to discuss this provision of Arizona’s election law, the claim did not succeed. What this tells us is that sometimes, even if CDA 230 immunity does not protect platforms from lawsuits, platforms might still not change their behavior because other laws do not require them to do so.

CDA 230 is a strong legal protection against many types of lawsuits addressing harms that result from activity on a website. But in order to prevail in court, parties also need a strong legal claim that a harm occurred. The Arizona voter suppression claim in this case may be one example of a weak legal claim. This is why previous reforms to CDA 230, such as SESTA (“Stop Enabling Sex Traffickers Act”) and FOSTA (“Fight Online Sex Trafficking Act”), created an exception to CDA 230 immunity and amended other laws to make it easier to bring lawsuits against interactive computer services to fight child pornography and sex trafficking. But <span data-tooltip="See <a href='https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3362975'>Eric Goldman, The Complicated Story of FOSTA and Section 230, First Amendment Law Review, Vol. 17, page 279, 2019</a>">there is evidence</span> that FOSTA hurt sex workers more than its anti-trafficking advocates anticipated. And recall that under the First Amendment, the government would have a hard time criminalizing election-related speech.

  • See The Lawyers’ Committee for Civil Rights, Deceptive Election Practices and Voter Intimidation (2012), <a href="https://lawyerscommittee.org/wp-content/uploads/2015/07/DeceptivePracticesReportJuly2012FINALpdf.pdf">available on the Lawyers' Committee website</a>.
  • FOSTA was intended to strengthen legal claims against individuals participating in online sex trafficking. Some <span data-tooltip="<a href='https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3362975'>See Emily J. Born, Too Far and Not Far Enough: Understanding the Impact of FOSTA, 94 NYU Law Review 1623 (2019).</a>">scholars</span> have argued that it has not achieved this goal. Emily J. Born writes, “Not only is [FOSTA] creating issues for sex workers and free speech, but it is not even accomplishing its law enforcement objective because it does not necessarily create liability for the type of behavior it was enacted in response to.”

Your answer indicates that you think CDA 230 immunity should not protect platforms in every instance.

You might believe that CDA 230 should not encourage platforms to flout election law. Because Arizona has a law against voter suppression, voters should be able to hold Twitter accountable if it does not comply.

Here’s some background:

We’re actually not sure whether this hypothetical lawsuit by the Biden campaign would even succeed. In the only <span data-tooltip="<a href='https://law.justia.com/cases/federal/district-courts/arizona/azdce/2:2010cv01902/549648/29/'>See Arizona Green Party v. Bennett (D. Ariz. 2010)</a>">case</span> to discuss this provision of Arizona’s election law, the claim did not succeed. What this tells us is that sometimes, even if CDA 230 immunity does not protect platforms from lawsuits, platforms might still not change their behavior because other laws do not require them to do so.

CDA 230 is a strong legal protection against many types of lawsuits addressing harms that result from activity on a website. But in order to prevail in court, parties also need a strong legal claim that a harm occurred. The Arizona voter suppression claim in this case may be one example of a weak legal claim. This is why previous reforms to CDA 230, such as SESTA (“Stop Enabling Sex Traffickers Act”) and FOSTA (“Fight Online Sex Trafficking Act”), created an exception to CDA 230 immunity and amended other laws to make it easier to bring lawsuits against interactive computer services to fight child pornography and sex trafficking. But <span data-tooltip="See <a href='https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3362975'>Eric Goldman, The Complicated Story of FOSTA and Section 230, First Amendment Law Review, Vol. 17, page 279, 2019</a>">there is evidence</span> that FOSTA hurt sex workers more than its anti-trafficking advocates anticipated. And recall that under the First Amendment, the government would have a hard time criminalizing election-related speech.

  • See The Lawyers’ Committee for Civil Rights, Deceptive Election Practices and Voter Intimidation (2012), <a href="https://lawyerscommittee.org/wp-content/uploads/2015/07/DeceptivePracticesReportJuly2012FINALpdf.pdf">available on the Lawyers' Committee website</a>.
  • FOSTA was intended to strengthen legal claims against individuals participating in online sex trafficking. Some <span data-tooltip="<a href='https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3362975'>See Emily J. Born, Too Far and Not Far Enough: Understanding the Impact of FOSTA, 94 NYU Law Review 1623 (2019).</a>">scholars</span> have argued that it has not achieved this goal. Emily J. Born writes, “Not only is [FOSTA] creating issues for sex workers and free speech, but it is not even accomplishing its law enforcement objective because it does not necessarily create liability for the type of behavior it was enacted in response to.”

Your answer indicates that you oppose CDA 230 immunity for platforms.

You might believe that platforms should ensure users are not misled by content posted on their website. You could think that Twitter has a special obligation to monitor its website for disinformation that could influence an election.

Here’s some background:

We’re actually not sure whether this hypothetical lawsuit by the Biden campaign would even succeed. In the only <span data-tooltip="<a href='https://law.justia.com/cases/federal/district-courts/arizona/azdce/2:2010cv01902/549648/29/'>See Arizona Green Party v. Bennett (D. Ariz. 2010)</a>">case</span> to discuss this provision of Arizona’s election law, the claim did not succeed. What this tells us is that sometimes, even if CDA 230 immunity does not protect platforms from lawsuits, platforms might still not change their behavior because other laws do not require them to do so.

CDA 230 is a strong legal protection against many types of lawsuits addressing harms that result from activity on a website. But in order to prevail in court, parties also need a strong legal claim that a harm occurred. The Arizona voter suppression claim in this case may be one example of a weak legal claim. This is why previous reforms to CDA 230, such as SESTA (“Stop Enabling Sex Traffickers Act”) and FOSTA (“Fight Online Sex Trafficking Act”), created an exception to CDA 230 immunity and amended other laws to make it easier to bring lawsuits against interactive computer services to fight child pornography and sex trafficking. But <span data-tooltip="See <a href='https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3362975'>Eric Goldman, The Complicated Story of FOSTA and Section 230, First Amendment Law Review, Vol. 17, page 279, 2019</a>">there is evidence</span> that FOSTA hurt sex workers more than its anti-trafficking advocates anticipated. And recall that under the First Amendment, the government would have a hard time criminalizing election-related speech.

  • See The Lawyers’ Committee for Civil Rights, Deceptive Election Practices and Voter Intimidation (2012), <a href="https://lawyerscommittee.org/wp-content/uploads/2015/07/DeceptivePracticesReportJuly2012FINALpdf.pdf">available on the Lawyers' Committee website</a>.
  • FOSTA was intended to strengthen legal claims against individuals participating in online sex trafficking. Some <span data-tooltip="<a href='https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3362975'>See Emily J. Born, Too Far and Not Far Enough: Understanding the Impact of FOSTA, 94 NYU Law Review 1623 (2019).</a>">scholars</span> have argued that it has not achieved this goal. Emily J. Born writes, “Not only is [FOSTA] creating issues for sex workers and free speech, but it is not even accomplishing its law enforcement objective because it does not necessarily create liability for the type of behavior it was enacted in response to.”

So, reforming CDA 230 might not prevent harms if there isn’t another law under which plaintiffs have a cause of action. And the First Amendment might protect harmful speech anyway.

In the next example, consider what should happen when the law does already proscribe certain kinds of commercial speech.


Tort Claim
Note: This post has been fabricated.

Hypothetical Case

  • Amazon is an online marketplace with over 353 million products. One such product is “silver solution,” sold by a third party, COVIDCURES, Inc. Celebrities and politicians have been touting colloidal silver as a cure for COVID-19, despite the lack of scientific evidence.
  • Regulators, such as the Federal Trade Commission and Food & Drug Administration, threatened to go after companies selling “scam” COVID-19 treatments.
  • Amazon’s customers reported COVIDCURES to Amazon for selling fake medicine shortly after it was listed, but Amazon did not remove the product from its marketplace.
  • After the product had remained on Amazon for one month, the Federal Trade Commission sued COVIDCURES and Amazon for misleading marketing to sell <span data-tooltip='The Federal Trade Commission Act prohibits "unfair or deceptive act[s] or practice[s] in or affecting commerce." 15 U.S.C. § 45(a)(1) (1914).<p><a href="https://www.ftc.gov/system/files/documents/public_statements/410531/831014deceptionstmt.pdf">Read the FTC Policy Statement on Deception</a>'>fake cures</span> for COVID-19.
  • Taylor, an individual worried about contracting COVID-19, bought the silver solution from COVIDCURES on Amazon. Unknown to Taylor, colloidal silver decreases the absorption of common antibiotics. Because the colloidal silver interfered with Taylor’s treatment of another infection, Taylor contracted sepsis and died. Taylor’s family sued Amazon and COVIDCURES under <span data-tooltip='California Consumer Legal Remedies Act (Cal. Civil Code § 1750)<ul><li>Declares as unlawful "methods of competition and unfair or deceptive acts or practices undertaken by any person in a transaction intended to result or which results in the sale or lease of goods or services to any consumer"<li>Consumers can seek actual damages for violations of the Act</ul>'>California consumer protection law</span>.

Question 4 of 6

If you were in charge at Amazon, what would you do about COVIDCURES?
So, what should the law say about this?

Your answer indicates that you support CDA 230 immunity for platforms.

You might think that Amazon is a massive platform and the company does not have the bandwidth to do quality control for every product sold by third parties on its website.

Here’s some background:

CDA 230 prevents platforms from being treated as the “publisher or speaker” of content provided by another “information content provider.” Even if platforms create a space, such as a marketplace, that encourages third-party users to post certain kinds of information, courts have rarely treated platforms as the creators or developers of user-provided content.

And CDA 230 can shield platforms from civil suits brought by a wide range of parties. This could include individuals who suffered from the website’s activities, as well as public authorities, such as the Federal Trade Commission and state attorneys general, who otherwise possess the legal authority to enforce consumer protection laws. One question scholars and commentators have raised is: who should have the power to hold platforms legally responsible for the activity of users on their websites—individuals, the government, both, or neither?

Proponents of CDA 230 argue that if any individual harmed by a post can sue a website, platforms will “<span data-tooltip="Fair Housing Council v. Roommates.com (9th Cir. 2008), <a href='http://cdn.ca9.uscourts.gov/datastore/opinions/2008/04/02/0456916.pdf'>opinion available</a>.">face death by ten thousand duck-bites.</span>” It is simply <span data-tooltip="See Professor Eric Goldman’s <a href='https://www.wsj.com/articles/should-amazon-be-responsible-when-its-vendors-products-turn-out-to-be-unsafe-11582898971'>argument</a>">economically infeasible</span>, they contend, for Internet companies to defend themselves against multiple lawsuits while operating a marketplace the size of Amazon’s. The cost of these suits might bankrupt an up-and-coming startup competitor, <span data-tooltip="For more on CDA 230’s competitive effects, see <a href='https://law.yale.edu/sites/default/files/area/center/isp/documents/new_controversies_in_intermediary_liability_law.pdf'>Professor Eric Goldman’s essay</a>">entrenching big tech</span>. Opponents counter that CDA 230 immunity was meant for “publishers” and “speakers,” and that what online marketplaces such as Amazon do in <span data-tooltip="For more on this idea, see <a href='https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3532691'>Professor Citron’s research</a>.">selling goods is far different from speech</span>.


  • Various state laws on unfair competition could hold platforms liable for failing to remove this sort of content. For example, the California Consumer Legal Remedies Act (Cal. Civil Code § 1750) declares as unlawful “methods of competition and unfair or deceptive acts or practices undertaken by any person in a transaction intended to result or which results in the sale or lease of goods or services to any consumer.” Consumers can seek actual damages for violations of the Act.
  • We don’t yet have the answer about what the law might say about this. Cases so far suggest that CDA 230 could shield platforms from FTC Act claims, but there are few cases on the issue. This could be a result of the FTC’s enforcement strategy; the agency may choose to use its resources to pursue claims that would not be presumptively blocked by CDA 230.
  • Federal Trade Commission v. LeadClick Media, LLC (2d Cir. 2016) – The FTC claimed that LeadClick, the manager of a network of online advertisers, and its parent company had used deceptive practices to market weight-loss products because affiliates in LeadClick’s network operated fake news sites to market the products of its client, LeanSpa, LLC. LeadClick argued that CDA 230 precluded liability. The Second Circuit found that “other circuits . . . have recognized 230 immunity as broad,” citing Almeida v. Amazon.com (11th Cir. 2006), which noted that “[t]he majority of federal circuits have interpreted the CDA to establish broad federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.” The court assessed whether LeadClick met the criteria required to qualify for CDA 230 immunity but concluded that it did not, because LeadClick was an “information content provider” of the deceptive content on the fake news sites; LeadClick was held liable under the FTC Act.
  • Papataros v. Amazon.com, Inc. (D.N.J. 2019) – The plaintiff purchased a defective hoverboard from Amazon, which injured her son. Citing the precedent of Oberdorf v. Amazon (3d Cir. 2019), the court held that CDA 230 protected Amazon from liability as a “speaker” and dismissed the plaintiff’s claims based on defective warnings or failure to warn. However, New Jersey law imposes strict liability on “sellers” of defective products, and the court ruled that Amazon could be liable as a “seller” of the defective product.
  • Fair Housing Council of San Fernando Valley v. Roommates.com, LLC (9th Cir. 2008) – Roommates.com required third parties to fill out questionnaires that induced respondents to express preferences in violation of fair housing laws. The court held that these forms were not protected by Section 230(c)(1). Instead, Roommates.com could be liable as an “information content provider” because it co-developed profile content. (“Roommate is ‘responsible’ at least ‘in part’ for each subscriber’s profile page, because every such page is a collaborative effort between Roommate and the subscriber.”)

Your answer indicates that you believe CDA 230 should not always protect platforms.

You might believe platforms have an obligation to heed the warnings of public officials. If platforms face lawsuits for harmful products on their websites, they might voluntarily decide to clean up their websites more.

Here’s some background:

CDA 230 prevents platforms from being treated as the “publisher or speaker” of content provided by another “information content provider.” Even if platforms create a space, such as a marketplace, that encourages third-party users to post certain kinds of information, courts have rarely treated platforms as the creators or developers of user-provided content.

And CDA 230 can shield platforms from civil suits brought by a wide range of parties. This could include individuals who suffered from the website’s activities, as well as public authorities, such as the Federal Trade Commission and state attorneys general, who otherwise possess the legal authority to enforce consumer protection laws. One question scholars and commentators have raised is: who should have the power to hold platforms legally responsible for the activity of users on their websites—individuals, the government, both, or neither?

Proponents of CDA 230 argue that if any individual harmed by a post can sue a website, platforms will “<span data-tooltip="Fair Housing Council v. Roommates.com (9th Cir. 2008), <a href='http://cdn.ca9.uscourts.gov/datastore/opinions/2008/04/02/0456916.pdf'>opinion available</a>.">face death by ten thousand duck-bites.</span>” It is simply <span data-tooltip="See Professor Eric Goldman’s <a href='https://www.wsj.com/articles/should-amazon-be-responsible-when-its-vendors-products-turn-out-to-be-unsafe-11582898971'>argument</a>">economically infeasible</span>, they contend, for Internet companies to defend themselves against multiple lawsuits while operating a marketplace the size of Amazon’s. The cost of these suits might bankrupt an up-and-coming startup competitor, <span data-tooltip="For more on CDA 230’s competitive effects, see <a href='https://law.yale.edu/sites/default/files/area/center/isp/documents/new_controversies_in_intermediary_liability_law.pdf'>Professor Eric Goldman’s essay</a>">entrenching big tech</span>. Opponents counter that CDA 230 immunity was meant for “publishers” and “speakers,” and that what online marketplaces such as Amazon do in <span data-tooltip="For more on this idea, see <a href='https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3532691'>Professor Citron’s research</a>.">selling goods is far different from speech</span>.


  • Various state laws on unfair competition could hold platforms liable for failing to remove this sort of content. For example, the California Consumer Legal Remedies Act (Cal. Civil Code § 1750) declares as unlawful “methods of competition and unfair or deceptive acts or practices undertaken by any person in a transaction intended to result or which results in the sale or lease of goods or services to any consumer.” Consumers can seek actual damages for violations of the Act.
  • We don’t yet have the answer about what the law might say about this. Cases so far suggest that CDA 230 could shield platforms from FTC Act claims, but there are few cases on the issue. This could be a result of the FTC’s enforcement strategy; the agency may choose to use its resources to pursue claims that would not be presumptively blocked by CDA 230.
  • Federal Trade Commission v. LeadClick Media, LLC (2d Cir. 2016) – The FTC claimed that LeadClick, the manager of a network of online advertisers, and its parent company had used deceptive practices to market weight-loss products because affiliates in LeadClick’s network operated fake news sites to market the products of its client, LeanSpa, LLC. LeadClick argued that CDA 230 precluded liability. The Second Circuit found that “other circuits . . . have recognized 230 immunity as broad,” citing Almeida v. Amazon.com (11th Cir. 2006), which noted that “[t]he majority of federal circuits have interpreted the CDA to establish broad federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.” The court assessed whether LeadClick met the criteria required to qualify for CDA 230 immunity but concluded that it did not, because LeadClick was an “information content provider” of the deceptive content on the fake news sites; LeadClick was held liable under the FTC Act.
  • Papataros v. Amazon.com, Inc. (D.N.J. 2019) – The plaintiff purchased a defective hoverboard from Amazon, which injured her son. Citing the precedent of Oberdorf v. Amazon (3d Cir. 2019), the court held that CDA 230 protected Amazon from liability as a “speaker” and dismissed the plaintiff’s claims based on defective warnings or failure to warn. However, New Jersey law imposes strict liability on “sellers” of defective products, and the court ruled that Amazon could be liable as a “seller” of the defective product.
  • Fair Housing Council of San Fernando Valley v. Roommates.com, LLC (9th Cir. 2008) – Roommates.com required third parties to fill out questionnaires that induced respondents to express preferences in violation of fair housing laws. The court held that these forms were not protected by Section 230(c)(1). Instead, Roommates.com could be liable as an “information content provider” because it co-developed profile content. (“Roommate is ‘responsible’ at least ‘in part’ for each subscriber’s profile page, because every such page is a collaborative effort between Roommate and the subscriber.”)

Your answer indicates that you oppose CDA 230 immunity for platforms.

You might believe that Amazon has an obligation to its customers to ensure no one is deceived by products it sells. You could think that platforms should only sell products that they can verify are safe.

Here’s some background:

CDA 230 prevents platforms from being treated as the “publisher or speaker” of content provided by another “information content provider.” Even if platforms create a space, such as a marketplace, that encourages third-party users to post certain kinds of information, courts have rarely treated platforms as the creators or developers of user-provided content.

And CDA 230 can shield platforms from civil suits brought by a wide range of parties. This could include individuals who suffered from the website’s activities, as well as public authorities, such as the Federal Trade Commission and state attorneys general, who otherwise possess the legal authority to enforce consumer protection laws. One question scholars and commentators have raised is: who should have the power to hold platforms legally responsible for the activity of users on their websites—individuals, the government, both, or neither?

Proponents of CDA 230 argue that if any individual harmed by a post can sue a website, platforms will “<span data-tooltip="Fair Housing Council v. Roommates.com (9th Cir. 2008), <a href='http://cdn.ca9.uscourts.gov/datastore/opinions/2008/04/02/0456916.pdf'>opinion available</a>.">face death by ten thousand duck-bites.</span>” It is simply <span data-tooltip="See Professor Eric Goldman’s <a href='https://www.wsj.com/articles/should-amazon-be-responsible-when-its-vendors-products-turn-out-to-be-unsafe-11582898971'>argument</a>">economically infeasible</span>, they contend, for Internet companies to defend themselves against multiple lawsuits while operating a marketplace the size of Amazon’s. The cost of these suits might bankrupt an up-and-coming startup competitor, <span data-tooltip="For more on CDA 230’s competitive effects, see <a href='https://law.yale.edu/sites/default/files/area/center/isp/documents/new_controversies_in_intermediary_liability_law.pdf'>Professor Eric Goldman’s essay</a>">entrenching big tech</span>. Opponents counter that CDA 230 immunity was meant for “publishers” and “speakers,” and that what online marketplaces such as Amazon do in <span data-tooltip="For more on this idea, see <a href='https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3532691'>Professor Citron’s research</a>.">selling goods is far different from speech</span>.


  • Various state laws on unfair competition could hold platforms liable for failing to remove this sort of content. For example, the California Consumer Legal Remedies Act (Cal. Civil Code § 1750) declares as unlawful “methods of competition and unfair or deceptive acts or practices undertaken by any person in a transaction intended to result or which results in the sale or lease of goods or services to any consumer.” Consumers can seek actual damages for violations of the Act.
  • We don’t yet have the answer about what the law might say about this. Cases so far suggest that CDA 230 could shield platforms from FTC Act claims, but there are few cases on the issue. This could be a result of the FTC’s enforcement strategy; the agency may choose to use its resources to pursue claims that would not be presumptively blocked by CDA 230.
  • Federal Trade Commission v. LeadClick Media, LLC (2d Cir. 2016) – The FTC claimed that LeadClick, the manager of a network of online advertisers, and its parent company had used deceptive practices to market weight-loss products because affiliates in LeadClick’s network operated fake news sites to market the products of its client, LeanSpa, LLC. LeadClick argued that CDA 230 precluded liability. The Second Circuit found that “other circuits . . . have recognized 230 immunity as broad,” citing Almeida v. Amazon.com (11th Cir. 2006), which noted that “[t]he majority of federal circuits have interpreted the CDA to establish broad federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.” The court assessed whether LeadClick met the criteria required to qualify for CDA 230 immunity but concluded that it did not, because LeadClick was an “information content provider” of the deceptive content on the fake news sites; LeadClick was held liable under the FTC Act.
  • Papataros v. Amazon.com, Inc. (D.N.J. 2019) – The plaintiff purchased a defective hoverboard from Amazon, which injured her son. Citing the precedent of Oberdorf v. Amazon (3d Cir. 2019), the court held that CDA 230 protected Amazon from liability as a “speaker” and dismissed the plaintiff’s claims based on defective warnings or failure to warn. However, New Jersey law imposes strict liability on “sellers” of defective products, and the court ruled that Amazon could be liable as a “seller” of the defective product.
  • Fair Housing Council of San Fernando Valley v. Roommates.com, LLC (9th Cir. 2008) – Roommates.com required third parties to fill out questionnaires that induced respondents to express preferences in violation of fair housing laws. The court held that these forms were not protected by Section 230(c)(1). Instead, Roommates.com could be liable as an “information content provider” because it co-developed profile content. (“Roommate is ‘responsible’ at least ‘in part’ for each subscriber’s profile page, because every such page is a collaborative effort between Roommate and the subscriber.”)

So, theoretically, CDA 230 covers more than just “speech” that occurs on platforms. If there were to be reform, how would the Internet change?

In the example below, consider one proposal to change CDA 230.

Third-party misinformation and encryption
Note: This post has been fabricated.

Hypothetical Case

  • Political dissidents wanted to protest Singapore’s strict lockdowns imposed as a result of COVID-19. They thought that if they could get enough citizens to break quarantine, the government would recognize how unpopular the restrictions against travel and commerce were.
  • The political dissidents launched a disinformation campaign on WhatsApp, sending individuals false information about the COVID-19 lockdowns. They even added “green check marks” to their account names to trick recipients into thinking the accounts were verified.
  • Individuals who received the message notified WhatsApp that this account was spreading fake news about the Singaporean government’s response to COVID-19.
  • WhatsApp did not clamp down on the messages impersonating the government. WhatsApp reiterated that <span data-tooltip="<a href='https://www.wsj.com/articles/facebooks-whatsapp-battles-coronavirus-misinformation-11586256870'>See the measures Whatsapp takes to fight misinformation here</a>">it is not technologically possible to monitor the content of messages on its platform because they are encrypted end-to-end</span>.
  • Casey, a Singaporean resident, received disinformation via WhatsApp claiming that the lockdown restrictions had been eased. When he left his apartment, the government of Singapore jailed him for six weeks for violating the actual stay-at-home order.
  • Casey and the Singaporean government sued WhatsApp (in the United States) for damages.


5/6

Question

If you were in charge at WhatsApp, what would you do?
So, what should the law say about this?

Your answer indicates that you support CDA 230 immunity for platforms.

You might believe that WhatsApp should not break its promise to provide customers an end-to-end encrypted communications service. You could also think that WhatsApp might not be able to monitor the billions of messages it delivers each day.

Here’s some background:

The law currently <span data-tooltip='<a href="https://www.lawfareblog.com/herrick-v-grindr-why-section-230-communications-decency-act-must-be-fixed">See Carrie Goldberg’s ideas on Herrick v. Grindr</a>'>does not penalize platforms for defective design</span>. This means that if a platform makes certain design choices, such as end-to-end encryption, and that feature causes harm, the platform is not responsible. Lawmakers are currently debating whether CDA 230 immunity should preclude lawsuits against platforms for content posted in encrypted channels. Proposals to reform CDA 230 would grant immunity only if platforms take action to protect individuals from harmful content. Privacy advocates support CDA 230 as it exists because pressuring platforms to take a more active role in supervising content that would otherwise receive little attention would <span data-tooltip='<a href="https://signal.org/blog/earn-it/">Learn why privacy advocates oppose the EARN IT Act</a>'>eliminate encrypted channels</span>, which could increase government surveillance of private communications.

  • Herrick v. Grindr (2d Cir. 2019) – The plaintiff’s ex-boyfriend used Grindr, a “hook-up” app, to launch a harassment campaign, creating fake profiles in the plaintiff’s name and directing users to contact the plaintiff at his home and workplace. The plaintiff sued Grindr for negligence (arguing that the app was defectively designed because it lacked safety features to prevent this type of dangerous conduct), deceptive business practices, false advertising, and negligent misrepresentation. The court dismissed the claims under CDA 230(c) because it confers “broad federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.”
  • See also Ben Ezra, Weinstein & Co. v. AOL (10th Cir. 2000) – The plaintiff, a computer software company, sued AOL for defamation, claiming that AOL published incorrect stock quote information supplied by third-party providers. The court dismissed the claim under CDA 230 because AOL did not create or develop the content and therefore was not an information content provider.
  • Doe v. SexSearch.com (N.D. Ohio 2007) – The plaintiff was arrested for having sex with a minor he met on SexSearch.com. On the website, the minor had claimed that she was of age when she was actually 14. The defendant, SexSearch, stated in its terms and conditions that it would "review, verify and approve" website profiles. Citing a similar case, Doe v. MySpace (W.D. Tex. 2007), in which a minor lied about her age to use MySpace and was later sexually assaulted, the court dismissed the case, concluding that SexSearch was not a “publisher” of the content that led to the plaintiff’s arrest.
  • Is there a difference between leaving messages unencrypted and encrypting them while letting the platform hold a key that can decrypt them? The two are functionally the same, as explained in the EFF article <a href="https://www.eff.org/deeplinks/2019/09/carnegie-experts-should-know-defending-encryption-isnt-absolutist-position">“Defending Encryption Isn't an ‘Absolutist’ Position”</a>; the sketch below illustrates why.
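To make that key-custody point concrete, here is a minimal, hypothetical Python sketch of the two models. It is our own illustration, not a description of WhatsApp’s actual system: it uses the symmetric Fernet recipe from the open-source cryptography package purely for simplicity (real end-to-end messengers rely on key-exchange protocols such as the Signal protocol), and the variable names are invented for this example.

    # A minimal, hypothetical sketch (not WhatsApp's actual design) comparing
    # end-to-end encryption with a platform-held ("escrowed") key.
    # Assumes the third-party "cryptography" package is installed: pip install cryptography
    from cryptography.fernet import Fernet

    SECRET = b"The lockdown has been eased!"

    # --- End-to-end model: only the two endpoints ever hold the key. ---
    endpoint_key = Fernet.generate_key()      # negotiated between the two users
    sender = Fernet(endpoint_key)
    ciphertext = sender.encrypt(SECRET)       # this is all the platform ever relays

    # Without the key, the platform cannot scan, moderate, or disclose the content.
    recipient = Fernet(endpoint_key)
    assert recipient.decrypt(ciphertext) == SECRET

    # --- Escrowed model: the platform keeps a copy of the key. ---
    escrow_key = Fernet.generate_key()
    platform = Fernet(escrow_key)             # the platform retains this key
    user = Fernet(escrow_key)
    token = user.encrypt(SECRET)
    print(platform.decrypt(token))            # the platform can read the message,
                                              # so it can also be compelled to hand it over

In the first model, a legal duty to monitor message content runs into a technical wall; in the second, monitoring becomes possible, but so does the government surveillance of private communications that privacy advocates warn about.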

Your answer indicates that you think CDA 230 immunity should be revoked for platforms in some cases.

You could believe that WhatsApp should try to preserve individuals’ privacy and should be allowed to encrypt messages. That said, you might still want it to be accountable when those messages cause harm.

Here’s some background:

The law currently <span data-tooltip='<a href="https://www.lawfareblog.com/herrick-v-grindr-why-section-230-communications-decency-act-must-be-fixed">See Carrie Goldberg’s ideas on Herrick v. Grindr</a>'>does not penalize platforms for defective design</span>. This means that if a platform makes certain design choices, such as end-to-end encryption, and that feature causes harm, the platform is not responsible. Lawmakers are currently debating whether CDA 230 immunity should preclude lawsuits against platforms for content posted in encrypted channels. Proposals to reform CDA 230 would grant immunity only if platforms take action to protect individuals from harmful content. Privacy advocates support CDA 230 as it exists because pressuring platforms to take a more active role in supervising content that would otherwise receive little attention would <span data-tooltip='<a href="https://signal.org/blog/earn-it/">Learn why privacy advocates oppose the EARN IT Act</a>'>eliminate encrypted channels</span>, which could increase government surveillance of private communications.

  • Herrick v. Grindr (2d Cir. 2019) – The plaintiff’s ex-boyfriend used Grindr, a “hook-up” app, to launch a harassment campaign, creating fake profiles in the plaintiff’s name and directing users to contact the plaintiff at his home and workplace. The plaintiff sued Grindr for negligence (arguing that the app was defectively designed because it lacked safety features to prevent this type of dangerous conduct), deceptive business practices, false advertising, and negligent misrepresentation. The court dismissed the claims under CDA 230(c) because it confers “broad federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.”
  • See also Ben Ezra, Weinstein & Co. v. AOL (10th Cir. 2000) – The plaintiff, a computer software company, sued AOL for defamation, claiming that AOL published incorrect stock quote information supplied by third-party providers. The court dismissed the claim under CDA 230 because AOL did not create or develop the content and therefore was not an information content provider.
  • Doe v. SexSearch.com (N.D. Ohio 2007) – The plaintiff was arrested for having sex with a minor he met on SexSearch.com. On the website, the minor had claimed that she was of age when she was actually 14. The defendant, SexSearch, stated in its terms and conditions that it would "review, verify and approve" website profiles. Citing a similar case, Doe v. MySpace (W.D. Tex. 2007), in which a minor lied about her age to use MySpace and was later sexually assaulted, the court dismissed the case, concluding that SexSearch was not a “publisher” of the content that led to the plaintiff’s arrest.
  • Is there a difference between leaving messages unencrypted and encrypting them while letting the platform hold a key that can decrypt them? The two are functionally the same, as explained in the EFF article <a href="https://www.eff.org/deeplinks/2019/09/carnegie-experts-should-know-defending-encryption-isnt-absolutist-position">“Defending Encryption Isn't an ‘Absolutist’ Position”</a>.

Your answer indicates that you oppose CDA 230 immunity for platforms.

You might think that WhatsApp should actively monitor messages related to the public health crisis. You could believe that preserving users’ privacy is less important than protecting individuals from health misinformation.

Here’s some background:

The law currently <span data-tooltip='<a href="https://www.lawfareblog.com/herrick-v-grindr-why-section-230-communications-decency-act-must-be-fixed">See Carrie Goldberg’s ideas on Herrick v. Grindr</a>'>does not penalize platforms for defective design</span>. This means that if a platform makes certain design choices, such as end-to-end encryption, and that feature causes harm, the platform is not responsible. Lawmakers are currently debating whether CDA 230 immunity should preclude lawsuits against platforms for content posted in encrypted channels. Proposals to reform CDA 230 would grant immunity only if platforms take action to protect individuals from harmful content. Privacy advocates support CDA 230 as it exists because pressuring platforms to take a more active role in supervising content that would otherwise receive little attention would <span data-tooltip='<a href="https://signal.org/blog/earn-it/">Learn why privacy advocates oppose the EARN IT Act</a>'>eliminate encrypted channels</span>, which could increase government surveillance of private communications.

  • Herrick v. Grindr (2d Cir. 2019) – The plaintiff’s ex-boyfriend used Grindr, a “hook-up” app, to launch a harassment campaign, creating fake profiles in the plaintiff’s name and directing users to contact the plaintiff at his home and workplace. The plaintiff sued Grindr for negligence (arguing that the app was defectively designed because it lacked safety features to prevent this type of dangerous conduct), deceptive business practices, false advertising, and negligent misrepresentation. The court dismissed the claims under CDA 230(c) because it confers “broad federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.”
  • See also Ben Ezra, Weinstein & Co. v. AOL (10th Cir. 2000) – The plaintiff, a computer software company, sued AOL for defamation, claiming that AOL published incorrect stock quote information supplied by third-party providers. The court dismissed the claim under CDA 230 because AOL did not create or develop the content and therefore was not an information content provider.
  • Doe v. SexSearch.com (N.D. Ohio 2007) – The plaintiff was arrested for having sex with a minor he met on SexSearch.com. On the website, the minor had claimed that she was of age when she was actually 14. The defendant, SexSearch, stated in its terms and conditions that it would "review, verify and approve" website profiles. Citing a similar case, Doe v. MySpace (W.D. Tex. 2007), in which a minor lied about her age to use MySpace and was later sexually assaulted, the court dismissed the case, concluding that SexSearch was not a “publisher” of the content that led to the plaintiff’s arrest.
  • Is there a difference between leaving messages unencrypted and encrypting them while letting the platform hold a key that can decrypt them? The two are functionally the same, as explained in the EFF article <a href="https://www.eff.org/deeplinks/2019/09/carnegie-experts-should-know-defending-encryption-isnt-absolutist-position">“Defending Encryption Isn't an ‘Absolutist’ Position”</a>.

Do you think CDA 230 is responsible for misinformation and disinformation on the Internet?

You're a CDA 230 Absolutist

You agree with CDA 230 as it supports content moderation today. <span data-tooltip='<a href="https://www.eff.org/issues/cda230">See the Electronic Frontier Foundation’s discussion of CDA 230</a>'>Many agree</span> that strong immunity for platforms is necessary for the Internet to flourish and innovate as it has since the CDA was passed. As the Internet becomes our primary medium for the exchange of ideas, First Amendment advocates worry that without CDA 230, platforms will censor more of the words and ideas expressed online. For a comprehensive look at CDA 230, check out <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3306737">this explainer</a>.

You’re a CDA 230 Reformer

You might agree with CDA 230 but think courts have interpreted it too expansively to encourage appropriate content moderation. Maybe platforms should bear some responsibility for spreading harmful misinformation, at least when it is brought to their attention and they fail to act. Or you might instead think that platforms censor content selectively and that CDA 230 should require them to protect more free speech on the Internet. We encourage you to <span data-tooltip='<a href="https://www.theverge.com/21273768/section-230-explained-internet-speech-law-definition-guide-free-moderation">See Casey Newton’s discussion of various proposals for reforming CDA 230</a>'>learn more</span> about the variety of academic and legislative proposals to amend CDA 230.

You’re a CDA 230 Abolitionist

You think CDA 230 creates the wrong incentives for content moderation and discourages platforms from addressing the <span data-tooltip='<a href="https://www.lawfareblog.com/herrick-v-grindr-why-section-230-communications-decency-act-must-be-fixed">See Carrie Goldberg’s critique of CDA 230</a>'>real social harms</span> that result from misinformation shared by users. Protecting the free speech of platforms was not, in and of itself, the original intent of the CDA; rather, the law was meant to encourage proactive moderation by platforms. Since then, many courts have misconstrued the “Good Samaritan” law as granting platforms blanket immunity. Whether the issue is politics or public health, platforms play a vital role in modern life, and they should take a more active role in ensuring their users’ well-being.

About WTF is CDA

WTF is CDA is a Berkman Klein Center Assembly Student Fellowship project by Samuel Clay, Jess Eng, Sahar Kazranian, Sanjana Parikh, and Madeline Salinas. We'd like to thank Jenny Fan for designing the WTF is CDA website. We are immensely grateful for the advice and support from Hilary Ross, Zenzele Best, Jonathan Zittrain, Danielle Citron, Tejas Narechania, Jeff Kosseff, Oumou Ly, John Bowers and the 2020 Assembly Student Fellowship Cohort.

Legal Disclaimer: The information contained in this site is provided for informational purposes only, and should not be construed as legal advice on any subject matter. You should not act or refrain from acting on the basis of any content included in this site without seeking legal or other professional advice. WTF is CDA inherits the Berkman Klein Center privacy policy.

© Samuel Clay, Jess Eng, Sahar Kazranian, Sanjana Parikh, Madeline Salinas.

Meet the team

Samuel Clay
Jess Eng
Sahar Kazranian
Sanjana Parikh
Madeline Salinas

Supported by