Is U.S. law responsible for disinformation and misinformation posted on social media?
Find out what you think.
Recently, the Internet has been suffering from an “infodemic,” a crisis of misinformation and disinformation spreading faster than the COVID-19 pandemic. How should social media platforms deal with disinformation shared on their websites? Should the law require them to do anything? One particular law, Section 230 of the Communications Decency Act (“CDA 230”), affects the answer to this question.
This interactive tool will allow you to explore how the debate around CDA 230 relates to the dissemination of misinformation and disinformation today. Our quiz will help you develop an opinion about how CDA 230 impacts the moderation behavior and decisions of platforms.
Last updated: July 7th, 2020
For a more comprehensive view of CDA 230, check out this explainer or this guide.
This website is a project from the Assembly Student Fellowship of the Berkman Klein Center for Internet & Society at Harvard University. This site is currently evolving, and we are actively seeking feedback; please submit any comments here. This stuff is complicated and involves precise language and subtle concepts; we're working to make it as accurate and accessible as possible as we hone our own understanding. And of course, this should not be construed as legal advice.
Note: All of the posts used in this tool have been fabricated for educational purposes and should be considered only within the context of answering these questions. The user identities and handles are fictional, and the posts’ content is not based on, nor intended to imply, any real-world developments.
Your answer indicates that you support CDA 230 immunity for platforms.
Platforms should be encouraged to craft their own content moderation policies. CDA 230 immunity allows them to do this by protecting them from individuals’ lawsuits when they get it wrong.
Before CDA 230, Internet <span data-tooltip="All-encompassing for website, information intermediary, etc.">platforms</span>, such as Twitter in the example above, faced a difficult tradeoff when deciding whether to remove questionable posts by <span data-tooltip="Speaker or poster of content on a platform, individual or organization">users</span>. If a platform removed some undesirable content posted by its users, the law viewed the website as a "publisher" and held it liable for content, like Senator Smith’s defamatory Tweet, that remained on the site. The drafters of CDA 230 thought treating platforms as publishers created the wrong incentives because it discouraged platforms from cleaning up their websites.
CDA 230 was intended to realign those incentives by immunizing platforms from lawsuits whether they voluntarily removed objectionable content or left up objectionable posts they had not gotten around to removing. Its drafters meant to offer this protection to “Good Samaritans”: platforms that screen and remove objectionable content in good faith. While this protection offers platforms broad legal immunity, it has several important exceptions: violations of intellectual property law, federal crimes, violations of the Fight Online Sex Trafficking Act (FOSTA) and Stop Enabling Sex Traffickers Act (SESTA), and violations of the Electronic Communications Privacy Act (ECPA).
Note that CDA 230 also protects any Twitter users who retweeted Senator Smith’s tweet. If Dr. Fauci sued those users, those lawsuits would also fail under CDA 230, because it protects not only platforms <span data-tooltip="<a href='https://www.law.cornell.edu/uscode/text/47/230'>47 U.S.C. §230(c)(1)</a>">but also users of the platform</span> from liability for information shared by other users of the website.
Your answer indicates that you believe CDA 230 immunity should sometimes protect platforms.
Platforms should be encouraged to craft their own content moderation policies. Sometimes, they may leave up defamatory posts to serve other policy objectives, like supporting the free speech of public figures.
Before CDA 230, Internet <span data-tooltip="All-encompassing for website, information intermediary, etc.">platforms</span>, such as Twitter in the example above, faced a difficult tradeoff when deciding whether to remove questionable posts by <span data-tooltip="Speaker or poster of content on a platform, individual or organization">users</span>. If a platform removed some undesirable content posted by its users, the law viewed the website as a "publisher" and held it liable for content, like Senator Smith’s defamatory Tweet, that remained on the site. The drafters of CDA 230 thought treating platforms as publishers created the wrong incentives because it discouraged platforms from cleaning up their websites.
CDA 230 was intended to realign those incentives by immunizing platforms from lawsuits whether they voluntarily removed objectionable content or left up objectionable posts they had not gotten around to removing. Its drafters meant to offer this protection to “Good Samaritans”: platforms that screen and remove objectionable content in good faith. While this protection offers platforms broad legal immunity, it has several important exceptions: violations of intellectual property law, federal crimes, violations of the Fight Online Sex Trafficking Act (FOSTA) and Stop Enabling Sex Traffickers Act (SESTA), and violations of the Electronic Communications Privacy Act (ECPA).
Note that CDA 230 also protects any Twitter users who retweeted Senator Smith’s tweet. If Dr. Fauci sued those users, those lawsuits would also fail under CDA 230, because it protects not only platforms <span data-tooltip="<a href='https://www.law.cornell.edu/uscode/text/47/230'>47 U.S.C. §230(c)(1)</a>">but also users of the platform</span> from liability for information shared by other users of the website.
Your answer indicates that you oppose CDA 230 immunity for platforms.
According to this view, platforms should affirmatively monitor their websites for harmful content. You might believe that Twitter has an obligation to individuals to fact-check the information that it spreads.
Before CDA 230, Internet <span data-tooltip="All-encompassing for website, information intermediary, etc.">platforms</span>, such as Twitter in the example above, faced a difficult tradeoff when deciding whether to remove questionable posts by <span data-tooltip="Speaker or poster of content on a platform, individual or organization">users</span>. If a platform removed some undesirable content posted by its users, the law viewed the website as a "publisher" and held it liable for content, like Senator Smith’s defamatory Tweet, that remained on the site. The drafters of CDA 230 thought treating platforms as publishers created the wrong incentives because it discouraged platforms from cleaning up their websites.
CDA 230 was intended to realign those incentives by immunizing platforms from lawsuits whether they voluntarily removed objectionable content or left up objectionable posts they had not gotten around to removing. Its drafters meant to offer this protection to “Good Samaritans”: platforms that screen and remove objectionable content in good faith. While this protection offers platforms broad legal immunity, it has several important exceptions: violations of intellectual property law, federal crimes, violations of the Fight Online Sex Trafficking Act (FOSTA) and Stop Enabling Sex Traffickers Act (SESTA), and violations of the Electronic Communications Privacy Act (ECPA).
Note that CDA 230 also protects any Twitter users who retweeted Senator Smith’s tweet. If Dr. Fauci sued those users, those lawsuits would also fail under CDA 230, because it protects not only platforms <span data-tooltip="<a href='https://www.law.cornell.edu/uscode/text/47/230'>47 U.S.C. §230(c)(1)</a>">but also users of the platform</span> from liability for information shared by other users of the website.
In the previous example, we asked you to consider what should happen when a platform doesn’t remove a harmful tweet.
In the next example, consider what happens when a platform does take action.
Your answer indicates that you tend to support CDA 230 immunity for platforms; platforms should be free to moderate the content on their websites without being afraid of legal repercussions.
CDA 230, as well as the First Amendment, currently protects websites from lawsuits by users whose posts the websites have removed.
A website’s decision to remove content posted by users on its platform is protected not only by CDA 230 but also by the First Amendment. The First Amendment prohibits state actors from infringing private parties’ freedom of religion, speech, and assembly, and under current law, it affords private social media companies the right to control the “speech” shared on their websites. Under the First Amendment, the law can neither censor Twitter’s speech nor compel Twitter to host speech with which it disagrees. Some argue, however, that popular websites should, like state actors, be subject to the requirements of the First Amendment because they are so ubiquitous that they serve a “public function.” <span data-tooltip="Read the <a href='https://www.whitehouse.gov/presidential-actions/executive-order-preventing-online-censorship/'>Executive Order on Preventing Online Censorship</a>">Recent proposals</span> for reform have called into question what constitutes “good faith” content moderation and attempted to restore liability for platforms that edit content in a manner that demonstrates political bias. But so far, courts have rejected these interpretations of the law.
As a note, our hypothetical cases do not consider liability for the <span data-tooltip="Read the New America <a href='https://www.newamerica.org/oti/reports/how-internet-platforms-are-combating-disinformation-and-misinformation-age-covid-19/'>report</a>">various content moderation practices</span> social media platforms use today. Platforms now often seek to influence the impact of posts by, for example, labeling them as disinformation, fact-checking them, or downranking them in the news feed.
Your answer indicates that you believe CDA 230 immunity should be revoked for platforms that selectively remove content.
You could believe that platforms must exercise their editorial capacity free from political bias.
A website’s decision to remove content posted by users on its platform is protected not only by CDA 230 but also by the First Amendment. The First Amendment prohibits state actors from infringing private parties’ freedom of religion, speech, and assembly, and under current law, it affords private social media companies the right to control the “speech” shared on their websites. Under the First Amendment, the law can neither censor Twitter’s speech nor compel Twitter to host speech with which it disagrees. Some argue, however, that popular websites should, like state actors, be subject to the requirements of the First Amendment because they are so ubiquitous that they serve a “public function.” <span data-tooltip="Read the <a href='https://www.whitehouse.gov/presidential-actions/executive-order-preventing-online-censorship/'>Executive Order on Preventing Online Censorship</a>">Recent proposals</span> for reform have called into question what constitutes “good faith” content moderation and attempted to restore liability for platforms that edit content in a manner that demonstrates political bias. But so far, courts have rejected these interpretations of the law.
As a note, our hypothetical cases do not consider liability for the <span data-tooltip="Read the New America <a href='https://www.newamerica.org/oti/reports/how-internet-platforms-are-combating-disinformation-and-misinformation-age-covid-19/'>report</a>">various content moderation practices</span> social media platforms use today. Platforms now often seek to influence the impact of posts by, for example, labeling them as disinformation, fact-checking them, or downranking them in the news feed.
Your answer indicates that you oppose CDA 230 immunity for platforms.
You could think that platforms owe their audiences an obligation not to spread harmful content.
A website’s decision to remove content posted by users on its platform is protected not only by CDA 230 but also by the First Amendment. The First Amendment prohibits state actors from infringing private parties’ freedom of religion, speech, and assembly, and under current law, it affords private social media companies the right to control the “speech” shared on their websites. Under the First Amendment, the law can neither censor Twitter’s speech nor compel Twitter to host speech with which it disagrees. Some argue, however, that popular websites should, like state actors, be subject to the requirements of the First Amendment because they are so ubiquitous that they serve a “public function.” <span data-tooltip="Read the <a href='https://www.whitehouse.gov/presidential-actions/executive-order-preventing-online-censorship/'>Executive Order on Preventing Online Censorship</a>">Recent proposals</span> for reform have called into question what constitutes “good faith” content moderation and attempted to restore liability for platforms that edit content in a manner that demonstrates political bias. But so far, courts have rejected these interpretations of the law.
As a note, our hypothetical cases do not consider liability for the <span data-tooltip="Read the New America <a href='https://www.newamerica.org/oti/reports/how-internet-platforms-are-combating-disinformation-and-misinformation-age-covid-19/'>report</a>">various content moderation practices</span> social media platforms use today. Platforms now often seek to influence the impact of posts by, for example, labeling them as disinformation, fact-checking them, or downranking them in the news feed.
So the law can’t necessarily require platforms to take any speech down or keep it up. But lawmakers can shape the immunity CDA 230 offers to encourage or discourage content moderation.
In the next example, consider whether reform to CDA 230 would actually prevent the following harm.
Your answer indicates that you support CDA 230 immunity for platforms.
You might believe that Twitter has millions of users and can’t investigate every complaint. You could think that platforms should enable individuals to access all kinds of information or that users have the right to say what they want online.
We’re actually not sure whether this hypothetical lawsuit by the Biden campaign would even succeed. The only <span data-tooltip="<a href='https://law.justia.com/cases/federal/district-courts/arizona/azdce/2:2010cv01902/549648/29/'>See Arizona Green Party v. Bennett (D. Ariz. 2010)</a>">case</span> to discuss this provision of Arizona’s election law was unsuccessful. This tells us that sometimes, even if CDA 230 immunity does not protect platforms from lawsuits, platforms might still not change their behavior because other laws do not require them to do so.
CDA 230 is a strong legal protection against many types of lawsuits addressing harms that result from activity on a website. But in order to prevail in court, parties also need a strong legal claim that a harm occurred. The claim under Arizona’s voter suppression law in this case may be one example of a weak legal claim that a harm occurred. This is why previous reforms to CDA 230, such as SESTA (“Stop Enabling Sex Traffickers Act”) and FOSTA (“Fight Online Sex Trafficking Act”), created an exception to CDA 230 immunity and amended other laws to make it easier to bring lawsuits against interactive computer services to fight child pornography and sex trafficking. But <span data-tooltip="See <a href='https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3362975'>Eric Goldman, The Complicated Story of FOSTA and Section 230, First Amendment Law Review, Vol. 17, page 279, 2019</a>">there is evidence</span> that the FOSTA reform hurt sex workers more than its anti-trafficking advocates anticipated. And recall that under the First Amendment, the government would have a hard time criminalizing election-related speech.
Your answer indicates that you think CDA 230 immunity should not protect platforms in every instance.
You might believe that CDA 230 should not encourage platforms to flout election law. Because Arizona has a law against voter suppression, voters should be able to hold Twitter accountable if it does not comply.
We’re actually not sure whether this hypothetical lawsuit by the Biden campaign would even succeed. The only <span data-tooltip="<a href='https://law.justia.com/cases/federal/district-courts/arizona/azdce/2:2010cv01902/549648/29/'>See Arizona Green Party v. Bennett (D. Ariz. 2010)</a>">case</span> to discuss this provision of Arizona’s election law was unsuccessful. This tells us that sometimes, even if CDA 230 immunity does not protect platforms from lawsuits, platforms might still not change their behavior because other laws do not require them to do so.
CDA 230 is a strong legal protection against many types of lawsuits addressing harms that result from activity on a website. But in order to prevail in court, parties also need a strong legal claim that a harm occurred. The claim under Arizona’s voter suppression law in this case may be one example of a weak legal claim that a harm occurred. This is why previous reforms to CDA 230, such as SESTA (“Stop Enabling Sex Traffickers Act”) and FOSTA (“Fight Online Sex Trafficking Act”), created an exception to CDA 230 immunity and amended other laws to make it easier to bring lawsuits against interactive computer services to fight child pornography and sex trafficking. But <span data-tooltip="See <a href='https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3362975'>Eric Goldman, The Complicated Story of FOSTA and Section 230, First Amendment Law Review, Vol. 17, page 279, 2019</a>">there is evidence</span> that the FOSTA reform hurt sex workers more than its anti-trafficking advocates anticipated. And recall that under the First Amendment, the government would have a hard time criminalizing election-related speech.
Your answer indicates that you oppose CDA 230 immunity for platforms.
You might believe that platforms should ensure users are not misled by content posted on their website. You could think that Twitter has a special obligation to monitor its website for disinformation that could influence an election.
We’re actually not sure whether this hypothetical lawsuit by the Biden campaign would even succeed. The only <span data-tooltip="<a href='https://law.justia.com/cases/federal/district-courts/arizona/azdce/2:2010cv01902/549648/29/'>See Arizona Green Party v. Bennett (D. Ariz. 2010)</a>">case</span> to discuss this provision of Arizona’s election law was unsuccessful. This tells us that sometimes, even if CDA 230 immunity does not protect platforms from lawsuits, platforms might still not change their behavior because other laws do not require them to do so.
CDA 230 is a strong legal protection against many types of lawsuits addressing harms that result from activity on a website. But in order to prevail in court, parties also need a strong legal claim that a harm occurred. The claim under Arizona’s voter suppression law in this case may be one example of a weak legal claim that a harm occurred. This is why previous reforms to CDA 230, such as SESTA (“Stop Enabling Sex Traffickers Act”) and FOSTA (“Fight Online Sex Trafficking Act”), created an exception to CDA 230 immunity and amended other laws to make it easier to bring lawsuits against interactive computer services to fight child pornography and sex trafficking. But <span data-tooltip="See <a href='https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3362975'>Eric Goldman, The Complicated Story of FOSTA and Section 230, First Amendment Law Review, Vol. 17, page 279, 2019</a>">there is evidence</span> that the FOSTA reform hurt sex workers more than its anti-trafficking advocates anticipated. And recall that under the First Amendment, the government would have a hard time criminalizing election-related speech.
So reforming CDA 230 might not prevent harms if there isn’t another law under which plaintiffs have a cause of action. And the First Amendment might protect harmful speech anyway.
In the next example, consider what should happen when the law does already proscribe certain kinds of commercial speech.
Your answer indicates that you support CDA 230 immunity for platforms.
You might think that Amazon is a massive platform and that the company does not have the bandwidth to do quality control for every product sold by third parties on its website.
CDA 230 immunizes platforms from being treated as the “publisher” or “speaker” of information provided by other “information content providers,” such as their users. Even if platforms create a space, such as a marketplace, that encourages third-party users to post certain kinds of information, courts have rarely found platforms to be publishers of user-provided content.
And CDA 230 shields platforms from civil suits by all kinds of parties. These could include individuals harmed by activity on the website, as well as public authorities, such as the Federal Trade Commission and state attorneys general, who otherwise possess the legal authority to enforce consumer protection laws. One question scholars and commentators have raised is: who should have the power to hold platforms legally responsible for the activity of users on their website: individuals, the government, both, or neither?
Proponents of CDA 230 argue that if any individual harmed by a post can sue a website, platforms will “<span data-tooltip="Fair Housing Council v. Roommates.com (9th Cir. 2008), <a href='http://cdn.ca9.uscourts.gov/datastore/opinions/2008/04/02/0456916.pdf'>opinion available</a>.">face death by ten thousand duck-bites.</span>” It is simply <span data-tooltip="See Professor Eric Goldman’s <a href='https://www.wsj.com/articles/should-amazon-be-responsible-when-its-vendors-products-turn-out-to-be-unsafe-11582898971'>argument</a>">economically infeasible</span> for Internet companies to defend themselves against that volume of lawsuits while operating a marketplace the size of Amazon’s. The cost of these suits might bankrupt an up-and-coming startup competitor, <span data-tooltip="For more on CDA 230’s competitive effects, see <a href='https://law.yale.edu/sites/default/files/area/center/isp/documents/new_controversies_in_intermediary_liability_law.pdf'>Professor Eric Goldman’s essay</a>">entrenching big tech</span>. Opponents counter that CDA 230 immunity was meant for “publishers” and “speakers,” and that what online marketplaces, such as Amazon, do by <span data-tooltip="For more on this idea, see <a href='https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3532691'>Professor Citron’s research</a>.">selling goods is far different from speech</span>.
Your answer indicates that you believe CDA 230 should not always protect platforms.
You might believe platforms have an obligation to heed the warnings of public officials. If platforms face lawsuits for harmful products on their websites, they might voluntarily decide to clean up their websites more.
CDA 230 immunizes platforms from being treated as the “publisher” or “speaker” of information provided by other “information content providers,” such as their users. Even if platforms create a space, such as a marketplace, that encourages third-party users to post certain kinds of information, courts have rarely found platforms to be publishers of user-provided content.
And CDA 230 shields platforms from civil suits by all kinds of parties. These could include individuals harmed by activity on the website, as well as public authorities, such as the Federal Trade Commission and state attorneys general, who otherwise possess the legal authority to enforce consumer protection laws. One question scholars and commentators have raised is: who should have the power to hold platforms legally responsible for the activity of users on their website: individuals, the government, both, or neither?
Proponents of CDA 230 argue that if any individual harmed by a post can sue a website, platforms will “<span data-tooltip="Fair Housing Council v. Roommates.com (9th Cir. 2008), <a href='http://cdn.ca9.uscourts.gov/datastore/opinions/2008/04/02/0456916.pdf'>opinion available</a>.">face death by ten thousand duck-bites.</span>” It is simply <span data-tooltip="See Professor Eric Goldman’s <a href='https://www.wsj.com/articles/should-amazon-be-responsible-when-its-vendors-products-turn-out-to-be-unsafe-11582898971'>argument</a>">economically infeasible</span> for Internet companies to defend themselves against that volume of lawsuits while operating a marketplace the size of Amazon’s. The cost of these suits might bankrupt an up-and-coming startup competitor, <span data-tooltip="For more on CDA 230’s competitive effects, see <a href='https://law.yale.edu/sites/default/files/area/center/isp/documents/new_controversies_in_intermediary_liability_law.pdf'>Professor Eric Goldman’s essay</a>">entrenching big tech</span>. Opponents counter that CDA 230 immunity was meant for “publishers” and “speakers,” and that what online marketplaces, such as Amazon, do by <span data-tooltip="For more on this idea, see <a href='https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3532691'>Professor Citron’s research</a>.">selling goods is far different from speech</span>.
Your answer indicates that you oppose CDA 230 immunity for platforms.
You might believe that Amazon has an obligation to its customers to ensure no one is deceived by products it sells. You could think that platforms should only sell products that they can verify are safe.
CDA 230 immunizes platforms from being treated as the “publisher” or “speaker” of information provided by other “information content providers,” such as their users. Even if platforms create a space, such as a marketplace, that encourages third-party users to post certain kinds of information, courts have rarely found platforms to be publishers of user-provided content.
And CDA 230 shields platforms from civil suits by all kinds of parties. These could include individuals harmed by activity on the website, as well as public authorities, such as the Federal Trade Commission and state attorneys general, who otherwise possess the legal authority to enforce consumer protection laws. One question scholars and commentators have raised is: who should have the power to hold platforms legally responsible for the activity of users on their website: individuals, the government, both, or neither?
Proponents of CDA 230 argue that if any individual harmed by a post can sue a website, platforms will “<span data-tooltip="Fair Housing Council v. Roommates.com (9th Cir. 2008), <a href='http://cdn.ca9.uscourts.gov/datastore/opinions/2008/04/02/0456916.pdf'>opinion available</a>.">face death by ten thousand duck-bites.</span>” It is simply <span data-tooltip="See Professor Eric Goldman’s <a href='https://www.wsj.com/articles/should-amazon-be-responsible-when-its-vendors-products-turn-out-to-be-unsafe-11582898971'>argument</a>">economically infeasible</span> for Internet companies to defend themselves against that volume of lawsuits while operating a marketplace the size of Amazon’s. The cost of these suits might bankrupt an up-and-coming startup competitor, <span data-tooltip="For more on CDA 230’s competitive effects, see <a href='https://law.yale.edu/sites/default/files/area/center/isp/documents/new_controversies_in_intermediary_liability_law.pdf'>Professor Eric Goldman’s essay</a>">entrenching big tech</span>. Opponents counter that CDA 230 immunity was meant for “publishers” and “speakers,” and that what online marketplaces, such as Amazon, do by <span data-tooltip="For more on this idea, see <a href='https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3532691'>Professor Citron’s research</a>.">selling goods is far different from speech</span>.
So, theoretically, CDA 230 covers more than just “speech” that occurs on platforms. If there were to be reform, how would the Internet change?
In the example below, consider one proposal to change CDA 230.
Your answer indicates that you support CDA 230 immunity for platforms.
You might believe that WhatsApp should not violate its promise to customers of an end-to-end encrypted communications service. You could think that WhatsApp might not be able to monitor the billions of messages it delivers per day.
The law currently <span data-tooltip='<a href="https://www.lawfareblog.com/herrick-v-grindr-why-section-230-communications-decency-act-must-be-fixed">See Carrie Goldberg’s ideas on Herrick v. Grindr</a>'>does not penalize platforms for defective design</span>. This means that if a platform makes certain design choices, such as end-to-end encryption, and that feature causes harm, the platform is not responsible. Lawmakers are currently debating whether CDA 230 immunity should preclude lawsuits against platforms for content posted in encrypted channels. Proposals to reform CDA 230 would grant immunity only if platforms take action to protect individuals from harmful content. Privacy advocates support CDA 230 as it exists because reforms that push platforms to take a more active role in supervising content that would otherwise receive little attention would <span data-tooltip='<a href="https://signal.org/blog/earn-it/">Learn why privacy advocates oppose the EARN IT Act</a>'>eliminate encrypted channels</span>. This, in turn, could increase government surveillance of private communications.
Your answer indicates that you think CDA 230 immunity should be revoked for platforms in some cases.
You could believe that WhatsApp should try to preserve individuals’ privacy and should be allowed to encrypt messages. That said, you might still want it to be accountable when those messages cause harm.
The law currently <span data-tooltip='<a href="https://www.lawfareblog.com/herrick-v-grindr-why-section-230-communications-decency-act-must-be-fixed">See Carrie Goldberg’s ideas on Herrick v. Grindr</a>'>does not penalize platforms for defective design</span>. This means that if a platform makes certain design choices, such as end-to-end encryption, and that feature causes harm, the platform is not responsible. Lawmakers are currently debating whether CDA 230 immunity should preclude lawsuits against platforms for content posted in encrypted channels. Proposals to reform CDA 230 would grant immunity only if platforms take action to protect individuals from harmful content. Privacy advocates support CDA 230 as it exists because reforms that push platforms to take a more active role in supervising content that would otherwise receive little attention would <span data-tooltip='<a href="https://signal.org/blog/earn-it/">Learn why privacy advocates oppose the EARN IT Act</a>'>eliminate encrypted channels</span>. This, in turn, could increase government surveillance of private communications.
Your answer indicates that you oppose CDA 230 immunity for platforms.
You might think that WhatsApp should actively monitor messages related to the public health crisis. You could believe that preserving users’ privacy is less important than protecting individuals from health misinformation.
The law currently <span data-tooltip='<a href="https://www.lawfareblog.com/herrick-v-grindr-why-section-230-communications-decency-act-must-be-fixed">See Carrie Goldberg’s ideas on Herrick v. Grindr</a>'>does not penalize platforms for defective design</span>. This means that if a platform makes certain design choices, such as end-to-end encryption, and that feature causes harm, the platform is not responsible. Lawmakers are currently debating whether CDA 230 immunity should preclude lawsuits against platforms for content posted in encrypted channels. Proposals to reform CDA 230 would grant immunity only if platforms take action to protect individuals from harmful content. Privacy advocates support CDA 230 as it exists because reforms that push platforms to take a more active role in supervising content that would otherwise receive little attention would <span data-tooltip='<a href="https://signal.org/blog/earn-it/">Learn why privacy advocates oppose the EARN IT Act</a>'>eliminate encrypted channels</span>. This, in turn, could increase government surveillance of private communications.
Do you think CDA 230 is responsible for misinformation and disinformation on the Internet?
You agree with CDA 230 as it supports content moderation today. <span data-tooltip='<a href="https://www.eff.org/issues/cda230">See the Electronic Frontier Foundation’s discussion of CDA 230</a>'>Many agree</span> that strong immunity for platforms is necessary for the Internet to flourish and innovate as it has since the CDA was passed. As the Internet becomes our primary medium for the exchange of ideas, First Amendment advocates worry that without CDA 230, platforms will censor more words and ideas expressed online. For a more comprehensive look at CDA 230, check out <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3306737">this explainer</a>.
You might agree with CDA 230 but think courts have interpreted it too expansively to encourage appropriate content moderation. Maybe platforms should bear some responsibility for spreading harmful misinformation, at least when it is brought to their attention and they fail to act. Or you might instead think that platforms censor content selectively and that CDA 230 should require them to protect more free speech on the Internet. We encourage you to <span data-tooltip='<a href="https://www.theverge.com/21273768/section-230-explained-internet-speech-law-definition-guide-free-moderation">See Casey Newton’s discussion of various proposals for reforming CDA 230</a>'>learn more</span> about the variety of academic and legislative proposals to amend CDA 230.
You think CDA 230 creates the wrong incentives for content moderation and lets platforms avoid addressing the <span data-tooltip='<a href="https://www.lawfareblog.com/herrick-v-grindr-why-section-230-communications-decency-act-must-be-fixed">See Carrie Goldberg’s critique of CDA 230</a>'>real social harms</span> that result from misinformation shared by users. Protecting platforms’ free speech in and of itself was not the original intent of the CDA; rather, it was to encourage proactive moderation by platforms. Since then, many courts have misconstrued the “Good Samaritan” law as granting platforms blanket immunity. Whether the issue is politics or public health, platforms play a vital role in modern life. They should take a more active role in ensuring their users’ well-being.
About WTF is CDA
WTF is CDA is a Berkman Klein Center Assembly Student Fellowship project by Samuel Clay, Jess Eng, Sahar Kazranian, Sanjana Parikh, and Madeline Salinas. We'd like to thank Jenny Fan for designing the WTF is CDA website. We are immensely grateful for the advice and support from Hilary Ross, Zenzele Best, Jonathan Zittrain, Danielle Citron, Tejas Narechania, Jeff Kosseff, Oumou Ly, John Bowers and the 2020 Assembly Student Fellowship Cohort.
Legal Disclaimer: The information contained in this site is provided for informational purposes only, and should not be construed as legal advice on any subject matter. You should not act or refrain from acting on the basis of any content included in this site without seeking legal or other professional advice. WTF is CDA inherits the Berkman Klein Center privacy policy.
© Samuel Clay, Jess Eng, Sahar Kazranian, Sanjana Parikh, Madeline Salinas.