These 26 words “created the Internet.” Now the Supreme Court may come for them
Washington CNN —
Congress, the White House and now the US Supreme Court are turning their attention to a federal law that has long served as a legal shield for online platforms.
This week, the Supreme Court will hear oral arguments in two landmark cases dealing with online speech and content moderation. At the heart of the arguments is “Section 230,” a federal law that has been heavily criticized by both Republicans and Democrats on various grounds, but that tech companies and digital rights groups have defended as vital to the functioning of the Internet.
The tech companies involved in the litigation have cited the 27-year-old statute as part of their argument for why they shouldn’t have to face lawsuits alleging they provided substantial assistance to terrorist acts by hosting or algorithmically recommending terrorist content.
A pair of rulings against the tech industry could significantly narrow Section 230 and its legal protections for websites and social media companies. If that happens, the Court’s decisions could expose online platforms to a wave of new lawsuits over how they present content to users. That outcome would represent the most consequential limits ever placed on a legal shield that predates today’s biggest social media platforms and has allowed them to nip many content-related lawsuits in the bud.
And there could be more: The Supreme Court is still mulling whether to hear several additional cases with implications for Section 230, members of Congress have expressed renewed enthusiasm for rolling back the law’s protections for websites, and President Joe Biden has called for the same in a recent post.
Here’s everything you need to know about Section 230, the law dubbed “the 26 words that created the Internet.”
Passed in 1996 in the early days of the World Wide Web, Section 230 of the Communications Decency Act was intended to encourage startups and entrepreneurs. The text of the legislation recognized that the Internet was in its infancy and risked being stifled if website owners could be sued for things other people posted.
One of the law’s architects, Sen. Ron Wyden, D-Oregon, has said that without Section 230, “all online media would face an onslaught of bad-faith lawsuits and lobbying campaigns from the powerful” seeking to silence them.
He also said Section 230 directly empowers websites to remove content they believe is objectionable by creating a “Good Samaritan” safe harbor: under Section 230, websites are immune from liability for moderating content the way they see fit, not according to the preferences of others, although the federal government can still sue platforms for violations of criminal or intellectual property law.
Contrary to what some politicians have claimed, Section 230’s protections do not hinge on a platform being politically or ideologically neutral. Nor does the law require a website to be classified as a publisher in order to “qualify” for liability protection. Apart from meeting the definition of an “interactive computer service,” websites need not do anything to gain Section 230’s benefits; they apply automatically.
The central provision of the law holds that websites (and their users) cannot legally be treated as publishers or speakers of other people’s content. In plain English, this means that any legal liability tied to a given piece of content rests with the person or entity that created it, not with the platforms on which the content is shared or the users who re-share it.
Section 230’s seemingly plain language belies its broad impact. Courts have repeatedly accepted Section 230 as a defense to claims for defamation, negligence, and other allegations. In the past, it protected AOL, Craigslist, Google and Yahoo, creating a body of law so broad and influential that it is considered a pillar of today’s Internet.
“The free and open Internet as we know it could not exist without Section 230,” the Electronic Frontier Foundation, a digital rights group, has written. “Important court decisions on Section 230 have held that users and services cannot be sued for forwarding email, hosting online reviews, or sharing photos or videos that others find objectionable. It also helps to quickly resolve lawsuits that have no legal basis.”
In recent years, however, critics of Section 230 have increasingly questioned the law’s reach and proposed restrictions on the circumstances in which websites can invoke the legal shield.
For years, much of the criticism of Section 230 has come from conservatives who say the law allows social media platforms to suppress right-wing views for political reasons.
By safeguarding platforms’ freedom to moderate content as they see fit, Section 230 protects websites from lawsuits that might arise from that kind of viewpoint-based content moderation, even though social media companies have said they make content decisions based not on ideology but on violations of their policies.
The Trump administration sought to turn some of these criticisms into concrete policy that would have had significant consequences, had it succeeded. For example, in 2020 the Department of Justice released a legislative proposal for changes to Section 230 that would have created an eligibility test for websites seeking the law’s protections. That same year, the White House issued an executive order asking the Federal Communications Commission to interpret Section 230 more narrowly.
The executive order faced a number of legal and procedural problems, including the fact that the FCC is not part of the judicial branch; that it does not regulate social media or content moderation decisions; and that it is an independent agency that, by law, does not take direction from the White House.
While Trump-era efforts to shrink Section 230 never came to fruition, conservatives are still looking for opportunities to do so. And they are not alone. Since 2016, when the role of social media platforms in spreading Russian election disinformation opened a national dialogue about companies’ handling of toxic content, Democrats have increasingly criticized Section 230.
By safeguarding platforms’ freedom to moderate content as they see fit, Democrats have said, Section 230 has allowed websites to escape liability for hosting hate speech and misinformation that others have recognized as objectionable but that social media companies cannot or will not remove themselves.
The result is a bipartisan hatred of Section 230, even if the two sides can’t agree on why Section 230 is flawed or what policies might adequately take its place.
“I’d be willing to bet that if we voted on a simple repeal of Section 230, it would clear this committee with pretty much every vote,” Sen. Sheldon Whitehouse, D-Rhode Island, said at a Senate Judiciary Committee hearing last week. “The problem, where we get bogged down, is that we want more than just repealing 230. We want to repeal 230 and then have ‘XYZ.’ And we don’t agree on what ‘XYZ’ is.”
The deadlock has driven much of the push to change Section 230 to the courts, most notably the US Supreme Court, which now has the chance this term to dictate how far the law extends.
Tech critics have called for more legal exposure and accountability. “The massive social media industry has grown up largely shielded from the courts and the normal development of a body of law. It is highly unusual for a global industry that wields astonishing influence to be shielded from judicial inquiry,” the Anti-Defamation League wrote in a Supreme Court brief.
For the tech giants, and even for many of Big Tech’s fiercest competitors, that would be a bad thing, because it would undermine what has allowed the Internet to thrive. It could put many websites and users in unintended and abrupt legal jeopardy, they say, and dramatically change how some websites operate in order to avoid liability.
Social media platform Reddit has argued in a Supreme Court brief that if Section 230 is narrowed so that its protections do not cover a site’s recommendations of content a user might enjoy, it would “drastically expand the potential for Internet users to be sued for their online interactions.”
“‘Recommendations’ are what make Reddit a vibrant place,” the company and several volunteer Reddit moderators wrote. “It is users who upvote and downvote content, and thereby determine which posts gain prominence and which fade into obscurity.”
People would stop using Reddit and moderators would stop volunteering, the brief argued, under a legal regime that “carries a serious risk of being sued for ‘recommending’ a defamatory or otherwise unlawful post created by someone else.”
While this week’s oral arguments won’t be the end of the Section 230 debate, the outcome of the cases could lead to hugely significant changes, for better or worse, of a kind the Internet has never seen before.