Meta, Twitter, Microsoft and others urge Supreme Court to disallow lawsuits against tech algorithms | CNN Business



Washington (CNN) —

A wide range of companies, Internet users, academics and even human rights experts defended Big Tech’s liability shield Thursday in a landmark Supreme Court case over YouTube’s algorithms, with some arguing that excluding AI-based recommendation engines from federal legal protections would lead to sweeping changes to the open internet.

The diverse group weighing in at the Court ranged from big tech companies like Meta, Twitter and Microsoft to some of Big Tech’s most vocal critics, including Yelp and the Electronic Frontier Foundation. Even Reddit and a collection of volunteer Reddit moderators got involved.

In amicus briefs, the companies, organizations and individuals said the federal law whose scope the Court could narrow in the case, Section 230 of the Communications Decency Act, is vital to the basic functioning of the web. Section 230 has been used to shield all websites, not just social media platforms, from lawsuits over third-party content.

The question at the heart of the case, Gonzalez v. Google, is whether Google can be sued for recommending pro-ISIS content to users through its YouTube algorithm; the company has argued that Section 230 precludes such litigation. But the plaintiffs in the case, the relatives of a person killed in a 2015 ISIS attack in Paris, have argued that YouTube’s recommendation algorithm can be held liable under a US anti-terrorism law.

In their filing, Reddit and the Reddit moderators argued that a ruling allowing litigation against tech-industry algorithms could invite future lawsuits over even non-algorithmic forms of recommendation, and lawsuits potentially directed at individual Internet users.

“Reddit’s entire platform is built around users ‘recommending’ content for the benefit of others through actions such as upvoting and pinning content,” its filing read. “The consequences of petitioners’ claim in this case should not be understated: their theory would dramatically expand the potential for Internet users to be sued for their online interactions.”

Yelp, a longtime Google antagonist, argued that its business depends on serving its users relevant, non-fraudulent reviews, and that a ruling creating liability for recommendation algorithms could break Yelp’s core functions by effectively forcing it to stop curating reviews altogether and to display even those that may be manipulative or false.

“If Yelp could not analyze and recommend reviews without facing liability, the costs of submitting fraudulent reviews would disappear,” Yelp wrote. “If Yelp were to display all submitted reviews … business owners could submit hundreds of positive reviews for their own business with little effort or risk of penalty.”

Section 230 ensures platforms can moderate content to present the most relevant data to users from the vast amounts of information being added to the Internet every day, Twitter argued.

“It would take an average user approximately 181 million years to download all the data on the web today,” the company wrote.

If the Supreme Court adopts a new interpretation of Section 230 that safeguards platforms’ right to remove content but excludes protections for their right to recommend content, it would open up broad new questions about what it means to recommend something online, Meta argued in its filing.

“If merely displaying third-party content in a user’s feed qualifies as ‘recommendation,’ then many services will face potential liability for virtually all third-party content they host,” Meta wrote, “because nearly all decisions about how to sort, choose, arrange and display third-party content could be interpreted as ‘recommending’ that content.”

A decision ruling that tech platforms can be sued over their recommendation algorithms would put GitHub, the huge online code repository used by millions of developers, at risk, Microsoft said.

“The feed uses algorithms to recommend software to users based on projects they’ve previously worked on or shown interest in,” Microsoft wrote. The company added that for “a platform with 94 million developers, the consequences [of limiting Section 230] are potentially devastating to the world’s digital infrastructure.”

Microsoft’s search engine, Bing, and its social network, LinkedIn, also enjoy algorithmic protections under Section 230, the company said.

According to New York University’s Stern Center for Business and Human Rights, it is virtually impossible to design a rule that singles out algorithmic recommendation as a meaningful category for liability, and attempting to do so could even “result in the loss or obfuscation of a massive amount of valuable speech,” particularly speech belonging to marginalized or minority groups.

“Websites use ‘targeted recommendations’ because these recommendations make their platforms usable and useful,” the NYU filing said. “Without a liability shield for recommendations, platforms will remove large categories of third-party content, remove all third-party content, or abandon their efforts to make the vast amount of user content on their platforms accessible. In any of these scenarios, valuable speech will disappear, either because it is removed or because it is hidden within a poorly managed outpouring of information.”

