Google thinks a US Supreme Court case could radically change the internet


Google has warned that a ruling against it in a pending Supreme Court case could jeopardize the entire internet by removing an important defense against lawsuits over content moderation decisions involving artificial intelligence (AI).

Section 230 of the Communications Decency Act of 1996 currently provides a broad "liability shield" covering how companies moderate content on their platforms.

However, as reported by CNN, Google argued in a legal filing that, should the Supreme Court decide in favor of the plaintiff in the Gonzalez v. Google case, which revolves around YouTube’s algorithms recommending pro-ISIS content to users, the internet could be flooded with dangerous, offensive and extremist content.

Automation in moderation

As part of an almost 27-year-old law, one already targeted for reform by US President Joe Biden, Section 230 is not equipped to address modern developments such as artificially intelligent recommendation algorithms, which is where the trouble begins.

The core of Google’s argument is that the internet has grown so much since 1996 that integrating artificial intelligence into content moderation has become a necessity. “Virtually no modern website would function if users had to sort through content themselves,” the filing reads.

That “plethora of content” means tech companies have to use algorithms to present it to users in a manageable way, from search engine results to flight deals and job listings.

Google also noted that while, under existing law, simply refusing to moderate their platforms is a perfectly legal way for technology companies to avoid liability, doing so risks turning the internet into a “virtual cesspool”.

The tech giant also pointed out that YouTube’s Community Guidelines expressly prohibit terrorist content, adult content, violence and “other dangerous or objectionable content”, and that it constantly tweaks its algorithms to pre-emptively block banned material.

It also claimed that “approximately” 95% of videos violating YouTube’s “violent extremism policy” were automatically detected in the second quarter of 2022.

Nevertheless, the petitioners in the case claim that YouTube has failed to remove all ISIS-related content, thereby aiding “the rise of ISIS” to prominence.

In an effort to further deflect liability on this issue, Google responded that YouTube’s algorithms recommend content to users based on similarities between a piece of content and content the user has already shown interest in.
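To give a rough sense of what similarity-based recommendation means in practice (this is a toy sketch, not YouTube’s actual system, whose details are not public; the item names and feature vectors below are invented for illustration):

```python
# Minimal sketch of content-based recommendation via cosine similarity.
# Illustrative only: item names and feature vectors are made up, and real
# recommender systems are vastly more complex than this.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical catalogue: item name -> feature vector (e.g. topic weights).
catalogue = {
    "cooking_tutorial": [0.9, 0.1, 0.0],
    "baking_basics":    [0.8, 0.2, 0.0],
    "news_clip":        [0.1, 0.1, 0.8],
}

def recommend(watched, catalogue, top_n=2):
    """Rank unwatched items by their best similarity to anything already watched."""
    scores = {
        name: max(cosine_similarity(vec, catalogue[w]) for w in watched)
        for name, vec in catalogue.items()
        if name not in watched
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend({"cooking_tutorial"}, catalogue))
# -> ['baking_basics', 'news_clip']
```

The point of the sketch is the one Google makes in its filing: the system surfaces whatever most resembles what a user already watches, without any judgement about whether that content is desirable.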

This is a complicated matter, and while it’s easy to subscribe to the idea that the internet has grown too big for manual moderation, it’s equally compelling to suggest that companies should be held accountable when their automated solutions fall short.

After all, if even tech giants can’t guarantee what appears on their own platforms, people relying on content filters and parental controls can’t be sure those tools will effectively block objectionable content.
