Supreme Court set to reconsider key principles of online speech

The cases could significantly affect the power and responsibilities of social media platforms.

For years, giant social networks like Facebook, Twitter and Instagram have operated on two crucial principles.

The first is that platforms have the power to decide what content to keep online and what to take down, free from government control. The second is that websites cannot be held legally responsible for most of what their users post online, shielding companies from lawsuits over defamatory speech, extremist content, and real-world harm linked to their platforms.

Now the Supreme Court is set to reconsider these rules, which could lead to the most significant reset of the doctrines governing online speech since U.S. authorities and courts decided in the 1990s to apply few regulations to the web.

On Friday, the Supreme Court is expected to discuss whether to hear two cases challenging laws in Texas and Florida that prohibit online platforms from removing certain political content. Next month, the court is due to hear a case that challenges Section 230, a 1996 law that shields platforms from liability for content posted by their users.

The cases could alter the hands-off legal stance the United States has largely taken toward online speech, potentially upending the businesses of TikTok, Twitter, Snap and Meta, which owns Facebook and Instagram.

"This is a time when everything could change," said Daphne Keller, a former Google lawyer who runs a program at Stanford University's Cyber Policy Center.

The cases are part of a growing global battle over how to deal with harmful speech online. In recent years, as Facebook and other sites have attracted billions of users and become influential vehicles of communication, the power they wield has come under increasing scrutiny. Questions have arisen about how social media may have unduly affected elections, genocides, wars and political debates.

In some parts of the world, lawmakers have moved to curb platforms' influence over speech. Last year, European Union lawmakers approved rules that require internet companies to implement procedures for removing illegal content and to be more transparent about how they recommend content to users.

In the United States, where freedom of speech is enshrined in the First Amendment, there has been less legislative action. While lawmakers in Washington have questioned the chief executives of tech giants for the past three years about the content they remove, proposals to regulate harmful content have had no success.

Partisanship has deepened the impasse. Republicans, some of whom have accused Facebook, Twitter and other sites of censoring them, have pressured the platforms to leave more content up. Democrats, in contrast, have said the platforms should remove more content, such as health misinformation.

The Supreme Court case challenging Section 230 of the Communications Decency Act is likely to have many rippl...
