A Year of Content Moderation and Section 230



Matthew Feeney and Will Duffield

Many Americans will remember 2020 as the year of the COVID-19 pandemic and the Trump–Biden presidential election campaign. As COVID-19 spread and the election drew closer, millions of Americans took to online platforms to express their opinions, theories, and stories and to seek information. Platforms were put in the unenviable position of developing content moderation policies for the pandemic and the election season, trying to halt the spread of potentially life-threatening medical misinformation and political conspiracy theories. These efforts made "Big Tech" content moderation one of the most discussed legislative issues of the year. President Trump and Joe Biden both called for the repeal of Section 230 of the Communications Decency Act – the most cited law in content moderation debates – and lawmakers from both sides of the aisle, as well as regulators at federal agencies, have released Section 230 proposals. This post analyzes what 2020 can teach lawmakers, policy professionals, and regulators about the future of content moderation.

Section 230 of the Communications Decency Act is a critical but widely misunderstood law. Passed in 1996, Section 230 includes two key provisions, the so-called "sword" and "shield" of the law. The shield states that interactive computer services are not – with few exceptions – treated as the publishers of content their users upload. The sword states that such services are free to moderate content as they see fit. Although social media platforms are the most discussed institutions in Section 230 debates, interactive computer services also include websites such as Wikipedia and Reddit as well as comments sections such as those found on newspaper websites.

The law allows those who want to restrict spam, pornography, and other legal content from their websites to do so without having to worry that such moderation might make them liable for content uploaded by users. This is an important protection, especially given the scale of user‐​generated content.

Every day, users upload hundreds of thousands of hours of video to YouTube and send hundreds of millions of tweets. Each minute, Facebook users post about half a million comments. YouTube, Twitter, and Facebook may be household names, but they are among the millions of websites people all across the world use to upload and access information.

The vast majority of this information is legal. However, firms big and small make the understandable decision to distance themselves from content they consider harmful and offensive. Values vary, and accordingly Facebook, Pornhub, Wikipedia, and every blog with a comments section each apply different content moderation policies.

COVID-19 and the American presidential election have made these value judgments front‐​page news. They have provided much for policy makers to learn.

The Content Moderation State of Exception

Beginning in late January, a novel coronavirus quickly spread from Wuhan, China to the rest of the world. False information about the virus, and fear of its effects, moved even more quickly, dramatically altering platforms' approach to content moderation.

Prior to the COVID-19 pandemic, misinformation was generally understood as causing only indirect harm. Most of the misinformation discourse concerned political misinformation, which might mislead voters into choosing poor policies or candidates. Outside of "dangerous speech" likely to incite immediate violence, misinformation presented only a contingent threat.

However, false information about the coronavirus can cause direct, if not immediate, harm. Social media posts claiming the epidemic was a Deep State hoax or globalist conspiracy might undermine the credibility of policy responses necessary to slow the spread of the virus. Speech that disputed medical knowledge about the disease and its spread might encourage people to gather in groups or forego simple actions like hand washing. The evident harms from such speech would be additional deaths, compared to a world where public health advice was more widely believed.

As such, platforms adopted stricter policies toward COVID-19 misinformation, and accepted more false positives in enforcing them. Given the novel, incompletely understood nature of the threat, this heightened sensitivity was initially widely applauded, and if anything, platforms were encouraged to do more to limit disinformation.

Rise of the Machines

COVID-19 made office life untenable, sending workers home across a wide range of industries. Platform content moderators were not immune from this shift. While some work could be done remotely, privacy concerns precluded full moderation from home, rapidly increasing the use of algorithms in making final moderation decisions.

Handing decision-making to algorithms and limiting appeals spurred an increase in takedowns across the board. Given the grave harm associated with this sort of content, potential child safety violations saw the largest increase in removals, though under platforms' newly broadened misinformation policies, COVID-19 skepticism was hit hard as well.

This shift is reflected in the transparency reports released by the most prominent social media companies. Platforms have reported dramatically higher numbers of takedowns and, without humans in the loop to review appeals, far fewer reinstatements. YouTube removed 6,100,000 videos between January and March of this year; between April and June, the number nearly doubled to 11,400,000. Platforms have not become more exacting in their standards; they have probably been accepting more mistakes. YouTube's increase in child safety takedowns does not necessarily mean it has been catching more abusive content. Rather, it may just be removing more clips of poolside family gatherings while trying to tackle material that either sexualizes children or includes nudity. A blog post accompanying YouTube's latest transparency report states: "When reckoning with greatly reduced human review capacity due to COVID-19, we were forced to make a choice between potential under‐enforcement or potential over‐enforcement… Because responsibility is our top priority, we chose the latter — using technology to help with some of the work normally done by reviewers."

On Facebook, some content types saw a decrease in overall removals compared to past years. Facebook halted its content appeals process, and the effects have been stark. In the first quarter of 2020, the platform restored 121,000 pieces of content that had been removed in error for violating platform prohibitions on graphic violence. In the second quarter of the year, Facebook restored a mere 200 pieces of content.

Institutional Reliance

Without perfect knowledge of the virus and emerging public health best practices, the largest platforms couldn’t simply prohibit COVID-19 misinformation per se. Instead, they prohibited “Denial of global or local health authority recommendations,” explicitly binding the validity of their moderation decisions to the veracity of expert advice.

To the extent that public health officials' advice was correct, this outsourcing of expertise made it easier for platforms to justify their content moderation decisions. Rather than forming its own in-house medical information unit, Facebook could use the CDC and other medical institutions as proxy fact-checkers. Unfortunately, early official communications concerning mass mask-wearing and asymptomatic transmission were incomplete and incorrect. In late February, U.S. Surgeon General Jerome Adams tweeted "Seriously people- STOP BUYING MASKS! They are NOT effective in preventing general public from catching #Coronavirus but if healthcare providers can't get them to care for sick patients, it puts them and our communities at risk!" As such, platforms at least ostensibly committed themselves to removing true statements about mask wearing while they promoted official mask skepticism. In practice, however, platforms did not remove mask advocacy.

Whether motivated by a genuine lack of understanding about the value of even non-NIOSH (National Institute for Occupational Safety and Health) approved masks, or a noble lie intended to preserve masks for healthcare workers, early calls to prohibit mask advertisements, echoed by Sens. Warner (D-VA) and Blumenthal (D-CT), were accepted by the largest platforms. As the value of mask wearing became more widely accepted, Facebook nevertheless removed posts organizing the distribution of hand-sewn masks for "breaking its guidelines against regulated goods and services."

Platforms also initially deferred to government social distancing policies in determining whether users could organize mass events on their services. In April, it was reported that Facebook removed events organizing anti-lockdown protests after being instructed to do so by state governments, though state officials denied these claims. Regardless of the actual decision-maker, the removal of anti-lockdown protest event pages in Nebraska, California, and New Jersey heightened concerns that content moderation's opacity could render it a potent vehicle for extralegal state censorship. Responses to platform removals of anti-lockdown protests illustrated growing dissatisfaction on the political right with the COVID-19 content moderation state of exception.

Increased Expectations and Backlash

As the pandemic continued and the ongoing lockdowns began to inflict a great economic and psychological toll, speech about the pandemic on social media became more explicitly political, and potential responses to the pandemic were understood in increasingly partisan terms. Moderating COVID-19 misinformation increasingly seemed to affect conservatives more than liberals. That platforms, perhaps following the lead of state governments and public health experts, refrained from removing Black Lives Matter protest events in early June as they had anti-lockdown protests in April solidified the right's perception of political bias.

Others, however, saw platforms’ coronavirus policies as proof that these services can indeed do more to prevent other forms of online harm, and hoped that they would “transfer their vigor for battling coronavirus misinformation to the many other flavors of falsehood.” To some extent, as the pandemic has dragged on and most moderators have stayed at home, this has already happened. Speech of all stripes has been moderated in large part by algorithm for much of this year, without the possibilities for appeal that existed before the pandemic. This has already prompted intense, continuing backlash, turning content moderation into an election issue when it might have otherwise stayed out of the political spotlight.

Section 230 and Political Speech

2020 was not the first year content moderation made headlines, but decisions associated with political speech made Section 230 one of the most discussed issues of the year.

For years, conservative activists and politicians have claimed that Silicon Valley's content moderation policies disproportionately limit conservative speech. Although such activists and politicians often cite anecdotes, data supporting claims of bias have been notable by their absence. Nonetheless, a belief in "Big Tech" "censorship" has become an article of faith for much of the American conservative policy and political community.

Editorial and content moderation decisions in the past year have only strengthened belief in Big Tech bias and censorship. In May, Twitter attached labels to three of President Trump’s tweets. Twitter labeled two tweets associated with claims about mail‐​in voting as “potentially misleading.” Twitter also added a label to a tweet that referenced police misconduct protests, claiming that it violated Twitter’s policies governing the glorification of violence.

A few days later, President Trump signed an executive order on "preventing online censorship." The executive order required – among other things – that the National Telecommunications and Information Administration (NTIA) file a rulemaking petition with the Federal Communications Commission (FCC) to narrow the scope of Section 230's sword and shield provisions, reading contingency into its protections for platform content moderation. NTIA filed its petition in July, and the FCC accepted public comments on the proposal throughout August. In September, the White House abruptly withdrew the re-nomination of Republican FCC commissioner Mike O'Rielly after he criticized the order.

In July, Parler, a social media site marketed as governed in line with the First Amendment, experienced a surge in visits. Much of this increase in traffic was no doubt due to well-known Trump allies joining the platform. TheDonald.win, a social media site catering to Trump fans, also experienced an increased number of visits during this time. The increase correlates with Reddit shutting down the r/The_Donald forum.

In August, President Trump issued an executive order banning TikTok, a short-form mobile video app owned by Chinese company ByteDance. The order, seemingly intended to force a sale of TikTok to a U.S. firm, was motivated by concerns that the Chinese Communist Party could use the app to harvest Americans' data. Given the broadly unregulated nature of commercial data brokers, any information provided by TikTok could be gained elsewhere. Nevertheless, the app became a focal point for fears of a rising, meddlesome China. While the order has since been stayed by the courts, it has already cast the Apple App Store and Google Play Store, which would have been used to implement the ban, as tools of American foreign policy, compromising American leadership of an open internet.

On Capitol Hill, members of both political parties and both houses of Congress introduced a range of legislation related to Section 230. Some of these bills sought to address alleged bias in Silicon Valley, such as the unambiguously named Stopping Big Tech's Censorship Act, introduced by Sen. Kelly Loeffler (R-GA), and Rep. Paul Gosar's (R-AZ) Stop the Censorship Act.

Other Section 230 legislation sought to tackle advertisements, Silicon Valley's perceived lack of transparency, and child sexual exploitation material (CSAM). These bills were introduced amid an increasingly ferocious election campaign, in which both President Trump and former Vice President Joe Biden expressed support for repealing Section 230.

However, despite bipartisan complaints about Section 230, no bill looked likely to make it out of Congress. Broadly speaking, Republican lawmakers are convinced major social media sites take down too much content, while their Democratic colleagues believe they leave too much up. Writing bills that satisfy both of these concerns at once seems impossible.

One exception to the lack of Section 230 bipartisanship is the EARN IT Act, introduced by Sens. Lindsey Graham (R-SC) and Richard Blumenthal (D-CT). The bill would make Section 230 protections contingent on firms adhering to a set of practices designed to tackle the spread of CSAM. These practices would be determined by, among other officials, the attorney general. Civil libertarians were quick to point out that the EARN IT Act posed a threat to encryption, putting the safety and privacy of millions of Americans at risk. The Senate Judiciary Committee voted to advance the EARN IT Act in July. In September, Reps. Sylvia Garcia (D-TX) and Ann Wagner (R-MO) introduced a House version of the EARN IT Act.

In recent weeks, Section 230 has been one of the most discussed topics in Washington, D.C. thanks to Twitter and Facebook limiting the spread of articles published by the New York Post. The articles were based on documents allegedly linked to Hunter Biden, Joe Biden's son, and his business dealings with Ukrainian and Chinese energy firms. According to Twitter, the company limited sharing of the links because the Post stories violated its policy on hacked material. Facebook claimed that the Post's reporting violated its misinformation policy.

Conservative journalists, activists, and politicians were quick to cry foul, alleging that Facebook's and Twitter's decisions were a blatant act of election interference in favor of Biden. The Senate Judiciary Committee's Republican members voted to subpoena Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey, who were both already set to testify before the Senate Commerce Committee alongside Google CEO Sundar Pichai.

At last week's Senate Commerce Committee hearing, Republican lawmakers criticized Dorsey and Zuckerberg for Twitter's and Facebook's content moderation policies.

Although Facebook’s and Twitter’s treatment of the New York Post’s reporting was legal, it ignited a fury on the political right that is unlikely to fade after this week’s election.

Lessons Learned

Platforms’ responses to COVID-19 illustrate that, while platforms can do more to address harmful content, increased vigilance inescapably brings more false positives and eventual backlash. Automation is not a panacea, and policies accepted in response to misleading medical advice cannot be easily applied to other forms of disinformation. While platforms may effectively declare a state of exception, maintaining it for any length of time invites mounting political pressure.

Platform independence is costly, but necessary if platforms are to avoid the political fray. Reliance on external expertise can improve the alacrity and legitimacy of platform responses to a novel crisis, but it risks enforcing error at scale if platforms adopt unfounded expert advice. Indeed, if the experts turn out to be wrong, as they were about masks and asymptomatic transmission, the knowledge that platforms officially endorsed misinformation may do lasting harm to the legitimacy of their governance.

Additionally, the past year has shown that the contemporary platform Internet is not so static as its critics would have us believe. TikTok arose as a real competitor to major American platforms, demonstrating that a novel product can overcome the barriers presented by network effects and bigness, while Parler demonstrated that content moderation concerns can create viable market niches.

Whatever the results of tomorrow's election, social media content moderation will continue to be a dominant feature of legislative debates. If Trump loses, there is a good chance he and many of his supporters will portray the loss as a "Big Tech" coup, a rhetorical strategy that will likely have lasting political effects. While a Biden administration is unlikely to be as outwardly aggressive as the Trump administration on Section 230 revisions, we should nonetheless expect a Biden administration to urge Democrats on Capitol Hill to pursue Section 230 legislation that tackles misinformation and election interference, perhaps by requiring "reasonable" platform policies concerning such speech. If Trump wins, we should expect him and his allies on Capitol Hill and in federal agencies to redouble their efforts to amend Section 230 to impose must-carry requirements.

In either event, Section 230 will be much discussed in the coming weeks and months. We can only hope that the lessons of the last year help inform such discussions, perhaps the most important of which is that the platforms, for all their problems and mistakes, still provided a way for many people to express their political and other views. That is valuable, even if often taken for granted.


Source: https://www.cato.org/

