Bleep Offers Customizable Content Moderation

Will Duffield

Content moderation is often framed in binary terms. Platforms are seen either to moderate too little, prompting demands for government-imposed responsibility, or too much, inspiring proposals for state-mandated restraint. No universally applicable policy or standard can satisfy both demands. Yet some of this tension can be resolved simply by giving users more control over content moderation.

At the Game Developers Conference in mid-March, Intel debuted an audio moderation tool called Bleep. Bleep will allow users to filter video game audio in real time, excluding potentially offensive speech. In most online games, pseudonymous players are drafted into one-off teams. While real-time audio communication is vital for teamwork, it's also a potent vector for harassment. Audio conversation is spread across many different matches, and happens in real time, making effective top-down moderation difficult. Game moderation usually relies on either heavy-handed word filters, or reporting, monitoring, and post-hoc account removal. Instead, Bleep runs on individual players' computers. Rather than attempting to identify and remove offensive speech universally, Bleep filters speech at the point of potential offense. Because standards of offense differ between persons, user-managed tools more effectively avoid false positives and false negatives where they matter most.

Despite its potential, Bleep was mostly ignored by the gaming press (beyond a short, neutral piece in PC Magazine) until early April, when a pseudonymous comic writer mocked it on Twitter, garnering 11,000 retweets. A deluge of negative coverage followed.

The bulk of the criticism centers on Bleep's provision of separate, user-adjustable sliders for different varieties of offensive speech. Users can choose to hear "all, most, some, or none" of what Intel's algorithm, in real time, determines to be "swearing," "misogyny," "white nationalism," or various other uglies. Critics mocked the idea that users would want some mixture of offensive speech, or that their preferred mixture might change. This criticism ignores the realities of language use and the fallibility of algorithmic moderation.
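To make the per-category design concrete, here is a minimal hypothetical sketch of how slider positions might map to client-side muting decisions. The category names, confidence thresholds, and function are illustrative assumptions for this post, not Intel's published implementation:

```python
# Hypothetical sketch of Bleep-style per-category sliders. The mapping
# from slider position to a confidence threshold is an assumption, not
# Intel's actual design. Each setting is the minimum classifier
# confidence at which an utterance in that category is muted; "all"
# (hear everything) disables muting for that category.
MUTE_THRESHOLD = {"none": 0.0, "some": 0.5, "most": 0.8, "all": None}

def should_mute(settings: dict, category: str, confidence: float) -> bool:
    """Decide, on this listener's machine, whether a flagged utterance is muted."""
    threshold = MUTE_THRESHOLD[settings.get(category, "all")]
    return threshold is not None and confidence >= threshold

# One player tolerates no swearing but is lax about crude banter:
settings = {"swearing": "none", "misogyny": "some"}
```

Because the decision runs per listener, two players in the same match can hear different versions of the same voice chat, which is exactly what a universal server-side filter cannot offer.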

Luke Plunkett of Kotaku was particularly critical, deeming it "ghastly that something like this ever left a whiteboard," before declaring that "Hateful speech is something that needs to be educated and fought, not toggled on a settings screen." I will assume that Plunkett means "educated against," but even this misses the mark. There is no universal standard of hatefulness or offense that can be agreed upon, let alone educated against or fought. In a liberal society, there shouldn't be. Diversity includes many competing understandings of the sacred and profane. In the absence of tools like Bleep, games employ blunt server-side filtering, moving the toggles out of players' view, or push audio chat off-platform, making it someone else's problem.

Server-side filtering doesn't eliminate the slider: game developers must still determine what constitutes hate speech and decide how much of it to allow. However, they must make a single choice for everyone and commit to enforcing it with little appreciation for nuance. Some see this as a benefit, or the only way to properly address hate. One critic writes, "If Intel wanted to address the concerns with hate speech, then why have any options at all?"

This criticism blurs together two categories of potential harm. If hate speech is a problem because it ruins the experience of other players, causing them stress and alarm when they're trying to enjoy a game, a user-operated filter is a good response. Users can tune out categories of speech they find offensive and don't have to worry about muting particular users while playing a game. It may also reduce harassment by reducing the potential payoffs. If offensive language is used to intentionally upset other players, the knowledge that insults will not be received may prevent their utterance.

However, for those worried about the potential downstream effects of hate speech, Bleep entirely fails to address the problem. If it is dangerous to normalize offensive language in private, or if hate speech causes hateful violence, it must be prohibited and policed everywhere. This totalizing approach to moderation expects game developers to do what the law cannot: prevent radicalization through rhetoric, or the expression of vulgar preferences.

In most cases, content moderation in gaming addresses everyday invective, the language of losing sports fans, not extremist instruction. Harassment is enough of a problem to justify some universal restrictions, but they have real costs, and are implemented to improve the user experience, not stamp out hate. Language is subjective. The understood meanings of words turn on the relationship between speaker and listener. “Queer” may be an expression of identity in some circles but an insult in others.

To the extent that gaming platforms have implemented universal prohibitions and filters, they have accepted a certain amount of collateral damage. Universal antiharassment measures often fall hardest on minority users. Most voice recognition systems are worse at recognizing African American Vernacular English, leading to higher false positive rates. Even well-designed filtering systems will make mistakes when constantly parsing millions of utterances. Enforcement becomes a matter of probability; moderators make tradeoffs between false positives and false negatives. By relying on a user-adjustable algorithm to filter received speech, Bleep lowers the stakes of failure in moderating harassment. Users may respond to mistakes individually, rather than trying to change a universal standard, a point Intel's Kim Pallister has also made.

If users want to avoid ever encountering swearing, and are willing to accept false positives, they can set the sliders to their most restrictive position. If they are more willing to risk mistakes in the other direction, hearing the occasional curse, but never missing a fudge recipe, they could adjust it to a more moderate setting. For subjective, individualized, highly contextual problems like harassment or offense, rather than clearer questions of illegal speech, users should pick the probability of enforcement. Many users may end up preferring one setting for play with friends and another for larger or more public games. Users can tailor moderation to context far more effectively than distant moderators.

WSJ columnist Andy Kessler offers a different complaint, accusing Bleep of having the suppressive effects other critics demand. He casts pervasive, real-time speech filtering as the end state of cancel culture. Conjuring images of Bleep for keyboards and suggesting that the software might have helped former Miami Heat player Meyers Leonard avoid using antisemitic language while gaming, Kessler conflates filtering received speech with quashing output. By allowing players to filter what they hear, Bleep avoids justifying exactly the sort of output filtering Kessler fears.

Kessler treats the desire to avoid stress and alarm while playing video games, the user-experience justification for moderation, as weakness.

Kessler again mistakes Bleep's effect. Bleep allows players to avoid hearing someone call them a "dog-faced pony soldier" (or, in most cases, something much less printable). It does not prevent the insult from being uttered, or from being heard by others with less restrictive filters. Some may prefer to play with harsh language. However, they shouldn't be able to impose this preference on other players any more than others should be able to censor them.

Both criticisms of Bleep attempt to collapse one category of moderation into another, instrumentalizing the provision of a useful tool in the pursuit of utopian political goals. A good antiharassment tool need not stamp out hate everywhere. On the other hand, pseudonymous gamers cannot simply be expected to "think before they speak, using their own biological Bleep code." Because offense and harassment are subjective, local, user-defined filtering is the best method for preventing harassment from ruining players' experience without subjecting them to paternalistic linguistic guardrails.

While it's easy to dismiss these complaints as empty chirping by writers who find condemnation easier than nuanced examination, the rush to condemn risks strangling a valuable, agency-respecting antiharassment tool in the crib. If Intel takes this criticism to heart, and reduces users' choice over filtering options, it will harm both players who hope to escape harassing speech and those stifled by one-size-fits-all filtering.


Source: https://www.cato.org/blog/bleep-offers-customizable-content-moderation

