Online Speech Is Now An Existential Question For Tech
Kanebridge News

Content moderation rules used to be a question of taste. Now, they can determine a service’s prospects for survival.

By Christopher Mims
Wed, Feb 24, 2021 | 6 min read

Every public communication platform you can name—from Facebook, Twitter and YouTube to Parler, Pinterest and Discord—is wrestling with the same two questions:

How do we make sure we’re not facilitating misinformation, violence, fraud or hate speech?

At the same time, how do we ensure we’re not censoring users?

The more they moderate content, the more criticism they experience from those who think they’re over-moderating. At the same time, any statement on a fresh round of moderation provokes some to point out objectionable content that remains. Like any question of editorial or legal judgment, the results are guaranteed to displease someone, somewhere—including Congress, which this week called the chief executives of Facebook, Google and Twitter to a hearing on March 25 to discuss misinformation on their platforms.

For many services, this has gone beyond a matter of user experience, or growth rates, or even ad revenue. It’s become an existential crisis. While dialling up moderation won’t solve all of a platform’s problems, a look at the current winners and losers suggests that not moderating enough is a recipe for extinction.

Facebook is currently wrestling with whether it will continue its ban of former president Donald Trump. Pew Research says 78% of Republicans opposed the ban, which has contributed to the view of many in Congress that Facebook’s censorship of conservative speech justifies breaking up the company—something a decade of privacy scandals couldn’t do.

Parler, a haven for right-wing users who feel alienated by mainstream social media, was taken down by its cloud service provider, Amazon Web Services, after some of its users live-streamed the riot at the U.S. Capitol on Jan. 6. Amazon cited Parler’s apparent inability to police content that incites violence. While Parler is back online with a new service provider, it’s unclear if it has the infrastructure to serve a large audience.

During the weeks Parler was offline, the company implemented algorithmic filtering for a few content types, including threats and incitement, says a company spokesman. The company also has an automatic filter for “trolling” that detects such content, but it’s up to users whether to turn it on. In addition, those who choose to troll on Parler are not penalized in Parler’s algorithms for doing so, “in the spirit of the First Amendment,” says the company’s guidelines for enforcement of its content moderation policies. Parler recently fired its CEO, who said he experienced resistance to his vision for the service, including how it should be moderated.

Now, just about every site that hosts user-generated content is carefully weighing the costs and benefits of updating their content moderation systems, using a mix of human professionals, algorithms and users. Some are even building rules into their services to pre-empt the need for increasingly costly moderation.

The saga of gaming-focused messaging app Discord is instructive: In 2017, the service, which is aimed at children and young adults, was one of those used to plan the white-nationalist rally in Charlottesville. A year later, the site was still taking what appeared to be a deliberately laissez-faire approach to content moderation.

By this January, however, spurred by reports of hate speech and lurking child predators, Discord had done a complete 180. It now has a team of machine-learning engineers building systems to scan the service for unacceptable uses, and has assigned 15% of its overall staff to trust and safety issues.

This newfound attention to content moderation helped keep Discord clear of the controversy surrounding the Capitol riot, and led it to briefly ban a chat group associated with WallStreetBets during the GameStop stock runup. Discord’s valuation doubled to $7 billion over roughly the same period, a sign that investors have confidence in its moderation strategy.

The prevalence problem

The challenge successful platforms face is moderating content “at scale,” across millions or billions of pieces of shared content.

Before any action can be taken, services must decide what should be taken down, an often slow and deliberative process.

Imagine, for example, that a grass-roots movement gains momentum in a country, and begins espousing extreme and potentially dangerous ideas on social media. While some language might be caught by algorithms immediately, a decision about whether discussion of a particular movement, like QAnon, should be banned completely could take months on a service such as YouTube, says a Google spokesman.

One reason it can take so long is the global nature of these platforms. Google’s policy team might consult with experts in order to consider regional sensitivities before making a decision. After a policy decision is made, the platform has to train AI and write rules for human moderators to enforce it—then make sure both are carrying out the policies as intended, he adds.

While AI systems can be trained to catch individual pieces of problematic content, they’re often blind to the broader meaning of a body of posts, says Tracy Chou, founder of content-moderation startup Block Party and former tech lead at Pinterest.

Take the case of the “Stop the Steal” protest, which led to the deadly attack on the U.S. Capitol. Individual messages used to plan the attack, like “Let’s meet at location X,” would probably look innocent to a machine-learning system, says Ms Chou, but “the context is what’s key.” Facebook banned all content mentioning “Stop the Steal” after the riot.

Even after Facebook has identified a particular type of content as harmful, why does it seem constitutionally unable to keep it off its platform?

It’s the “prevalence problem.” On a truly gigantic service, even if only a tiny fraction of content is problematic, it can still reach millions of people. Facebook has started publishing a quarterly report on its community standards enforcement. During the last quarter of 2020, Facebook says users saw seven or eight pieces of hate speech out of every 10,000 views of content. That’s down from 10 or 11 pieces the previous quarter. The company said it will begin allowing third-party audits of these claims this year.
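The arithmetic behind that prevalence metric is simple but sobering. A minimal sketch of it (the function name and the one-billion-views figure are illustrative assumptions, not Facebook's reported numbers):

```python
def prevalence_per_10k(violating_views: int, total_views: int) -> float:
    """Views of violating content per 10,000 total content views."""
    if total_views <= 0:
        raise ValueError("total_views must be positive")
    return violating_views / total_views * 10_000

# Even a rate at the low end of the reported 7-8 range adds up at scale.
# Assume, hypothetically, one billion content views per day:
daily_views = 1_000_000_000
rate = 7.5  # midpoint of the reported range, per 10,000 views
daily_exposures = rate / 10_000 * daily_views  # 750,000 exposures a day
```

Measured as a rate, the number looks tiny; measured as daily exposures, it is why "a tiny fraction" of bad content can still reach millions of people.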

While Facebook has been leaning heavily on AI to moderate content, especially during the pandemic, it currently has about 15,000 human moderators. And since every new moderator comes with a fixed additional cost, the company has been seeking more efficient ways for its AI and existing humans to work together.

In the past, human moderators reviewed content flagged by machine learning algorithms in more or less chronological order. Content is now sorted by a number of factors, including how quickly it’s spreading on the site, says a Facebook spokesman. If the goal is to reduce the number of times people see harmful content, the most viral stuff should be top priority.
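The shift from chronological review to virality-ranked review amounts to replacing a first-in-first-out queue with a priority queue. A hypothetical sketch (the post IDs, the views-per-hour scores, and the function names are invented for illustration; Facebook's actual ranking uses several factors):

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class FlaggedItem:
    # Stored negated: heapq is a min-heap, so the fastest-spreading
    # content pops first.
    neg_velocity: float
    post_id: str = field(compare=False)

def build_review_queue(flags):
    """flags: iterable of (post_id, views_per_hour) pairs."""
    heap = [FlaggedItem(-velocity, post_id) for post_id, velocity in flags]
    heapq.heapify(heap)
    return heap

def next_for_review(heap):
    """Return the flagged post currently spreading fastest."""
    return heapq.heappop(heap).post_id
```

Under this ordering, a post gaining thousands of views an hour reaches a human moderator before an older but stagnant one, which directly targets the goal of minimizing how many people see harmful content.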

A content moderator in every pot

Companies that aren’t Facebook or Google often lack the resources to field their own teams of moderators and machine-learning engineers. They have to work within tighter budgets, which often means outsourcing the technical parts of content moderation to companies such as San Francisco-based startup Spectrum Labs.

Through its cloud-based service, Spectrum Labs shares insights it gathers from any one of its clients with all of them—which include Pinterest and Riot Games, maker of League of Legends—in order to filter everything from bad words and human trafficking to hate speech and harassment, says CEO Justin Davis.

Mr Davis says Spectrum Labs doesn’t say what clients should and shouldn’t ban. Beyond illegal content, every company decides for itself what it deems acceptable, he adds.

Pinterest, for example, has a mission rooted in “inspiration,” which helps it take a clear stance in prohibiting harmful or objectionable content that doesn’t fit that mission, says a company spokeswoman.

Services are also attempting to reduce the content-moderation load by reducing the incentives or opportunity for bad behaviour. Pinterest, for example, has from its earliest days minimized the size and significance of comments, says Ms Chou, the former Pinterest engineer, in part by putting them in a smaller typeface and making them harder to find. This made comments less appealing to trolls and spammers, she adds.

The dating app Bumble only allows women to reach out to men. Flipping the script of a typical dating app has arguably made Bumble more welcoming for women, says Mr Davis, of Spectrum Labs. Bumble has other features designed to pre-emptively reduce or eliminate harassment, says Chief Product Officer Miles Norris, including a “super block” feature that builds a comprehensive digital dossier on banned users. This means that if, for example, banned users attempt to create a new account with a fresh email address, they can be detected and blocked based on other identifying features.
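The re-registration check described above can be sketched as a fingerprint-overlap test. This is a hypothetical illustration of the general idea, not Bumble's implementation; the field names, threshold, and data are all invented:

```python
# Records kept for banned users: a new email alone shouldn't be
# enough to evade a ban if other identifiers still match.
BANNED_FINGERPRINTS = [
    {"device": "dev-123", "phone": "p-555", "photo_hash": "h-9"},
]

def overlaps(signup: dict, fingerprint: dict) -> int:
    """Count how many identifying fields match a banned-user record."""
    return sum(1 for k in signup if fingerprint.get(k) == signup[k])

def is_likely_banned(new_signup: dict, threshold: int = 2) -> bool:
    """Flag a signup that shares enough identifiers with a banned user."""
    return any(overlaps(new_signup, f) >= threshold
               for f in BANNED_FINGERPRINTS)
```

The design choice is the threshold: requiring two or more matching identifiers tolerates one changed field (such as a fresh email) while avoiding blocking strangers who happen to share a single attribute.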

The ‘supreme court of content’

Facebook CEO Mark Zuckerberg recently described Facebook as something between a newspaper and a telecommunications company. For it to continue being a global town square, it doesn’t have the luxury of narrowly defining the kinds of content and interactions it will allow. For its toughest content moderation decisions, it has created a higher power—a financially independent “oversight board” that includes a retired U.S. federal judge, a former prime minister of Denmark and a Nobel Peace Prize laureate.

In its first decision, the board overturned four of the five bans Facebook brought before it.

Facebook has said that it intends the decisions made by its “supreme court of content” to become part of how it makes everyday decisions about what to allow on the site. That is, even though the board will make only a handful of decisions a year, these rulings will also apply when the same content is shared in a similar way. Even with that mechanism in place, it’s hard to imagine the board can get to more than a tiny fraction of the types of situations content moderators and their AI assistants must decide every day.

But the oversight board might accomplish the goal of shifting the blame for Facebook’s most momentous moderation decisions. For example, if the board rules to reinstate the account of former President Trump, Facebook could deflect criticism of the decision by noting it was made independent of its own company politics.

Meanwhile, Parler is back up, but it’s still banned from the Apple and Google app stores. Without those essential routes to users—and without web services as reliable as its former provider, Amazon—it seems unlikely that Parler can grow anywhere close to the rate it otherwise might have. It’s not clear yet whether Parler’s new content filtering algorithms will satisfy Google and Apple. How the company balances its enhanced moderation with its stated mission of being a “viewpoint neutral” service will determine whether it grows into a viable alternative to Twitter and Facebook or remains a shadow of what it might have been.



The Uglification of Everything

Artistic culture has taken a repulsive turn. It speaks of a society that hates itself, and hates life.

By Peggy Noonan
Fri, Apr 26, 2024 | 5 min read

I wish to protest the current ugliness. I see it as a continuing trend, “the uglification of everything.” It is coming out of our culture with picked-up speed, and from many media silos, and I don’t like it.

You remember the 1999 movie “The Talented Mr. Ripley,” from the Patricia Highsmith novel. It was fabulous—mysteries, murders, a sociopath scheming his way among high-class expats on the Italian Riviera. The laid-back glamour of Jude Law, the Grace Kelly-ness of Gwyneth Paltrow, who looks like a Vogue magazine cover decided to take a stroll through the streets of 1950s Venice, the truly brilliant acting of Matt Damon, who is so well-liked by audiences I’m not sure we notice anymore what a great actor he is. The director, Anthony Minghella, deliberately showed you pretty shiny things while taking you on a journey to a heart of darkness.

There’s a new version, a streaming series from Netflix, called “Ripley.” I turned to it eagerly and watched with puzzlement. It is unrelievedly ugly. Grimy, gloomy, grim. Tom Ripley is now charmless, a pale and watchful slug slithering through ancient rooms. He isn’t bright, eager, endearing, only predatory. No one would want to know him! Which makes the story make no sense. Again, Ripley is a sociopath, but few could tell because he seemed so sweet and easy. In the original movie, Philip Seymour Hoffman has an unforgettable turn as a jazz-loving, prep-schooled, in-crowd snob. In this version that character is mirthless, genderless, hidden. No one would want to know him either. Marge, the Paltrow role in the movie, is ponderous and plain, like a lost 1970s hippie, which undercuts a small part of the tragedy: Why is the lovely woman so in love with a careless idler who loves no one?

The ugliness seemed a deliberate artistic decision, as did the air of constant menace, as if we all know life is never nice.

I go to the No. 1 program on Netflix this week, “Baby Reindeer.” People speak highly of it. It’s about a stalker and is based on a true story, but she’s stalking a comic so this might be fun. Oh dear, no. It is again unrelievedly bleak. Life is low, plain and homely. No one is ever nice or kind; all human conversation is opaque and halting; work colleagues are cruel and loud. Everyone is emotionally incapable and dumb. No one laughs except for the morbidly obese stalker, who cackles madly. The only attractive person is the transgender girlfriend, who has a pretty smile and smiles a lot, but cries a lot too and is vengeful.

Good drama always makes you think. I thought: Do I want to continue living?

I go to the Daily Mail website, once my guilty pleasure. High jinks of the rich and famous, randy royals, fast cars and movie stars, models and rock stars caught in the drug bust. It was great! But it seems to have taken a turn and is more about crime, grime, human sadness and degradation—child abuse, mothers drowning their babies, “Man murders family, self.” It is less a portal into life’s mindless, undeserved beauty than a testimony to its horrors.

I go to the new “Cabaret.” Who doesn’t love “Cabaret”? It is dark, witty, painful, glamorous. The music and lyrics have stood the test of time. The story’s backdrop: The soft decadence of Weimar is being replaced by the hard decadence of Nazism.

It is Kander and Ebb’s masterpiece, revived again and again. And this revival is hideous. It is ugly, bizarre, inartistic, fundamentally stupid. Also obscene but in a purposeless way, without meaning.

I had the distinct feeling the producers take their audience to be distracted dopamine addicts with fractured attention spans and no ability to follow a story. They also seemed to have no faith in the story itself, so they went with endless pyrotechnics. This is “Cabaret” for the empty-headed. Everyone screams. The songs are slowed, because you might need a moment to take it in. Almost everyone on stage is weirdly hunched, like a gargoyle, everyone overacts, and all of it is without art.

On the way in, staffers put stickers on the cameras of your phone, “to protect our intellectual property,” as one said.

It isn’t an easy job to make the widely admired Eddie Redmayne unappealing, but by God they did it. As he’s a producer I guess he did it, too. He takes the stage as the Emcee in a purple leather skirt with a small green cone on his head and appears further on as a clown with a machine gun and a weird goth devil. It is all so childish, so plonkingly empty.

Here is something sad about modern artists: They are held back by a lack of limits.

Bob Fosse, the director of the classic 1972 movie version, got to push against society’s limits and Broadway’s and Hollywood’s prohibitions. He pushed hard against what was pushing him, which caused friction; in the heat of that came art. Directors and writers now have nothing to push against because there are no rules or cultural prohibitions, so there’s no friction, everything is left cold, and the art turns in on itself and becomes merely weird.

Fosse famously loved women. No one loves women in this show. When we meet Sally Bowles, in the kind of dress a little girl might put on a doll, with heavy leather boots and harsh, garish makeup, the character doesn’t flirt, doesn’t seduce or charm. She barks and screams, angrily.

Really it is harrowing. At one point Mr. Redmayne dances with a toilet plunger, and a loaf of Italian bread is inserted and removed from his anal cavity. I mentioned this to my friend, who asked if I saw the dancer in the corner masturbating with a copy of what appeared to be “Mein Kampf.”

That’s what I call intellectual property!

In previous iterations the Kit Kat Club was a hypocrisy-free zone, a place of no boundaries, until the bad guys came and it wasn’t. I’m sure the director and producers met in the planning stage and used words like “breakthrough” and “a ‘Cabaret’ for today,” and “we don’t hide the coming cruelty.” But they do hide it by making everything, beginning to end, lifeless and grotesque. No innocence is traduced because no innocence exists.

How could a show be so frantic and outlandish and still be so tedious? It’s almost an achievement.

And for all that there is something smug about it, as if they’re looking down from some great, unearned height.

I left thinking, as I often do now on seeing something made ugly: This is what purgatory is going to be like. And then, no, this is what hell is going to be like—the cackling stalker, the pale sociopath, Eddie Redmayne dancing with a plunger.

Why does it all bother me?

Because even though it isn’t new, uglification is rising and spreading as an artistic attitude, and it can’t be good for us. Because it speaks of self-hatred, and a society that hates itself, and hates life, won’t last. Because it gives those who are young nothing to love and feel soft about. Because we need beauty to keep our morale up.

Because life isn’t merde, in spite of what our entertainment geniuses say.

 
