How social media moderation works

This is a transcript of a podcast episode.

There has been a lot of debate about whether moderation on social media constitutes censorship and a violation of the right to free speech. That argument is specious at best. Offending content on commercial social media is removed only when it undermines profitability.

Most humans moderate their speech. We all sometimes feel the urge to speak impulsively in reaction to something that incites strong emotions. People who resist that urge have what is called “self-control”. Some people don’t have that filter (I’m looking at you, Elon) and blurt out offensive, nonfactual, or dishonest responses. Sometimes that isn’t a matter of being atypical; they are just selfish people without manners (still looking at you, Elon). Moderating your own speech is simply civilized behavior.

Profit motive

When it comes to social media, however, especially for-profit social media, the primary factor is profit. That has been the guiding principle of all social media moderation.

I know quite a bit about how companies like Facebook and X have moderated content over the past 10 years. For about five years, I rented a home to employees of the staffing company contracted by Facebook to moderate pornography and violence on the site, and we had long talks about how it works. Most of the moderators for Facebook were contractors. Facebook had internal moderators working on some areas, but Zuckerberg’s announcement may mean that they are also out of work.

But let’s talk about what the moderators used to do.

Moderation is not censorship

In the first place, the moderators don’t decide anything. Facebook trained them on internal standards that changed constantly depending on how traffic was flowing and what was considered illegal content (like child abuse). Based on those guidelines, the moderators would flag certain content for the corporation, and the corporation would then decide whether it was to be removed or stay in place. The restrictions covered violence and criminal behavior, including self-harm and suicide content, hate speech targeting protected characteristics, spam, misinformation and disinformation, deepfakes, impersonation, and restrictions on cryptocurrency/financial product advertising.

But even if the content did violate the corporate standards, that didn’t mean it would be removed. Commercial social media platforms (that is, those that make money with advertising) need large amounts of engagement in order to attract advertisers. They need more raw engagement than any other form of media to justify the cost. In the case of Facebook, advertisers accept an engagement rate of 0.09 percent. In contrast, print media has to draw a response from 30-40 percent of its circulation to justify its advertising prices, and direct mail needs 0.5 percent. Facebook advertising is much cheaper than either print or direct mail, but it offers far more volume.
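To make the volume point concrete, here is a rough back-of-the-envelope sketch in Python. The response rates are the ones cited above; the audience sizes (and therefore the totals) are hypothetical assumptions chosen only to illustrate why a tiny rate on an enormous audience still produces plenty of engagement.

```python
# Back-of-the-envelope illustration: response rates are from the article,
# audience sizes are hypothetical assumptions for illustration only.
rates = {"Facebook": 0.0009, "direct mail": 0.005, "print": 0.35}
audiences = {"Facebook": 50_000_000, "direct mail": 100_000, "print": 50_000}

for channel, rate in rates.items():
    engagements = rate * audiences[channel]
    print(f"{channel:12s} rate={rate:7.2%}  "
          f"audience={audiences[channel]:>12,}  engagements={engagements:10,.0f}")
```

Even at 0.09 percent, the sheer scale of the audience dwarfs what print circulation or a direct-mail run can deliver.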

Hate drives profits

At that rate, however, they need to be driving a lot of engagement and the best way to do that is to make people reactive, not responsive. Impulse buying is what advertisers want from Facebook, so as long as Meta can deliver that, advertisers will buy Facebook advertising.

Facebook does not want its users to think about what they are reading. They want them emotionally charged and clicking, liking, commenting, and sharing without thinking about what they do. Content that violates the standards will probably not be removed if it brings engagement, even when a moderator flags it.
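As a thought experiment (this is not Meta’s actual code or policy engine; every name and threshold below is a hypothetical assumption), the decision rule described above reduces to something like this:

```python
from dataclasses import dataclass

# A hypothetical sketch of the decision rule described above.
# None of these names or thresholds come from Meta; they only
# illustrate "flagged content stays if it drives enough engagement."

@dataclass
class Post:
    flagged_by_moderator: bool   # moderator applied the internal guidelines
    is_illegal: bool             # e.g. child abuse material -- always removed
    engagement_score: float      # clicks, likes, comments, shares

ENGAGEMENT_THRESHOLD = 1000.0    # hypothetical cutoff

def corporate_decision(post: Post) -> str:
    if post.is_illegal:
        return "remove"          # legal risk trumps profit
    if post.flagged_by_moderator and post.engagement_score < ENGAGEMENT_THRESHOLD:
        return "remove"          # violates standards and earns little
    return "keep"                # profitable content stays, flagged or not

print(corporate_decision(Post(flagged_by_moderator=True, is_illegal=False,
                              engagement_score=50_000.0)))   # -> "keep"
```

The point of the sketch is the ordering: legal exposure comes first, profitability second, and the moderator’s flag only matters when the content isn’t earning.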


That is why, when you report what seems to you a grievous violation, it doesn’t go away, and you get that very polite response saying they have reviewed the content “and determined that it does not violate our standards.”

To recap, moderation is not censorship. It is the first filter in determining the profitability of content, not its social value. Accusations of censorship drive engagement because “censorship” is a loaded, SEO-heavy term. That is why you see it flying around on every form of social media.

“Censorship” drives engagement

The term “censorship” used in a post will increase engagement by 20-30% over terms like “moderation”. It produces higher sharing rates due to emotional response, with increased debate in comment sections. The algorithmic impact is substantial: the increased interaction triggers higher visibility, leads to longer retention, and makes the post more likely to appear in “trending” sections. It creates urgency (“you need to see this before it’s removed”) and fear of missing out. It also appeals to people’s desire for restricted information. All of that rings up the cash register in the Meta accounting offices.

The downside of disseminating hate speech and disinformation is that government fines could cut into profitability, so there was some consideration given to that. Deregulation of objectionable content means platforms in the US are free to drop into the gutter with the most vile, but profitable, content they can spread. The EU fines likely to be imposed are a drop in the bucket financially, an acceptable cost of doing business.

A coming storm of hate

If you think the fraud, racism, hate, and disinformation on social media have been bad for the past few years, you haven’t seen anything yet.

The last defense for civil public debate is… us. If you continue to support the revenue stream of Meta, X and even Alphabet, Reddit and the other commercial platforms, you contribute to the degradation of social policy. That is a harsh statement, I know, but it is the unvarnished truth.

There are options. Decentralized media like Mastodon and Bluesky, along with secure messaging platforms like Signal and Session, are one answer. In those arenas, you are the moderator. You, not an algorithm, determine what is shared and who you interact with.

It is not censorship. It is responsible citizenship.

Lou Covey

Lou Covey is the Chief Editor for Cyber Protection Magazine. In 50 years as a journalist he has covered American politics, education, religious history, women’s fashion, music, marketing technology, renewable energy, semiconductors, and avionics. He is currently focused on cybersecurity and artificial intelligence. He published a book on renewable energy policy in 2020 and is writing a second one on technology aptitude. He hosts the Crucial Tech podcast.
