Musk's 'Twitter Files' offer insight into the raw, complicated and thankless task of moderation

Twitter's new owner Elon Musk is feverishly promoting his "Twitter Files": selected internal company communications, painstakingly tweeted by sympathetic amanuenses. But Musk's obvious belief that he has released a partisan kraken is misguided. Far from revealing conspiracy or systemic abuse, the files offer a valuable glimpse behind the curtain of moderation at scale, hinting at the Sisyphean labor undertaken by every social media platform.

For a decade, companies like Twitter, YouTube, and Facebook have performed an elaborate dance to keep the details of their moderation processes equally out of the reach of bad actors, regulators, and the press.

Revealing too much would expose their processes to abuse by spammers and scammers (who in effect profit from every leaked or published detail), while revealing too little leads to damaging reports and rumors as they lose control of the narrative. Meanwhile, they must be prepared to justify and document their methods or risk censure and fines from government bodies.

The upshot is that while everyone knows a little about how these companies inspect, filter, and curate the content posted on their platforms, it's just enough to be sure that what we see is only the tip of the iceberg.

Sometimes there are exposés of the methods we suspected: hourly contractors clicking through violent and sexual imagery, a grim but seemingly necessary industry. Sometimes companies overplay their hands, such as repeated claims about how AI is revolutionizing moderation, followed by reports that AI systems built for the purpose are inscrutable and unreliable.

What almost never happens (companies generally don't do this unless forced to) is for the actual tools and processes of content moderation at scale to be exposed unfiltered. And that's what Musk did, perhaps at his own peril, but surely to the great benefit of anyone who has ever wondered what moderators actually do, say, and click when making decisions that can affect millions of people.

Pay no attention to the honest and complex conversation behind the curtain

The chains of emails, Slack conversations, and screenshots (or rather, screenshots of screenshots) posted over the past week provide insight into this important and poorly understood process. What we see is a bit of the raw material, and it is not the partisan illuminati some expected – although it is clear from the highly selective presentation that this is what we are meant to perceive.

Far from it: those involved are by turns cautious and confident, practical and philosophical, outspoken and accommodating, showing that the choice to limit or ban is not made arbitrarily but according to an evolving consensus of opposing viewpoints.

(Update: Moments after I posted this, a new thread appeared that is more of the same - serious discussions of complex issues in coordination with experts, law enforcement, and others.)

In the lead-up to the choice to temporarily restrict the Hunter Biden laptop story (probably the most controversial moderation decision of recent years, after Trump's ban), there is no partisanship or conspiracy of the kind insinuated by the documents' explosive packaging.

Instead, we find serious, thoughtful people trying to reconcile conflicting and inadequate definitions and policies: What counts as "hacked" materials? How confident are we in this or that assessment? What is a proportionate response? How should we communicate it, to whom, and when? What are the consequences if we limit the story, and if we don't? What precedents are we setting or breaking?

The answers to these questions are far from obvious, and are the sort of thing usually hashed out over months of research and discussion, or even in court (legal precedents affect legal language and repercussions). And these calls had to be made quickly, before the situation spiraled out of control. Dissent from inside and outside the company (from a US Representative, no less - ironically, doxxed in the thread along with Jack Dorsey in violation of that same policy) was considered and honestly integrated.

"This is an emerging situation where the facts remain unclear," said former Trust and Safety chief Yoel Roth. "We have chosen to include a disclaimer and prevent amplification of this content."

Some question the decision. Some question the facts as presented. Others say it is not supported by their reading of the policy. Others say decisions are being made on an ad hoc basis...
