LRT: One of the problems with Twitter moderation - and I'm not suggesting this is an innocent flaw that accidentally enables abuse, but rather that it's a feature, from their point of view - is that the reporting categories available to us do not match up to the rules.

Now, Twitter's actual policy is that wishes or hopes for death or harm are the same as threats. That policy has been in place for years. But there's no report category for hoping someone dies. You can only report it as a threat.
Which gives the moderator, who doesn't spend long on any individual tweet, mental leeway to go, "Well, there's no threat here," and hit the button for "no violation found."
They have a rule that says persistent misgendering or other dehumanizing language is not tolerated, but again - there is no reporting category for that. We have to report it as hate against a group.
So again, the moderator looks at that tweet, briefly and in isolation, and sees what, without context, might look neutral or matter-of-fact: a series of tweets referring to somebody consistently by the same set of pronouns, or a statement that somebody is a man or a woman.
I've said this before, but having a rule against misgendering or otherwise dehumanizing trans people and not enforcing it is worse than having no rule.

Because the rule's existence creates the impression that we have protections we don't.
So the people who dehumanize, misgender, and wish death upon us get the best of both worlds - they can freely do it over and over again while proclaiming themselves censored martyrs to free speech. They can use the "power" our supposed "protected status" gives us to foment hate.
Their rules, their reporting tools, and their rulings all ultimately feel like they are each created/run by a different group of people who not only don't agree but haven't communicated with each other about what they're doing.
But again, that makes it sound like an innocent, well-intentioned mess, and even if it started out that way at some point (and I'm not saying that it did, I'm saying *if*), at this point it's been going on so long, and has been pointed out to them so many times, that it's deliberate.
It is a deliberate choice to keep running their system this way.

Meanwhile, people who aren't acting in good faith can, will, and DO game the automated aspects of the system to suppress and harm their targets.
They can run coordinated and/or bot-assisted mass reporting campaigns to make sure their complaints get escalated or the system automatically steps in and locks accounts.
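To make that mechanism concrete, here's a minimal sketch of how a purely threshold-based auto-lock gets gamed by a brigade. This is my illustration, not Twitter's actual system; the threshold, the time window, and the function name are all hypothetical.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical illustration: a naive auto-moderation rule that locks an
# account once it receives N reports inside a rolling window, before any
# human has judged whether a single report is valid. Not Twitter's real
# code; it just shows why coordinated mass reporting works against
# threshold-based automation.

AUTO_LOCK_THRESHOLD = 25          # hypothetical report count
WINDOW = timedelta(hours=24)      # hypothetical rolling window

reports = defaultdict(list)       # target account -> report timestamps

def file_report(target: str, when: datetime) -> bool:
    """Record one report against `target`; return True if the account
    gets auto-locked (threshold reached inside the window)."""
    reports[target].append(when)
    recent = [t for t in reports[target] if when - t <= WINDOW]
    reports[target] = recent
    return len(recent) >= AUTO_LOCK_THRESHOLD

# A coordinated campaign: 30 accounts (or bots) each file one report
# within minutes of each other against the same target.
now = datetime(2020, 10, 30, 12, 0)
locked = False
for i in range(30):
    locked = file_report("@target_account", now + timedelta(minutes=i)) or locked

print(locked)  # True: the brigade tripped the auto-lock all by itself
```

The point of the sketch is that nothing in the loop depends on whether the reported tweet broke any rule at all; volume alone does the work.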
Another side of this is that, for peoples who have historically been targeted for death, there are all sorts of references that are ready-made for making EXPLICIT DEATH THREATS but that, to an untrained moderator looking at a tweet in isolation, might just seem like absurdism.
E.g., references to ways people died or had their corpses abused in the Holocaust, in slavery or Jim Crow America. References to lynching, to atomic bombings, to drone strikes.
And then, then we come to the fact that the people making the moderation decisions are making judgment calls, even on the stuff that "will not be tolerated".

A death threat is not supposed to be allowed on here even if it's a joke. That's Twitter's premise, not mine.
But a sizable chunk of Twitter's moderation pool has a hard time looking at, say, a straight white man threatening violence upon a woman, a gay person, a trans person, etc., and seeing it as serious. It's like background radiation. It's always there. Not alarming.
But anger, even without an explicit threat, from those groups directed against more powerful ones... that's alarming to the same people.
It's the Joker Principle. I know we're all sick of pop culture exegesis but I just can't let go of this one: "Nobody panics when things go according to plan."

A guy going "Haha get raped." is part of the plan. It's normal.

His target replying "FUCK OFF" is not. It's radical.
Things that strike the moderator as unusual, as radical, as alarming are more likely to get moderated.

Things that strike the moderator as "That's just how it is on this bitch of an earth." get a pass.
Helicopter rides. A trip to the lampshade factory.

And then the ultra-modern ones like "Banned from Minecraft in real life."

https://t.co/5xdHZmqLmM
And needless to say, all of this "confusion" and subjectivity in what are supposedly objective, zero-tolerance rules that apply to everybody... it gives people who *want* to protect and promote fascism and violence through moderation a lot of cover.
