from the a-shadowban-by-any-other-name dept
So, yeah, I wrote a big long thing debunking the first round of the “Twitter Files,” but there’s no way I’m going to make myself do that for every stupid thread of the “Twitter Files” being tweeted out. Just know that, having read all of the released “Twitter Files” threads so far, they are all just as ridiculous as the first one. They are all written by people who appear to have (1) no idea what they’re looking at, (2) no interest in talking to anyone who does understand it, and (3) no concern about presenting the files in an extremely misleading light in an effort to push a narrative that is not even remotely supported by what they’re sharing.
So far, to anyone who has actually been following the trust & safety / content moderation space over the last five to ten years, what the files have shown is a supremely competent trust & safety team that was put in an impossible position, and that bent over backwards to be thoughtful and careful about its decision making, rather than ad hoc and emotionally driven. Over and over again, the files show not (as a bunch of people insisted) a bunch of “woke” ideologues suppressing opposing ideologies, but (as we’ve highlighted) a careful, thoughtful team trying to figure out the best way to stop assholes from being assholes, and doing so by following the rules they had set for themselves, while recognizing (as is ALWAYS the case in trust & safety) that assholes are always evolving, and policies sometimes have to evolve with the latest variant of asshole.
I did want to call out, though, one of the ridiculously laughable “big reveals,” this time from Bari Weiss: the well known fact that Twitter would “deboost” some users, keeping them out of trending and algorithmic recommendations and ranking their tweets lower in replies. That wasn’t new. The company announced it. It was covered in detail in the media.
Much of the controversy last week was over the term “shadowban.” A lot of people insist that it has always meant any effort to limit the visibility of a user. But… that’s wrong. Historically, the term was really only used to mean a very particular type of limited visibility: one where those hit with it (trolls, spammers) could post, and think they’re posting normally, but only they could see their own posts.
The problem is that, as with so many things, a bunch of Trumpist grifters took a word that meant something real and stretched it to cover any kind of de-amplification. That happened in 2018, when Trump flipped out about a Twitter bug that accidentally downranked a bunch of accounts in search results, including some prominent Republicans. Back in 2018, I wrote about how that was the wrong use of the word. Soon after, Twitter came out with its own explainer, which clearly laid out the original meaning of shadowbanning and said “that’s not what we do,” but also explained (again, pretty clearly) that tweets do get ranked and can be downranked in the algorithm, search, and replies. Those who follow such accounts still see their tweets (unlike in a shadowban).
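The distinction matters, so here’s a hypothetical sketch of the difference (this is purely illustrative pseudologic, not Twitter’s actual code or API; the function names, scoring, and penalty value are all made up for the example). A classic shadowban hides a user’s posts from everyone except the user themselves; visibility filtering leaves posts visible but ranks them lower:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    score: float = 1.0  # baseline ranking score (illustrative)

def classic_shadowban_feed(posts, viewer, shadowbanned):
    """Classic shadowban: a banned user's posts are hidden from
    everyone EXCEPT that user, who sees their own posts normally
    and has no idea anything is wrong."""
    return [p for p in posts
            if p.author not in shadowbanned or p.author == viewer]

def visibility_filtered_feed(posts, deboosted, penalty=0.5):
    """Visibility filtering / deboosting: every post stays visible
    to everyone, but filtered accounts get a ranking penalty, so
    their posts sink in search, replies, and algorithmic feeds."""
    def rank(p):
        return p.score * (penalty if p.author in deboosted else 1.0)
    return sorted(posts, key=rank, reverse=True)
```

In the shadowban case the troll’s posts simply vanish for other viewers; in the visibility-filter case everyone (including followers) can still find them, just ranked lower. That second behavior is what Twitter publicly described in 2018.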
So much of the “controversy” over this was focused on the fact that a bunch of people only learned about the term “shadowban” from the misrepresented story in 2018, and none of them bothered to educate themselves in the half-decade since then. Now, language changes over time, so you can argue that the new definition of shadowbanning is how it’s commonly used today (though, I’m not convinced that’s true). However, even then you can’t say that Twitter somehow “misled” people, because (again) it very clearly stated which definition it was using and at the same time explained that users could get downranked in the algorithm and search.
But Bari Weiss misleadingly presented these features, which Twitter internally referred to as “visibility filters,” as proof that Twitter had lied when it said it did not shadowban. But… that’s wrong. And it’s obviously wrong to anyone who bothered to read what the company had already stated publicly, quite clearly.
Elon himself seemed to make a big deal out of this, and even falsely claimed that Weiss showed this tool was only used against conservatives (it wasn’t, and she showed nothing at all to support that claim). But the really bizarre part in all of this is that Elon himself has claimed he wants to do the same thing as his grand solution to content moderation, saying the company’s “new” policy “is freedom of speech, but not freedom of reach” and that “negative” tweets “will be max deboosted.”
Except… as noted, that wasn’t a new policy at all. It was the old policy, which Twitter had been very public about. So it seems particularly disingenuous to claim that the old Twitter was doing something nefarious when it’s literally (1) the same thing they talked about publicly and (2) the same thing Elon says is his own brilliant solution.
But the story gets even dumber. You see, one of the Twitter accounts that Elon absolutely hates is the “@ElonJet” account, which tracks where Elon’s private jet is flying based on public data. Elon has long hated this account, and once offered the guy behind it $5k to take it down. Last month, he claimed that he would leave the account up to prove his “commitment to free speech.”
However, Jack Sweeney, the guy behind the account, has now revealed, via a leak from a Twitter employee, that just a few days before Bari Weiss’s “big reveal” about the “evil old Twitter shadowbanning,” Twitter’s new trust & safety boss, Ella Irwin, demanded that the ElonJet account be, well, max deboosted (in Elon’s terminology). In internal Twitter terminology, the instruction was to “apply heavy VF to @elonjet immediately,” with “VF” standing for “visibility filter.”
Here’s the thread from Sweeney:
So, uh, yeah. Based on all that, as reported by the Daily Beast, it sure looks like Musk absolutely knew that this tool was already available to Twitter, and used it against an account he didn’t like.
And while it’s only a single-line screenshot, and perhaps there is more context, I’ll just note how different it appears from the screenshots revealed in the official “Twitter Files.” In those, there don’t seem to be random “suppress this account!” commands like the one we see from Irwin above, but rather open discussions about whether something violates the rules, with pushback from other employees to make sure the team was being as fair and reasonable as possible.
We keep pointing out that Elon seems to be on a path of reinventing every innovation Twitter had already built, but doing it much, much worse. This one, though, seems particularly nefarious. Just as he’s trying to whip everyone up into a frenzy by (misleadingly) claiming that this evil tool was secret and used to silence people based not on rules violations but on personal whims… he was apparently using the very same tool based on his own personal whims and feelings.
Filed Under: ella irwin, elon musk, elonjet, shadowban, shadowbanning, visibility filters