Shortly after acquiring Twitter, Elon Musk posted, "The bird is freed." But what did that mean in practice? The platform saw nearly a fivefold increase in the use of the N-word within 12 hours after the shift of ownership. The most engaged tweets were overtly antisemitic. The site was flooded with anonymous trolls spewing racist slurs and Nazi memes.
But the surge in overt racism and bigotry wasn't the only sign of how drastically, and how rapidly, Twitter changed under Musk's ownership. The billionaire went on to restructure the company — firing top executives, laying off 50 percent of its staff, revamping the Twitter subscription service and reinstating numerous accounts that had been banned under the previous regime for violating the site's posted guidelines.
Many such accounts with a history of spreading conspiracy theories and hate speech subsequently purchased "blue checks" for $8 a month through the on-again, off-again Twitter Blue subscription and continue to spread misinformation and extreme content on the app, according to Media Matters.
Anti-LGBTQ accounts like Libs of TikTok and Gays Against Groomers had been suspended from Twitter several times for hateful conduct and spreading anti-LGBTQ rhetoric targeting Pride events and individuals. Both accounts have now been reinstated and carry the blue checkmark.
White supremacists Jason Kessler and Richard Spencer, who had their verifications revoked some time ago, have gotten them back under Musk. Kessler was a principal organizer of the 2017 Unite the Right rally in Charlottesville, and Spencer for a time was the most prominent neo-Nazi and overt white supremacist in America. (At least until the rise of Nick Fuentes.)
With previously suspended accounts returning to Twitter and continuing to spread conspiracy theories online, more ordinary users will inevitably encounter content that previously existed only on fringe platforms like 4chan and 8chan, said Gianluca Stringhini, an assistant professor at Boston University who studies cybersecurity and online safety.
Conspiracy theories and other forms of false information "make their way into Twitter and Facebook and mainstream platforms and that's when regular people see them and they suddenly become viral," Stringhini said. Hate speech and intolerance toward marginalized communities follow, he added, which can have multiple real-world effects, up to and including violence.
Musk is planning to launch a revised verification feature soon that will have different-colored check marks for businesses, government agencies and individuals, the Washington Post has reported.
Musk's earlier version of Twitter Blue, which offered users a blue check (but no significant form of verification) for $8/month, failed after accounts impersonating large corporations, political figures and other celebrities (including Musk himself) ran rampant on the site. Infamously, an account impersonating the pharmaceutical giant Eli Lilly caused the real company's stock to drop by more than 4%.
Twitter recently disabled new sign-ups for that service, and Musk tweeted on Nov. 25 that the company was "tentatively launching Verified on Friday next week." That presumably meant Dec. 2, which has now come and gone with no launch announcement.
The platform's "legacy" blue checks from the pre-Musk era were predominantly used by large companies, journalists, politicians, celebrities and other public figures, who had to apply to Twitter and provide extensive identifying information. (Most staff members at Salon, for example, have blue checks that predate Musk's purchase.)
Essentially, the blue check has traditionally served as a strong indication that an account legitimately belonged to a named individual or organization with a reputation to uphold, Stringhini said. But what concerns him now is the apparent absence of content moderation under Musk's greatly reduced staff.
"In my work, I try to automate content moderation and the texture of speech, but it is a very, very challenging problem, a nuanced problem," he said. "Even if you try and automate it as much as possible, you will always need to have a human making the final decision based on context."
Twitter's algorithm prioritizes tweets that attract the most engagement, whether or not the content is overtly inflammatory or hateful, Stringhini said. But the platform can always make the decision to demote the most noxious tweets and prevent them from going viral. Without a moderation team of actual humans tracking users, he noted, that is nearly impossible to do.
White supremacists are using the appearance of respectability offered by forums like Twitter to infiltrate public conversations, said Libby Hemphill, a professor at the University of Michigan's School of Information and the Institute for Social Research.
"They try to look presentable: If you think about the Unite the Right rally [in Charlottesville], you're looking at clean-cut white boys," Hemphill said. They use "similar strategies linguistically online" to appear "polite," avoiding overt racial slurs and profanity.
Previously banned white supremacists who are returning to Twitter have existing audiences, she said, which makes them "more dangerous than a new person trying to come up with an audience," since amplification on social media is largely a function of audience and reach.
"Right now, the cost of being hateful doesn't outweigh the benefits," Hemphill said. "I don't just mean the financial costs — there aren't enough social consequences for being hateful."
Elon Musk has himself contributed to the problem, Hemphill added, by "spreading anti-trans right-wing nonsense." She speculated that "losing $44 billion" — Musk's purchase price in taking Twitter private — "might be enough to get him to change his behavior. But he can afford it, so maybe not."
Even before Musk's takeover, Twitter served as a tool for spreading propaganda, said Jennifer Grygiel, a professor at Syracuse University, who argued that in a sense Musk had offered society a gift. "He showed us the vulnerability of the public's access to information and how flawed it was," Grygiel said. "Fundamentally, Twitter's model is flawed when it comes to public discourse. It's never been about free speech. It's always been about the speech of whoever owns it."
The platform continues to elevate powerful actors, including corporations and governments, and functions as a tool for enabling propagandists, Grygiel added. "This isn't a place where, essentially, we're getting social movements and the Arab Spring and Black Lives Matter. This is where you're going to see government voices lifted up over those of the free press."
Musk appears to run Twitter as a sovereign individual, in charge of who gets assigned what labels, Grygiel added, which has changed the platform's fundamental nature. "He shifted it away from journalism and pointed it toward himself, and he dangles that in front of the government as a leverage point."
Academics and journalists, Grygiel suggested, need to consider "life post-Twitter" and congregate on a platform that doesn't get to decide "who is verified and worthy and who isn't."
Beyond examining the dangers that conspiracy theorists pose to Twitter and the widespread impact of misinformation, "we need to look at what is central to society and helping it function," Grygiel said, offering an answer. "News — and I'm talking free press-style news, free of the government. How can we get there in a social media world?"