Infamous Russian troll farm appears to be source of anti-Ukraine propaganda

Just before 11 a.m. Moscow Standard Time on March 1, after a night of Russian strikes on Kyiv and other Ukrainian cities, a set of Russian-language Twitter accounts spread a lie that Ukraine was fabricating civilian casualties.

One account created last year, @Ne_nu_Che, shared a video of a man standing in front of rows of dark gray body bags that appeared to be filled with corpses. As he spoke to the camera, one of the encased bodies behind him lifted its arms to stop the top of the bag from blowing away. The video was taken from an Austrian TV report about a climate change demonstration held in Vienna in February. But @Ne_nu_Che claimed it was from Ukraine.

“Propaganda makes mistakes too, one of the corpses came back to life right as they were counting the deaths of Ukraine’s civilians,” the tweet said.

Eight minutes later, another account, @Enot_Kremle_Bot, tweeted the same video. “I’M SCREAMING! One of the ‘corpses’ came back to life during a segment about civilian deaths in the Ukraine. Information war is reaching a new level,” they said.

Two other accounts created last fall within a few days of @Enot_Kremle_Bot soon shared the same video and accusations of fake civilian casualties. “Ukrainian propaganda does not sleep,” said one.

The Twitter profiles are part of a pro-Putin network of dozens of accounts spread across Twitter, TikTok and Instagram whose behavior, content and coordination are consistent with Russian troll factory the Internet Research Agency, according to Darren Linvill, a Clemson University professor who, along with another professor, Patrick Warren, has spent years studying IRA accounts.

The IRA burst into the American consciousness after its paid trolls used thousands of English-language accounts across social media platforms to influence American voters during the 2016 presidential election. The IRA was at the center of a 2018 Department of Justice criminal indictment for its alleged effort to “interfere with elections and political processes.”

“These accounts express every indicator that we have to suggest they originate with the Internet Research Agency,” Linvill said. “And if they aren’t the IRA, that’s worse, because I don’t know who’s doing it.”

An analysis of the accounts’ activity by the Clemson Media Forensics Hub and ProPublica found they posted at defined times consistent with the IRA workday, were created in the same time frame and posted similar or identical text, photos and videos across accounts and platforms. Posts from Twitter accounts in the network dropped off on weekends and Russian holidays, suggesting the posters had regular work schedules.
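
To illustrate the kind of timing analysis described above, here is a minimal, hypothetical sketch in Python (not the researchers’ actual code). It assumes a CSV of tweet timestamps; the file name and column names are invented for the example. It measures how much of the posting falls inside Moscow business hours and how sharply volume drops on weekends.

```python
# Hypothetical sketch of a posting-time analysis; file and column names are illustrative.
import pandas as pd

tweets = pd.read_csv("network_tweets.csv", parse_dates=["created_at"])

# Convert naive UTC timestamps to Moscow time, where the suspected operators work.
msk = tweets["created_at"].dt.tz_localize("UTC").dt.tz_convert("Europe/Moscow")

# Share of posts falling inside a typical 09:00-18:00 workday, Monday through Friday.
workday = msk.dt.hour.between(9, 17) & (msk.dt.dayofweek < 5)
print(f"Posts during Moscow business hours: {workday.mean():.0%}")

# Posting volume by day of week; a sharp weekend drop-off suggests a staffed office.
print(msk.dt.day_name().value_counts())
```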

Many of the accounts also shared content from facktoria.com, a satirical Russian website that began publishing in February. Its domain registration records are private, and it’s unclear who operates it. Twitter removed its account after being contacted by ProPublica.

The pro-Putin network included roughly 60 Twitter accounts, over 100 on TikTok, and at least seven on Instagram, according to the analysis and removals by the platforms. Linvill and Warren said the Twitter accounts share strong connections with a set of hundreds of accounts they identified a year ago as likely being run by the IRA. Twitter removed nearly all of those accounts. It did not attribute them to the IRA.

The most successful accounts were on TikTok, where a set of roughly a dozen analyzed by Clemson researchers and ProPublica racked up more than 250 million views and over 8 million likes with posts that promoted Russian government statements, mocked President Joe Biden and shared fake Russian fact-checking videos that were revealed by ProPublica and Clemson researchers earlier this week. On Twitter, they attacked jailed Russian opposition leader Alexei Navalny and blamed the West for preventing Russian athletes from competing under the Russian flag in the Olympics.

Late last month, the network of accounts shifted to focus almost exclusively on Ukraine, echoing similar narratives and content across accounts and platforms. A popular post by the account @QR_Kod accused the Ukrainian military of using civilians as human shields. Another post by @QR_Kod portrayed Ukraine as provoking Russia at the behest of its NATO masters. Both tweets received hundreds of likes and retweets and were posted on the same day as the body bag video. At least two Twitter accounts in the network also shared fake fact-checking videos.

The findings indicate that professionalized trolling remains a force in domestic Russian propaganda efforts and continues to adapt across platforms, according to Linvill.

“I can’t stress enough the importance of understanding the way that this is a tool for Putin to control narratives among his own people, a way for him to lie to his own people and control the conversation,” Linvill said. “To suggest that the West is blanketly winning this information war is true only in some places. Putin doesn’t have to win the information war, he just has to hold his ground. And these accounts are helping him do that.”

After inquiries from ProPublica, all of the active accounts were removed from TikTok, and nearly all were suspended by Twitter. Meta said it removed one Instagram account for violating its spam policy and that the others did not violate its rules. None of the platforms attributed the accounts to the IRA. Twitter and TikTok said the accounts engaged in coordinated behavior or other activity that violated platform policies.

A TikTok spokesperson said the initial eight accounts shared with it violated its policy against “harmful misinformation.” TikTok removed an additional 98 accounts it determined were part of the same pro-Putin network.

“We continue to respond to the war in Ukraine with increased safety and security resources to detect emerging threats and remove harmful misinformation,” said a statement provided by the company. “We also partner with independent fact-checking organizations to support our efforts to help TikTok remain a safe and authentic place.”

A Twitter spokesperson called the roughly 60 accounts it removed “malicious” and said they violated its platform manipulation and spam policy, but declined to be more specific. The spokesperson said the company had determined that the active accounts shared by ProPublica violated its policies before being asked about them, but had left the set of 37 accounts online “to make it harder for bad actors to understand our detections.”

The accounts were removed by Twitter within 48 hours of ProPublica contacting the company about them. The week before, Twitter removed 27 accounts that the Clemson researchers also identified as likely IRA accounts.

“Our investigation into these accounts remains ongoing, and we will take further action when necessary,” said a statement from a Twitter spokesperson. “As is standard, when we identify information operation campaigns that we can reliably attribute to state-linked activity, we will disclose this to the public.”

Twitter declined to offer more details on why it left online roughly 30 accounts that it had identified as violating its policies, allowing them to continue spreading propaganda. It also declined to comment on connections between the roughly 60 accounts in this recent network and the hundreds of accounts flagged by Linvill and Warren last spring as possible IRA profiles. Linvill said he identified the recent accounts largely based on their commonality with the previous set of 200.

“I connect these current accounts to the ongoing activity over the course of the past year by carefully tracking accounts’ tactics, techniques and procedures,” he said.

Platforms may be hesitant to attribute activity to the IRA in part because the agency has adapted and made its efforts harder to expose, according to Linvill. But he said social platforms should disclose more information about the networks they remove, even if they can’t say with certainty who is running them.

“In every other area of cybersecurity, dangerous activity from bad actors is disclosed routinely without full confidence in the source of the activity. We name and disclose computer viruses or hacker groups, for instance, because that is in the public interest,” he said. “The platforms should do the same. The Russian people should know that some sophisticated and well-organized group is covertly using social media to encourage support for Putin and the war in Ukraine.”

The Internet Research Agency is a private company owned by Yevgeny Prigozhin, a Russian entrepreneur known as “Putin’s Chef.” Prigozhin is linked to a sprawling empire ranging from catering services to the military mercenary company Wagner Group, which was reportedly tasked with assassinating President Volodymyr Zelenskyy. The IRA launched in St. Petersburg in 2013, hiring young, internet-savvy people to post on blogs, discussion forums and social media to promote Putin’s agenda to a domestic audience. After being exposed for its efforts to influence the 2016 U.S. election, the IRA attempted to outsource some of its English-language operations to Ghana ahead of the 2020 election. Efforts to reach Prigozhin were unsuccessful.

But it never stopped its core work of influencing Russian-speaking audiences. The IRA is part of a sprawling domestic state propaganda operation whose current impact can be seen in the number of Russians who refuse to believe that an invasion has happened and who assert that Ukrainians are being held hostage by a Nazi coup.

Prior to the invasion, accounts in the network identified by the Clemson Media Forensics Hub and ProPublica celebrated Russian achievements at the Olympics.

“They were deep in the Olympics, tweeting about Russian victories and the Olympics and how the Russians were being robbed by the West and not allowed to compete under their own flag,” Linvill said.

After the invasion began, they moved to unify people behind Putin’s war.

“It was a slow shift,” he said. “And this is something I’ve seen from the IRA before: When a significant world event happens, they don’t always know immediately how to respond to it.”

By late February, the network had found its voice in part by echoing messages from Russian officialdom. The accounts justified the invasion, blamed NATO and the West and seeded doubts about civilian death tolls and Russian military setbacks. When sanctions kicked in and Western companies began pulling out of Russia, they said it was good news because Russian products are better. (Two Twitter accounts in the network shared the same video of a man smashing an old iPad with a hammer.)

“These accounts were sophisticated, they knew their audience, and they got engagement far surpassing the number of followers that they had,” Linvill said.

Paul Stronski, a senior fellow in the Russia and Eurasia Program at the Carnegie Endowment for International Peace, reviewed content shared by more than two dozen of the Twitter accounts prior to their suspension. “A lot of this is the type of stuff I would expect from Russian trolls,” said Stronski, who reads and speaks Russian.

He said many of the accounts adopt an approachable, humorous tone to generate engagement and appear relatable to the younger audiences found on social media.

“They’re very critical of prominent Russians who have criticized this war, questioning their patriotism,” Stronski said. “They’re saying in effect that during wartime you shouldn’t be criticizing your own. You should be lining up behind the state.”

When President Biden flubbed the pronunciation of “Ukrainians” during his recent State of the Union address, several of the accounts on Twitter, TikTok and Instagram shared the clip and mocked him. While that clip spread widely outside of the suspected IRA network as well, the accounts often spread more obscure content in coordination. Multiple Twitter accounts, for example, shared a screenshot of a Russian actor’s tweet that he cared more about being able to use Apple Pay than the war in Ukraine. The accounts criticized him, with one warning that “the internet remembers everything.”

Before the account takedowns, the Russian government had begun closing off the country from global social media and information sources. It restricted access to Twitter and blocked Facebook. The Russian legislature passed a law that allows for a 15-year sentence for people who contradict the official government position on the war. As a result, TikTok announced it would pause uploads of new videos in Russia.

Some of the accounts in the network saw the writing on the wall and prepared their audience to move to Telegram, a messaging app widely used in Russia.

“Friends! With happiness I'd like to tell you that I decided to make the t.me/enot_kremlebot channel, in which you will see analytics to the fullest extent. Twitter could block us any minute!” tweeted @Enot_Kremle_Bot on March 5. “I really don’t want to lose my treasured and close-to-my-heart audience! Go to this link and subscribe.”

In the Ukraine conflict, fake fact-checks are being used to spread disinformation

by Craig Silverman and Jeff Kao

On March 3, Daniil Bezsonov, an official with the pro-Russian separatist region of Ukraine that styles itself as the Donetsk People’s Republic, tweeted a video that he said revealed “How Ukrainian fakes are made.”

The clip showed two juxtaposed videos of a huge explosion in an urban area. Russian-language captions claimed that one video had been circulated by Ukrainian propagandists who said it showed a Russian missile strike in Kharkiv, the country’s second-largest city.

But, as captions in the second video explained, the footage actually showed a deadly arms depot explosion in the same area back in 2017. The message was clear: Don’t trust footage of supposed Russian missile strikes. Ukrainians are spreading lies about what’s really going on, and pro-Russian groups are debunking them. (Bezsonov did not respond to questions from ProPublica.)

It seemed like yet another example of useful wartime fact-checking, except for one problem: There’s little to no evidence that the video claiming the explosion was a missile strike ever circulated. Instead, the debunking video itself appears to be part of a novel and disturbing campaign that spreads disinformation by disguising it as fact-checking.

Researchers at Clemson University’s Media Forensics Hub and ProPublica identified more than a dozen videos that purport to debunk apparently nonexistent Ukrainian fakes. The videos have racked up more than 1 million views across pro-Russian channels on the messaging app Telegram, and have garnered thousands of likes and retweets on Twitter. A screenshot from one of the fake debunking videos was broadcast on Russian state TV, while another was spread by an official Russian government Twitter account.

The goal of the videos is to inject a sense of doubt among Russian-language audiences as they encounter real images of wrecked Russian military vehicles and the destruction caused by missile and artillery strikes in Ukraine, according to Patrick Warren, an associate professor at Clemson who co-leads the Media Forensics Hub.

“The reason that it’s so effective is because you don’t actually have to convince someone that it’s true. It’s sufficient to make people uncertain as to what they should trust,” said Warren, who has conducted extensive research into Russian internet trolling and disinformation campaigns. “In a sense they are convincing the viewer that it would be possible for a Ukrainian propaganda bureau to do this sort of thing.”

Russia’s Feb. 24 invasion of Ukraine unleashed a torrent of false and misleading information from both sides of the conflict. Viral social media posts claiming to show video of a Ukrainian fighter pilot who shot down six Russian planes — the so-called “Ghost of Kyiv” — were actually drawn from a video game. Ukrainian government officials said 13 border patrol officers guarding an island in the Black Sea were killed by Russian forces after unleashing a defiant obscenity, only to acknowledge a few days later that the soldiers were alive and had been captured by Russian forces.

For its part, the Russian government is loath to admit such mistakes, and it launched a propaganda campaign before the conflict even began. It refuses to use the word “invasion” to describe its use of more than 100,000 troops to enter and occupy territory in a neighboring country, and it is helping spread a baseless conspiracy theory about bioweapons in Ukraine. Russian officials executed a media crackdown culminating in a new law that forbids outlets in the country from publishing anything that deviates from the official stance on the war, while blocking Russians’ access to Facebook and the BBC, among other outlets and platforms.

Media outlets around the world have responded to the onslaught of lies and misinformation by fact-checking and debunking content and claims. The fake fact-check videos capitalize on these efforts to give Russian-speaking viewers the idea that Ukrainians are widely and deliberately circulating false claims about Russian airstrikes and military losses. Transforming debunking into disinformation is a relatively new tactic, one that has not been previously documented during the current conflict.

“It’s the first time I’ve ever seen what I might call a disinformation false-flag operation,” Warren said. “It’s like Russians actually pretending to be Ukrainians spreading disinformation.”

The videos combine with propaganda on Russian state TV to convince Russians that the “special operation” in Ukraine is proceeding well, and that claims of setbacks or air strikes on civilian areas are a Ukrainian disinformation campaign to undermine Russian confidence.

It’s unclear who is creating the videos, or if they come from a single source or many. They have circulated for roughly two weeks, first appearing a few days after Russia invaded. The first video Warren spotted claimed that a Ukrainian flag was removed from old footage of a military vehicle and replaced with a Z, a now-iconic insignia painted on Russian vehicles participating in the invasion. But when he went looking for examples of people sharing the misleading footage with the Z logo, he came up empty.

“I’ve been following [images and videos of the war] pretty carefully in the Telegram feeds, and I had never seen the video they were claiming was a propaganda video, anywhere,” he said. “And so I started digging a little more.”

Warren unearthed other fake fact-checking videos. One purported to debunk false footage of explosions in Kyiv, while others claimed to reveal that Ukrainians were circulating old videos of unrelated explosions and mislabeling them as recent. Some of the videos claim to debunk efforts by Ukrainians to falsely label military vehicles as belonging to the Russian military.

“It’s very clear that this is targeted at Russian-speaking audiences. They’re trying to make people think that when you see destroyed Russian military hardware, you should be suspicious of that,” Warren said.

There’s no question that older footage of military vehicles and explosions has circulated with false or misleading claims connecting it to Ukraine. But in the videos identified by Warren, the allegedly Ukrainian-created disinformation does not appear to have circulated prior to the Russian-language debunkings.

Searches for examples of the misleading videos came up empty across social media and elsewhere. Tellingly, none of the supposed debunking videos cite a single example of the Ukrainian fakes being shared on social media or elsewhere. Examination of the metadata of two videos found on Telegram appears to provide an explanation for that absence: Whoever created these videos simply duplicated the original footage to create the alleged Ukrainian fake.

A digital video file contains embedded data, called metadata, that indicates when it was created, what editing software was used and the names of clips used to create a final video, among other information. Two Russian-language debunking videos contain metadata that shows they were created using the same video file twice — once to show the original footage, and once to falsely claim it circulated as Ukrainian disinformation. Whoever created the video added different captions or visual elements to fabricate the Ukrainian version.

“If these videos were what they purport to be, they would be a combination of two separate video files, a ‘Ukrainian fake’ and the original footage,” said Darren Linvill, an associate professor at Clemson who co-leads the Media Forensics Hub with Warren. “The metadata we located for some videos clearly shows that they were created by duplicating a single video file and then editing it. Whoever crafted the debunking video created the fake and debunked it at the same time.”

The Media Forensics Hub and ProPublica ran tests to confirm that a video created using two copies of the same footage will cause the file name to appear twice in the video’s metadata.
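
For readers curious what that check looks like in practice, the following is a minimal, hypothetical sketch rather than the Media Forensics Hub’s actual tooling. It assumes the open-source exiftool utility is installed, uses an invented video file name, and simply flags any source-clip name that appears more than once in the file’s metadata.

```python
# Hypothetical metadata check; requires exiftool on the system path.
import json
import re
import subprocess
from collections import Counter

# Dump every metadata field exiftool can read from the video as JSON.
raw = subprocess.run(
    ["exiftool", "-json", "debunk_video.mp4"],  # illustrative file name
    capture_output=True, text=True, check=True,
).stdout
fields = json.loads(raw)[0]

# Collect anything that looks like a referenced media file name;
# editing software often embeds the names of the clips it combined.
clip_pattern = re.compile(r"[\w\- ]+\.(?:mp4|mov|mts|avi)", re.IGNORECASE)
names = Counter()
for value in fields.values():
    for match in clip_pattern.findall(str(value)):
        names[match.lower()] += 1

# A source clip listed twice suggests the "fake" and the "original"
# were cut from the very same file.
for name, count in names.items():
    if count > 1:
        print(f"{name} appears {count} times in the metadata")
```

A genuine side-by-side comparison would be expected to reference two distinct source files; the same clip name appearing twice is what points to a fabricated “Ukrainian fake.”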

Joan Donovan, the research director of Harvard’s Shorenstein Center on Media, Politics and Public Policy, called the videos “low-grade information warfare.” She said they don’t need to spread widely on social media to be effective, since their existence can be cited by major Russian media sources as evidence of Ukraine’s online disinformation campaign.

“It works in conjunction with state TV in the sense that you can put something like this online and then rerun it on TV as if it’s an example of what’s happening online,” she said.

That’s exactly what happened on March 1, when state-controlled Channel One aired a screenshot taken from one of the videos identified by Warren. The image was shown during a morning news program as a warning to “inexperienced viewers” who might be fooled by false images of Ukrainian forces destroying Russian military vehicles, according to a BBC News report.

“Footage continues to be circulated on the internet which cannot be described as anything but fake,” the BBC quoted a Channel One presenter telling the audience.

At least one Russian government account has promoted an apparent fake debunking video. On March 4, the Russian Embassy in Geneva tweeted a video with a voiceover that said “Western and Ukrainian media are creating thousands of fake news on Russia every day.” The first example showed a video where the letter “Z” was supposedly superimposed onto a destroyed military vehicle.

Another video that circulated on Russian nationalist Telegram channels such as @rlz_the_kraken, which has more than 200,000 subscribers, claimed to show that fake explosions were added to footage of buildings in Kyiv. The explosions and smoke were clearly fabricated, and the video claimed they were added by Ukrainians.

But as with the other fake debunking videos, reverse image searches didn’t turn up any examples of the supposedly manipulated video being shared online. The metadata associated with the video file indicates that it may have been manipulated to add sound and other effects using Microsoft Game DVR, a piece of software that records clips from video games.

The fake debunking videos have predominantly spread on Russian-language Telegram channels with names like @FAKEcemetary. In recent days they made the leap to other languages and platforms. One video is the subject of a Reddit thread where people debated the veracity of the footage. On Twitter, they are being spread by people who support Russia, and who present the videos as examples of Ukrainian disinformation.

Francesca Totolo, an Italian writer and supporter of the neo-fascist CasaPound party, recently tweeted the video claiming that a Ukrainian flag had been removed from a military vehicle and replaced with a Russian Z.

“Now wars are also fought in the media and on social networks,” she said.
