Facebook knows that disclaimers on Trump’s misinformation do not work: report
Donald Trump speaking with supporters at an event hosted by Students for Trump and Turning Point Action at Dream City Church in Phoenix, Arizona, in 2020. (Gage Skidmore/Flickr)

Posts created by President Donald J. Trump still rank among the most engaging content on Facebook, even though the social network's internal data shows that the labels recently applied to his posts decrease reshares by only about 8 percent, BuzzFeed News reported Monday.

The labels, referred to internally as “informs,” do little to stop the posts from being reshared and spreading potentially false information.

"We have evidence that applying these informs to posts decreases their reshares by ~8 percent,” the data scientists said. “However, given that Trump has so many shares on any given post, the decrease is not going to change shares by orders of magnitude.”

The data scientists noted that adding the labels was never expected to reduce the spread of false content. Instead, the labels are used “to provide factual information in context to the post.”

"Ahead of this election we developed informational labels, which we applied to candidate posts with a goal of connecting people with reliable sources about the election," Facebook spokesperson Liz Bourgeois said in a statement, adding that labels were "just one piece of our larger election integrity efforts."

Earlier this year, Facebook took down a post by Trump, but only because it violated the company’s rules around COVID-19 misinformation.

“We have a responsibility to help maintain the integrity of elections to clear up confusion and to provide credible, authoritative information when we can,” Facebook CEO Mark Zuckerberg told employees during a companywide meeting on Oct. 15. While he discussed the use of labels during that talk, he made no mention of efforts to limit the spread of Trump’s election misinformation.

Twitter, by contrast, has been more aggressive in preventing Trump's inaccurate and often inflammatory tweets from being liked or retweeted. As BuzzFeed News noted, the company said last week that it had labeled about 300,000 tweets for misleading information about the election and restricted more than 450 of them from being liked or retweeted, measures it said decreased engagement with those tweets by about 29 percent.

“Is there any indication that the ‘this post might not be true’ flags have continued to be effective at all in slowing misinformation spread?” asked a Facebook employee on the company’s internal message board. “I have a feeling people have quickly learned to ignore these flags at this point. Are we limiting reach of these posts at all or just hoping that people will do it organically?”

“The fact that we refuse to hold accounts with millions of followers to higher standards than everyone else (and often they get lower standards) is one of the most upsetting things about working here,” said another employee.