At the end of his presidency, Donald Trump was frequently reprimanded by Twitter for spreading false statements about the 2020 election. But a recent study found that flagging his tweets as misinformation did little to stop their spread.
In fact, Trump's tweets that were marked as containing misinformation spread further than the tweets that received no intervention from Twitter, researchers at New York University's Center for Social Media and Politics found. Their report, released Tuesday, analyzed more than 1,100 of the former president's tweets from the start of November 2020 to Jan. 8, the day Trump was suspended from the social media platform.
Of the tweets analyzed, 303 received "soft intervention" from Twitter, meaning they were labeled as disputed and potentially misleading. Sixteen tweets contained falsehoods egregious enough to warrant "hard intervention" and were removed from the site or blocked from user engagement. The remaining 830 tweets received no intervention from Twitter.
While hard interventions did stop select misinformation from spreading further on Twitter, soft interventions did not have the same effect. The report found that tweets with misinformation labels received more user engagement than those with no intervention at all.
But even blocking false messages on Twitter was not enough to stop the spread of Trump's worst misinformation, the report found. His tweets that were removed from the platform saw spikes in engagement on other social media outlets, namely Facebook, Instagram and Reddit.
However, the report notes, these findings do not necessarily mean Twitter's misinformation warning labels were ineffective, or that they triggered a so-called "Streisand effect," wherein an attempt to hide or remove information unintentionally draws more attention to it.
"It's possible Twitter intervened on posts that were more likely to spread, or it's possible Twitter's interventions caused a backlash and increased their spread," said Zeve Sanderson, one of the report's co-authors.
"Nonetheless, the findings underscore how intervening on one platform has limited impact when content can easily spread on others," said Megan Brown, another co-author of the report. "To more effectively counteract misinformation on social media, it's important for both technologists and public officials to consider broader content moderation policies that can work across social platforms rather than singular platforms."