Rosenfeld is the editor and chief correspondent of Voting Booth, a project of the Independent Media Institute.
As 2024’s elections loom, the biggest worries voiced by officials and democracy advocates concern the reach of mistaken or deceptive narratives (misinformation and disinformation), their power to shape political beliefs and identities, and their potential to provoke threats and violence.
“It’s the minds of voters,” replied Michigan Secretary of State Jocelyn Benson, a Democrat, when asked at the 2023 Aspen Cyber Summit about the biggest threat to 2024’s elections. Benson emphasized that it was not the reliability of the voting system or cybersecurity. “It’s the confusion, and chaos, and the sense of division, and the sense of disengagement that bad actors are very much trying to instill in our citizenry.”
“What I’m worried about most for the 2024 election — election vigilantism,” Marc Elias, a top Democratic Party lawyer, said during a recent “Defending Democracy” podcast. “Election vigilantism, to put it simply, is when individuals or small groups act in a sort of loosely affiliated way to engage in voter harassment, voter intimidation, misinformation campaigns or voter challenges.”
Such assessments are increasingly heard in election defense circles. So, too, is a consensus that the best response is recruiting “trusted voices,” or credible people in communities, online or otherwise, who will quickly say “not so fast,” attest to the process, and, hopefully, be heard before passions trample facts or run amok.
“All the tools we need to instill confidence in our elections exist,” Benson told the cyber summit. “We just have to get them — not just in the hands of trusted voices, but then communicate effectively to the people who need to hear them.”
But communicating effectively is not simple, according to the nation’s leading scholars and researchers who study how rumors spread online, and how people use online content to make sense of what is happening during crises.
A decade ago, when Kate Starbird, a Stanford University computer science graduate turned professional basketball player, returned to academia, she was interested in studying how social media could be helpful in crises. Starbird, the future co-founder of the University of Washington’s Center for an Informed Public, helped create a field known as “crisis informatics.”
She and her colleagues increasingly were drawn to how false rumors emerge and spread. They confirmed what many people suspected: Mistaken online information tends to travel farther and faster than facts and corrections. “[B]reaking news” accounts often magnify rumors. People who fall for bogus storylines might correct themselves, but not before spreading them.
Those insights were jarring. But as Starbird and her peers turned to tracking the post-2020 attacks on America’s elections by Donald Trump, copycat Republicans and right-wing media, they were no longer looking at the dynamics of misinformation and disinformation from the safety of academia.
By scrutinizing millions of tweets, Facebook posts, YouTube videos and Instagram pages for misleading and unsubstantiated claims, and alerting platforms and federal officials about the most troubling examples, they found themselves in the crosshairs of Trump loyalists. The researchers were targeted and harassed much like election officials across the country.
Starbird was dragged before congressional inquisitions. Her university email was targeted. As the University of Washington and other universities were sued, she and her peers spent more time with lawyers than with students, Starbird recounted in a keynote address at the Stanford Internet Observatory’s 2023 Trust and Safety Research Conference.
What was lost in the political heat was their field’s core insights — how mistaken or false beliefs take shape online and why they are so hard to shake. That dynamic involves the interplay of digital content and how we think and act.
Debunking disinformation is not the same as changing minds, the conference’s researchers explained. Their understanding emerged as online threats evolved after the 2016 presidential election. That year, Russian operatives created fake personas and pages on Facebook and elsewhere to discourage key Democratic blocs from voting. Then, the problem and its solution were seen mostly as matters of cybersecurity, Starbird recounted.
At that time, the remedy was finding technical ways to quickly spot and shut down the forged accounts and pages. By 2020, the problem and its dynamics had shifted. The false narratives were coming from domestic sources. President Trump and his allies were real people using authentic social media accounts. Trump set the tone. Influencers — right-wing personalities, pundits and media outlets — followed his cues. Ordinary Americans not only believed their claims and helped to spread them; some Trump cultists spun stolen-election clichés into vast conspiracies and even fabricated false evidence.
Scholars see disinformation as a participatory phenomenon. There is more going on than flawed or fake content being intentionally created, spread and reacted to, Starbird explained. To start, disinformation is not always entirely false. It often is a story built around a grain of truth or a plausible scenario, she said, but “layered with exaggerations and distortions to create a false sense of reality.”
Moreover, disinformation “rarely functions” as a single piece of content. It is part of a series of interactions or a campaign, such as Trump’s oft-repeated claim that elections are rigged. Crucially, while propaganda and disinformation are often talked about as being deceptive, Starbird said that “when you actually look at disinformation campaigns online, most of the content that spreads doesn’t come from people — those that are part of [the initial bogus campaign], it comes from unwitting actors or sincere believers in the content.”
These layered dynamics blur the lines between what is informational and what is psychological. Starbird cited an example from her research. In 2020 in Arizona, Trump supporters had been hearing for months that November’s election would be stolen. So when they saw that ink from some pens given to Phoenix-area voters bled through their paper ballots, fears took hold, and the so-called “Sharpiegate” conspiracy emerged and went viral. Officials explained that the ballots were printed in a way that kept any bleed-through from causing a vote to go undetected or be misread. But that barely dented the erroneous conclusion that a presidential election was being stolen.
“Sometimes misinformation stands for false evidence, or even vague [evidence] or deep fakes, or whatever,” Starbird said. “But more often, misinformation comes in the form of misrepresentations, misinterpretations and mischaracterizations ... the frame that we use to interpret that evidence; and those frames are often strategically shaped by political actors and campaigns.”
Starbird was not the only researcher with insights into how and where mis- and disinformation are likely to surface in 2024. Eviane Leidig, a postdoctoral fellow at Tilburg University in the Netherlands, described another conduit where “personal radicalization and recruitment narratives” are increasingly found: influencer-led lifestyle websites, where personal testimonials can veer into political extremism.
Many factors shape beliefs, identity, community and a sense of belonging, said Cristina López G., a senior analyst at social media mapping firm Graphika, during her presentation on rabid online fandoms.
“Fandoms are really microcosms of the internet,” said López G., who studied arguments among Taylor Swift factions. “Their members are driven by the same thing as every internet user, which is just [to] establish their dominance, and safeguard their community beliefs, and beliefs become really closely tied to who they are online ... who they identify as online.”
Those factors make changing minds more complicated than just presenting facts.
“What this means is that real-world events do very little to change this belief,” she concluded, adding that the same dynamic applies to politics. “If you change beliefs, you’re no longer part of that community. ... So, it’s not really, ‘what’s real?’ and ‘what’s not?’ It’s really the friends we’ve made along the way.”
“Once you begin to see this phenomenon through that lens [where disinformation is participatory and tribal], you realize it’s everywhere,” Starbird said.
The best hope was that people within these communities, whom Leidig called “counter-influencers,” would speak out and slow deceptions before they went viral. The election world calls that strategy finding “trusted voices.” But even if such voices are found and communicate to the “people who need to hear them,” as Benson put it, there is no guarantee that they will be heeded, for all the reasons cited by the scholars.
Still, Starbird ended her Stanford address with a “call to action” in 2024.
“We’re not going to solve the problem with misinformation, disinformation, manipulation … with one new label, or a new educational initiative, or a new research program,” she said. “It’s going to have to be all of the above.”
This article was produced by Voting Booth, a project of the Independent Media Institute.