Elections workers must wake up to the risks posed by AI

Opinion


Sikora is a research assistant with the German Marshall Fund's Alliance for Securing Democracy. Gorman is the alliance’s senior fellow and head of the technology and geopolitics team; Levine is the senior elections integrity fellow.

Days before New Hampshire’s presidential primary, up to 25,000 Granite State voters received a mysterious call from “President Joe Biden.” He urged Democrats not to vote in the primary because it “only enables the Republicans in their quest to elect Donald Trump.” But Biden never said this. The recording was a digital fabrication generated by artificial intelligence.

This robocall incident is the highest-profile example of how AI could be weaponized to both disrupt and undermine this year’s presidential election, but it is merely a glimpse of the challenges election officials will confront. Election workers must be well-equipped to counter AI threats to ensure the integrity of this year’s election — and our organization, the Alliance for Securing Democracy at the German Marshall Fund of the United States, published a handbook to help them understand and defend against threats supercharged by AI.


Generative AI tools allow users to clone audio of anyone’s voice (saying nearly anything), produce photo-realistic images of anybody (doing nearly anything), and automate human-like writing without spelling errors or grammatical mistakes (in nearly any language). The widespread accessibility of these tools offers malign actors at home and abroad a new, low-cost weapon to launch sophisticated phishing attacks targeting election workers or to flood social media platforms with false or manipulated information that looks real. These tactics do not even need to be successful to sow discord; the mere perception that an attack occurred could cause widespread damage to Americans’ trust in the election.

These advancements come at a time when trust in U.S. elections is already alarmingly low. Less than half of Americans express substantial confidence that the votes in the 2024 presidential election will be counted accurately, with particular distrust among GOP voters. On top of that, election workers continue to face harassment, high turnover, and onerous working environments, often stemming from lies about election subterfuge. In an age of AI-driven manipulated information, the ability to readily fabricate images, audio and video to support election denialist narratives risks lending credence to — or at least creating further confusion around — such claims and inspiring real-world action that undermines elections.

What should election workers do to prepare for these threats? First, election officials need to incorporate AI risks into their election training and planning. Given election hazards old and new that AI can enable, it is necessary that election workers know the basics of what they are up against, can communicate to voters about AI challenges and are well-resourced to educate themselves further on these threats. To this end, election offices should consider forming a cybersecurity working group with AI expertise, adding AI-specific education to election worker training, and drafting talking points on AI. Likewise, simulating AI threats in mock elections or tabletop exercises could be invaluable in helping election officials plan responses to such threats.

Second, with hackers increasingly exploiting AI tools for cyberattacks, election officials have to double down on cybersecurity. Basic cybersecurity hygiene practices — such as enforcing multi-factor authentication and using strong passwords like passphrases — can help protect against the vast majority of attacks. Unfortunately, many election jurisdictions are still well behind in implementing these simple protocols. Moreover, in the runup to the 2020 election, the FBI identified numerous fake election websites imitating federal and state elections sources using .com or .org domains. With generative AI increasingly able to produce realistic fake images and even web pages, .gov web addresses will become clear identifiers of authenticity and trust.

Finally, election officials should consider leveraging the responsible use of AI and other new technologies in their offices. Just as AI offers malign actors tools to undermine elections, the technology offers election officials instruments to ease operational burdens or even help them better defend our elections. Election offices can turn to generative AI to help with time-consuming tasks like drafting emails to prospective poll workers or populating spreadsheets with assignments. But before election workers rush to embrace AI, jurisdictions must create guidelines for its use, such as requiring robust human oversight. Likewise, election offices could consider piloting content provenance technologies that companies like OpenAI, Meta, and Google are already adopting; these technologies can help voters discern whether content from election offices is authentic.

This year’s presidential race will no doubt be a pivotal election. The proliferation of accessible AI technology will both magnify and ease malign actors’ abilities to push false election narratives and breach electoral systems. It is vital that the United States fortify its elections against threats that AI exacerbates. This starts with ensuring that election workers on the frontlines of democracy are equipped to meet these challenges.

