Election workers must wake up to the risks posed by AI

Opinion

Sikora is a research assistant with the German Marshall Fund’s Alliance for Securing Democracy. Gorman is the alliance’s senior fellow and head of the technology and geopolitics team. Levine is the alliance’s senior elections integrity fellow.

Days before New Hampshire’s presidential primary, up to 25,000 Granite State voters received a mysterious call from “President Joe Biden.” He urged Democrats not to vote in the primary because it “only enables the Republicans in their quest to elect Donald Trump.” But Biden never said this. The recording was a digital fabrication generated by artificial intelligence.

This robocall incident is the highest-profile example so far of how AI could be weaponized to disrupt and undermine this year’s presidential election, but it is merely a glimpse of the challenges election officials will confront. Election workers must be well equipped to counter AI threats if the integrity of this year’s election is to be protected. To that end, our organization, the Alliance for Securing Democracy at the German Marshall Fund of the United States, has published a handbook to help election officials understand and defend against threats supercharged by AI.

Generative AI tools allow users to clone audio of anyone’s voice (saying nearly anything), produce photo-realistic images of anybody (doing nearly anything), and automate human-like writing without spelling errors or grammatical mistakes (in nearly any language). The widespread accessibility of these tools offers malign actors at home and abroad a new, low-cost weapon to launch sophisticated phishing attacks targeting election workers or to flood social media platforms with false or manipulated information that looks real. These tactics do not even need to be successful to sow discord; the mere perception that an attack occurred could cause widespread damage to Americans’ trust in the election.

These advancements come at a time when trust in U.S. elections is already alarmingly low. Fewer than half of Americans express substantial confidence that the votes in the 2024 presidential election will be counted accurately, and distrust is especially pronounced among GOP voters. On top of that, election workers continue to face harassment, high turnover, and onerous working environments, often stemming from lies about election subterfuge. In an age of AI-driven manipulated information, the ability to readily fabricate images, audio, and video that support election-denialist narratives risks lending credence to such claims, or at least creating further confusion around them, and inspiring real-world action that undermines elections.

What should election workers do to prepare for these threats? First, election officials need to incorporate AI risks into their election training and planning. Given the hazards, old and new, that AI can enable, election workers must know the basics of what they are up against, be able to communicate with voters about AI challenges, and have the resources to educate themselves further on these threats. To this end, election offices should consider forming a cybersecurity working group with AI expertise, adding AI-specific education to election worker training, and drafting talking points on AI. Likewise, simulating AI threats in mock elections or tabletop exercises could be invaluable in helping election officials plan responses to such threats.

Second, with hackers increasingly exploiting AI tools for cyberattacks, election officials have to double down on cybersecurity. Basic cyber hygiene practices, such as enforcing multi-factor authentication for all users and choosing strong passwords like passphrases, can protect against the vast majority of attacks. Unfortunately, many election jurisdictions remain well behind in implementing these simple protocols. Moreover, in the run-up to the 2020 election, the FBI identified numerous fake election websites that imitated federal and state election sources using .com or .org domains. As generative AI grows ever more capable of producing realistic fake images and even entire web pages, .gov web addresses will become clear identifiers of authenticity and trust.
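
To make the .gov point concrete, here is a minimal sketch of how a link-checking tool might flag lookalike election sites. The helper and the example URLs are our own hypothetical illustration, not something drawn from the handbook:

```python
from urllib.parse import urlparse

def is_gov_domain(url: str) -> bool:
    """Return True only if the URL's hostname ends in .gov.

    Registration under .gov is restricted to verified U.S. government
    entities, so the suffix is a strong (though not sufficient) signal
    that an election website is authentic.
    """
    host = (urlparse(url).hostname or "").lower()
    return host.endswith(".gov")

# Hypothetical example links; only the first should pass the check.
for link in [
    "https://vote.examplecounty.gov/",   # official .gov address
    "https://examplecounty-votes.com/",  # lookalike .com domain
    "https://vote-examplecounty.org/",   # lookalike .org domain
]:
    verdict = "likely official" if is_gov_domain(link) else "verify before trusting"
    print(f"{link} -> {verdict}")
```

The check is deliberately strict: because only verified government entities can register .gov names, a hostname ending in any other suffix deserves scrutiny before a voter treats it as official.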

Finally, election officials should consider responsibly adopting AI and other new technologies in their own offices. Just as AI offers malign actors tools to undermine elections, it offers election officials instruments to ease operational burdens and even to help them better defend our elections. Election offices can turn to generative AI for time-consuming tasks like drafting emails to prospective poll workers or populating spreadsheets with assignments. But before election workers rush to embrace the technology, jurisdictions must create guidelines for its use, such as requiring robust human oversight. Likewise, election offices could consider piloting the content provenance technologies that companies like OpenAI, Meta, and Google are already adopting; these tools can help voters discern whether content attributed to election offices is authentic.
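
Provenance systems of this kind generally work by cryptographically signing content at the source so that anyone can later verify its origin. The sketch below is a deliberately simplified stand-in for that idea, not the actual C2PA format those companies use: a hypothetical election office signs an announcement with an Ed25519 private key (via the third-party Python cryptography package), and anyone holding the office’s published public key can confirm the bytes are unaltered.

```python
# Simplified provenance sketch (not the real C2PA format): sign content at
# the source, verify it downstream. Requires `pip install cryptography`.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

office_key = Ed25519PrivateKey.generate()  # kept secret by the election office
public_key = office_key.public_key()       # published for voters and platforms

announcement = b"Polls close at 7 p.m. statewide."  # hypothetical content
signature = office_key.sign(announcement)

def is_authentic(content: bytes, sig: bytes) -> bool:
    """Return True if `content` verifies against the office's public key."""
    try:
        public_key.verify(sig, content)
        return True
    except InvalidSignature:
        return False

print(is_authentic(announcement, signature))               # True
print(is_authentic(b"Polls close at 5 p.m.", signature))   # False: tampered
```

In a real deployment the signature would travel with the media as embedded metadata, and verification would rely on standard tooling rather than a hand-rolled helper; the point is only to show why signed content is hard to fake.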

This year’s presidential race will no doubt be a pivotal election. The proliferation of accessible AI technology will both magnify malign actors’ ability to push false election narratives and make it easier for them to breach electoral systems. It is vital that the United States fortify its elections against the threats AI exacerbates, and that starts with ensuring that election workers on the front lines of democracy are equipped to meet these challenges.
