Elections workers must wake up to the risks posed by AI


Sikora is a research assistant with the German Marshall Fund's Alliance for Securing Democracy. Gorman is the alliance's senior fellow and head of its technology and geopolitics team. Levine is its senior elections integrity fellow.

Days before New Hampshire’s presidential primary, up to 25,000 Granite State voters received a mysterious call from “President Joe Biden.” He urged Democrats not to vote in the primary because it “only enables the Republicans in their quest to elect Donald Trump.” But Biden never said this. The recording was a digital fabrication generated by artificial intelligence.

This robocall incident is the highest-profile example of how AI could be weaponized to both disrupt and undermine this year’s presidential election, but it is merely a glimpse of the challenges election officials will confront. Election workers must be well-equipped to counter AI threats to ensure the integrity of this year’s election — and our organization, the Alliance for Securing Democracy at the German Marshall Fund of the United States, published a handbook to help them understand and defend against threats supercharged by AI.


Generative AI tools allow users to clone audio of anyone’s voice (saying nearly anything), produce photo-realistic images of anybody (doing nearly anything), and automate human-like writing without spelling errors or grammatical mistakes (in nearly any language). The widespread accessibility of these tools offers malign actors at home and abroad a new, low-cost weapon to launch sophisticated phishing attacks targeting election workers or to flood social media platforms with false or manipulated information that looks real. These tactics do not even need to be successful to sow discord; the mere perception that an attack occurred could cause widespread damage to Americans’ trust in the election.

These advancements come at a time when trust in U.S. elections is already alarmingly low. Fewer than half of Americans express substantial confidence that the votes in the 2024 presidential election will be counted accurately, with particular distrust among GOP voters. On top of that, election workers continue to face harassment, high turnover, and onerous working environments, often stemming from false claims of election subterfuge. In an age of AI-driven manipulated information, the ability to readily fabricate images, audio and video to support election denialist narratives risks lending credence to — or at least creating further confusion around — such claims and inspiring real-world action that undermines elections.

What should election workers do to prepare for these threats? First, election officials need to incorporate AI risks into their election training and planning. Given election hazards old and new that AI can enable, it is necessary that election workers know the basics of what they are up against, can communicate to voters about AI challenges and are well-resourced to educate themselves further on these threats. To this end, election offices should consider forming a cybersecurity working group with AI expertise, adding AI-specific education to election worker training, and drafting talking points on AI. Likewise, simulating AI threats in mock elections or tabletop exercises could be invaluable in helping election officials plan responses to such threats.

Second, with hackers increasingly exploiting AI tools for cyberattacks, election officials have to double down on cybersecurity. Basic cybersecurity hygiene practices — such as enforcing multi-factor authentication and using strong passwords like passphrases — can help protect against the vast majority of attacks. Unfortunately, many election jurisdictions are still well behind in implementing these simple protocols. Moreover, in the runup to the 2020 election, the FBI identified numerous fake election websites imitating federal and state election sources using .com or .org domains. With generative AI increasingly able to produce realistic fake images and even web pages, .gov web addresses will become clear identifiers of authenticity and trust.

Finally, election officials should consider the responsible use of AI and other new technologies in their own offices. Just as AI offers malign actors tools to undermine elections, the technology offers election officials instruments to ease operational burdens or even help them better defend our elections. Election offices can turn to generative AI to help with time-consuming tasks like drafting emails to prospective poll workers or populating spreadsheets with assignments. But before election workers rush to embrace AI technology, jurisdictions must create guidelines for its use, such as requiring robust human oversight. Likewise, election offices could consider piloting content provenance technologies that companies like OpenAI, Meta, and Google are already adopting; these technologies can help voters discern whether content from election offices is authentic.

This year’s presidential race will no doubt be a pivotal election. The proliferation of accessible AI technology will both magnify and ease malign actors’ abilities to push false election narratives and breach electoral systems. It is vital that the United States fortify its elections against threats that AI exacerbates. This starts with ensuring that election workers on the frontlines of democracy are equipped to meet these challenges.
