Elections workers must wake up to the risks posed by AI

Bill Oxford/Getty Images

Sikora is a research assistant with the German Marshall Fund's Alliance for Securing Democracy. Gorman is the alliance's senior fellow and head of the technology and geopolitics team. Levine is the alliance's senior elections integrity fellow.

Days before New Hampshire’s presidential primary, up to 25,000 Granite State voters received a mysterious call from “President Joe Biden.” He urged Democrats not to vote in the primary because it “only enables the Republicans in their quest to elect Donald Trump.” But Biden never said this. The recording was a digital fabrication generated by artificial intelligence.

This robocall incident is the highest-profile example of how AI could be weaponized to both disrupt and undermine this year’s presidential election, but it is merely a glimpse of the challenges election officials will confront. Election workers must be well-equipped to counter AI threats to ensure the integrity of this year’s election — and our organization, the Alliance for Securing Democracy at the German Marshall Fund of the United States, published a handbook to help them understand and defend against threats supercharged by AI.


Generative AI tools allow users to clone audio of anyone’s voice (saying nearly anything), produce photo-realistic images of anybody (doing nearly anything), and automate human-like writing without spelling errors or grammatical mistakes (in nearly any language). The widespread accessibility of these tools offers malign actors at home and abroad a new, low-cost weapon to launch sophisticated phishing attacks targeting election workers or to flood social media platforms with false or manipulated information that looks real. These tactics do not even need to be successful to sow discord; the mere perception that an attack occurred could cause widespread damage to Americans’ trust in the election.

These advancements come at a time when trust in U.S. elections is already alarmingly low. Fewer than half of Americans express substantial confidence that the votes in the 2024 presidential election will be counted accurately, with particular distrust among GOP voters. On top of that, election workers continue to face harassment, high turnover, and onerous working environments, often stemming from lies about election subterfuge. In an age of AI-driven manipulated information, the ability to readily fabricate images, audio, and video to support election denialist narratives risks lending credence to — or at least creating further confusion around — such claims and inspiring real-world action that undermines elections.

What should election workers do to prepare for these threats? First, election officials need to incorporate AI risks into their election training and planning. Given election hazards old and new that AI can enable, it is necessary that election workers know the basics of what they are up against, can communicate to voters about AI challenges and are well-resourced to educate themselves further on these threats. To this end, election offices should consider forming a cybersecurity working group with AI expertise, adding AI-specific education to election worker training, and drafting talking points on AI. Likewise, simulating AI threats in mock elections or tabletop exercises could be invaluable in helping election officials plan responses to such threats.

Second, with hackers increasingly exploiting AI tools for cyberattacks, election officials have to double down on cybersecurity. Basic cybersecurity hygiene practices — such as enforcing multi-factor authentication or using strong passwords like passphrases — can help protect against the vast majority of attacks. Unfortunately, many election jurisdictions are still well behind in implementing these simple protocols. Moreover, in the run-up to the 2020 election, the FBI identified numerous fake election websites imitating federal and state election sources using .com or .org domains. With generative AI increasingly able to produce realistic fake images and even web pages, .gov web addresses will become clear identifiers of authenticity and trust.

Finally, election officials should consider leveraging the responsible use of AI and other new technologies in their offices. Just as AI offers malign actors tools to undermine elections, the technology offers election officials instruments to ease operational burdens or even help them better defend our elections. Election offices can turn to generative AI to help with time-consuming tasks like drafting emails to prospective poll workers or populating spreadsheets with assignments. But before election workers rush to embrace AI technology, jurisdictions must create guidelines for their use, such as requiring robust human oversight. Likewise, election offices could consider piloting content provenance technologies that companies like OpenAI, Meta, and Google are already adopting; these technologies can help voters discern whether content from election offices is authentic.

This year’s presidential race will no doubt be a pivotal election. The proliferation of accessible AI technology will both magnify and ease malign actors’ abilities to push false election narratives and breach electoral systems. It is vital that the United States fortify its elections against threats that AI exacerbates. This starts with ensuring that election workers on the frontlines of democracy are equipped to meet these challenges.

Read More

The American Schism in 2025: The New Cultural Revolution


Getty Images, P_Wei


A common point of bewilderment today among many of Trump’s “establishment” critics is the all too tepid response to Trump’s increasingly brazen shattering of democratic norms. True, he started this during his first term, but in his second, Trump seems to relish the weaponization of his presidency to go after his enemies and to brandish his corrupt dealings, all under the Trump banner (e.g., cryptocurrency ventures, Mideast business dealings, the Boeing 747 gift from Qatar). Not only does Trump conduct himself with impunity, but Fox News and other mainstream media outlets barely cover these actions at all. (And when left-leaning media do, the interest seems to wane quickly.)

Here may be the source of the puzzlement: the left intelligentsia continues to view and characterize MAGA as a political movement, without grasping its transcendence into a new dominant cultural order. MAGA rose as a counter-establishment partisan drive during Trump’s 2016 campaign and subsequent first administration; however, by the 2024 election, it became evident that MAGA was but the eye of a full-fledged cultural shift, in some ways akin to Mao’s Cultural Revolution.

Should States Regulate AI?


WASHINGTON — As House Republicans voted Thursday to pass a 10-year moratorium on AI regulation by states, Rep. Jay Obernolte, R-CA, and AI experts said the measure would be necessary to ensure U.S. dominance in the industry.

“We want to make sure that AI continues to be led by the United States of America, and we want to make sure that our economy and our society realizes the potential benefits of AI deployment,” Obernolte said.

The AI Race We Need: For a Better Future, Not Against Another Nation


Getty Images, J Studios


The AI race that warrants the lion’s share of our attention and resources is not the one with China. Both superpowers should stop hurriedly pursuing AI advances for the sake of “beating” the other. We’ve seen such a race before. Both participants lose. The real race is against an unacceptable status quo: declining lifespans, increasing income inequality, intensifying climate chaos, and destabilizing politics. That status quo will drag on, absent the sorts of drastic improvements AI can bring about. AI may not solve those problems, but it may accelerate our ability to improve collective well-being. That’s a race worth winning.

Geopolitical races have long kept the U.S. from realizing a better future sooner. The U.S. squandered scarce resources and diverted talented staff to close the alleged missile gap with the USSR. President Dwight D. Eisenhower rightfully noted, “Every gun that is made, every warship launched, every rocket fired signifies, in the final sense, a theft from those who hunger and are not fed, those who are cold and are not clothed.” He realized that every race comes at an immense cost. In this case, the country was “spending the sweat of its laborers, the genius of its scientists, the hopes of its children.”


Getty Images, MTStock Studio

AI Is Here. Our Laws Are Stuck in the Past.

Artificial intelligence (AI) promises a future once confined to science fiction: personalized medicine accounting for your specific condition, accelerated scientific discovery addressing the most difficult challenges, and reimagined public education designed around AI tutors suited to each student's learning style. We see glimpses of this potential on a daily basis. Yet, as AI capabilities surge forward at exponential speed, the laws and regulations meant to guide them remain anchored in the twentieth century (if not the nineteenth or eighteenth!). This isn't just inefficient; it's dangerously reckless.

For too long, our approach to governing new technologies, including AI, has been one of cautious incrementalism—trying to fit revolutionary tools into outdated frameworks. We debate how century-old privacy torts apply to vast AI training datasets, how liability rules designed for factory machines might cover autonomous systems, or how copyright law conceived for human authors handles AI-generated creations. We tinker around the edges, applying digital patches to analog laws.
