Slovakia’s election deep fakes show how AI could be a danger to U.S. elections

Election ballot boxes are prepared in Tomasova, Slovakia. (Getty Images)

Levine is the senior elections integrity fellow at the German Marshall Fund's Alliance for Securing Democracy, where he assesses vulnerabilities in electoral infrastructure, administration, and policies.

Savoia is a program assistant for the Alliance for Securing Democracy at GMF, where he serves as the lead author of ASD's weekly newsletter, the Securing Democracy Dispatch.


In the days leading up to Slovakia’s highly contested parliamentary election, deepfakes generated by artificial intelligence spread across social media. In one, posted by the far-right Republika party, Progressive Slovakia leader Michal Šimečka apparently “announced” plans to raise the price of beer if elected. In a second, more worrisome fake audio recording, Šimečka “discussed” how his party would rig the election, including by buying votes from the country’s Roma minority.

Šimečka never said those words, but it is unclear how many of the millions of people who heard the recordings across Facebook, TikTok, and Telegram knew that, even though Slovak-language fact-checkers did their best to debunk the clips.

While it is difficult to assess whether, and to what extent, the deepfakes manipulated Slovak voters’ choices, it is clear that artificial intelligence is increasingly being used to target elections and could threaten future ones. To protect its elections, the United States must learn from Slovakia and bolster its ability to counter AI-generated disinformation threats before November 2024.

The threat of falsified information is not new for democracies, but artificial intelligence is likely only to compound these existing problems, particularly in the near term. Authoritarian adversaries like Russia, China, and Iran will exploit different types of artificial intelligence to magnify their influence campaigns, as the Department of Homeland Security’s 2024 Homeland Threat Assessment recently warned. As this technology becomes widespread, more actors are able to create, and in some cases have already begun creating, falsified audio and video material that may have a greater ability to mislead voters than textual disinformation.

In hyperpolarized societies like the United States, AI-generated disinformation may undermine voters’ ability to make informed judgments before elections. Deepfakes that purport to show corrupt dealings or election rigging behind closed doors, like those seen in Slovakia, could increase voter apathy and undermine faith in democracy, especially for a U.S. audience already awash in baseless claims of election fraud. Finally, different kinds of AI tools, such as chatbots and deepfake images, audio, and video, could make it harder for U.S. voters to reject content designed to be manipulative, which could raise questions about the legitimacy of elections, especially those that are closely contested.

The risks posed by artificial intelligence have already drawn acknowledgment in the United States. The U.S. Senate Rules Committee recently held a hearing on AI-related threats to elections, and bills have been introduced in both chambers to address disclosures in political ads. The White House published the Blueprint for an AI Bill of Rights to guide the technology’s development and use. At the state level, bills continue to be proposed and passed on the matter.

However, there is still much more that can be done before the 2024 presidential election to safeguard the vote. First, the U.S. Congress should mandate that large social media platforms like Facebook, X (formerly Twitter), and TikTok require labels on all AI-generated content and remove posts that fail to disclose this. Even if AI-generated content were to spread, users would be warned of its origin via a clear marking. Congress could learn from the European Union’s (EU) approach on this front: the EU’s Digital Services Act, a comprehensive platform regulation, mandates similar labels for deepfakes.

Second, political campaigns should pledge to label all AI-generated content in ads and other official communications. On the platform front, Google and YouTube already require that ads using AI-generated voice and imagery be clearly labeled. Campaigns should also avoid using artificial intelligence to mimic a political opponent’s voice or likeness, as happened in Slovakia as well as in the United States, Poland, and elsewhere, since doing so portrays the opponent as saying words or performing actions they never actually said or did. In the longer term, the U.S. Congress should pass legislation requiring this sort of disclosure.

Lastly, journalists and newsrooms should develop clear guidelines on how to cover AI-generated content. This could be done, in part, by consulting AI experts and cultivating sources among the people who audit AI systems, the academics who study the underlying data, the technologists at the companies that developed the tools, and the regulators who see these tools through a different lens. Journalists could also examine the human-generated data scooped up to train these models and the choices people made to optimize them. Outlets should also seek to educate their audiences about how to identify AI-generated content.

If Slovakia’s example is any indication, the United States and other democracies must take AI-generated disinformation seriously. An open information space is key to democracy, making it important to protect it from this sort of willful manipulation.
