Slovakia’s election deepfakes show how AI could be a danger to U.S. elections
Election ballot boxes are prepared in Tomasova, Slovakia. (Getty Images)

Levine is the senior elections integrity fellow at the German Marshall Fund's Alliance for Securing Democracy, where he assesses vulnerabilities in electoral infrastructure, administration, and policies.

Savoia is a program assistant for the Alliance for Securing Democracy at GMF, where he serves as the lead author of ASD's weekly newsletter, the Securing Democracy Dispatch.


In the days leading up to Slovakia’s highly contested parliamentary election, deepfakes generated by artificial intelligence spread across social media. In one, posted by the far-right Republika party, Progressive Slovakia leader Michal Šimečka apparently “announced” plans to raise the price of beer if elected. In a second, more worrisome fake audio recording, Šimečka “discussed” how his party would rig the election, including by buying votes from the country’s Roma minority.

Although Šimečka never said those words, it is unclear how many of the millions who heard the recordings across Facebook, TikTok, and Telegram knew that, even though Slovak-language fact-checkers did their best to debunk the clips.

While it is difficult to assess whether, or to what extent, the deepfakes manipulated Slovak voters’ choices, it is clear that artificial intelligence is increasingly being used to target elections and could threaten future ones. To protect its elections, the United States must learn from Slovakia and bolster its ability to counter AI-generated disinformation threats before November 2024.

The threat of falsified information is not new for democracies, but artificial intelligence is likely to compound these existing problems, particularly in the near term. Authoritarian adversaries like Russia, China, and Iran will exploit different types of artificial intelligence to magnify their influence campaigns, as the Department of Homeland Security’s 2024 Homeland Threat Assessment recently warned. With this technology becoming widespread, a greater number of actors are able, and in some cases have already begun, to create falsified audio and video material with a potentially greater ability to mislead voters than textual disinformation.

In hyperpolarized societies like the United States, AI-generated disinformation may undermine voters’ ability to make informed judgments before elections. Deepfakes that purport to show corrupt dealings or election rigging behind closed doors, like those seen in Slovakia, could increase voter apathy and undermine faith in democracy, especially for a U.S. audience already awash in baseless claims of election fraud. And AI tools such as chatbots and deepfake images, audio, and video could make it harder for U.S. voters to identify and reject manipulative content, raising questions about the legitimacy of elections, especially those that are closely contested.

The United States has already acknowledged the risks posed by artificial intelligence. The U.S. Senate Rules Committee recently held a hearing on AI-related threats to elections, and bills have been introduced in both chambers to address disclosures in political ads. The White House published the Blueprint for an AI Bill of Rights to govern the technology’s development and use. At the state level, bills on the matter continue to be proposed and passed.

However, much more can still be done before the 2024 presidential election to safeguard the vote. First, the U.S. Congress should mandate that large social media platforms like Facebook, X (formerly Twitter), and TikTok require labels on all AI-generated content and remove posts that fail to disclose this. Even if AI-generated content were to spread, users would be warned of its origin via a clear marking. Congress could learn from the European Union’s approach on this front: the EU’s Digital Services Act, a comprehensive platform regulation, mandates similar labels for deepfakes.

Second, political campaigns should pledge to label all AI-generated content in ads and other official communications. On the platform front, Google and YouTube already require that ads using AI-generated voice and imagery be clearly labeled. Campaigns should also avoid using artificial intelligence to mimic a political opponent’s voice or likeness, as happened in Slovakia but also in the United States, Poland, and elsewhere, since doing so portrays opponents as saying words or performing actions they never said or did. In the longer term, the U.S. Congress should pass legislation requiring this sort of disclosure.

Lastly, journalists and newsrooms should develop clear guidelines for covering AI-generated content. This could be done, in part, by consulting AI experts: building sources among people who audit AI systems, talking with academics who study the underlying data, conversing with technologists at the companies that developed the tools, and meeting with regulators who see these tools through a different lens. Journalists could also examine the human-generated data scooped up to train these models and the choices people made to optimize them. Outlets should also seek to educate audiences on how to identify AI-generated content.

If Slovakia’s example is any indication, the United States and other democracies must take AI-generated disinformation seriously. An open information space is key to democracy, making it important to protect it from this sort of willful manipulation.
