The U.S. election system was already wobbling, and now here comes AI


In some states, more than half of election officials have quit, writes Klug.

Brett Deering/Getty Images

Klug served in the House of Representatives from 1991 to 1999. He hosts the political podcast “Lost in the Middle: America’s Political Orphans.”

As we head into election season, the potential for misinformation is enormous, and the ability of election officials to respond to artificial intelligence is limited.

The new technology arrives at a time when we still haven’t gotten our arms around social media threats.


“The ability to react at the pace that's being developed is almost impossible,” worries Idaho Secretary of State Phil McGrane, a Republican. “By design, our system is meant to be slow and methodical.”

While deep fakes get all the attention, the truth is that the threat arrives at a time when election administration itself is shaky. Election officials have caught the brunt of the mistrust. In some states, more than half of them have quit.

“I have a little PTSD, as do my coworkers,” said Nick Lima, who heads up elections in Cranston, R.I., and who — with some reservations — decided to keep the job he loves. “During election season, you know, you really feel the pressure, you feel your heartbeat increasing a bit.”

Today, everyone who works in election infrastructure faces unending scrutiny. If you thought it was easier in a red state, you are mistaken.

“Just the act of standing behind you watching you work just puts you on edge,” said McGrane. “Now you start second-guessing yourself, even if you know you're doing it right. The poll workers don't know about cyber security on voting equipment, but your poll watcher is getting asked these questions.”

Deep fakes are one level of concern, but Edward Perez, formerly director of civic integrity at Twitter and now a board member at the OSET Institute (whose mission is to rebuild public confidence in our voting system), worries about the misuse of AI to disrupt the backroom of every American precinct.

“One of the most important things to understand about election administration is, it’s very, very process oriented. And there’s a tremendous number of layers,” he said. “Are we talking about voter registration? About the security of election administration? All of this technology is never deployed just in a vacuum.”

The fact that the election system is a conglomeration of rules and regulations from 50 different states, each with its own voting laws, adds to the complexity. The challenge is serious as election officials scramble in this election-denying climate to staff 132,000 polling places with 775,000 volunteers. The clock is ticking to deploy the necessary defenses against threats that aren’t fully understood.

“From hanging chads to deep fake videos, American democracy wobbles,” by Scott Klug, on Substack.



The future of AI should be measured by its impact on ordinary Americans—not just tech executives and investors. Exploring AI inequality, labor concerns, and responsible innovation.

Getty Images, J Studios

The Kayla Test: Exploring How AI Impacts Everyday Americans

We’re failing the Kayla Test and running out of time to pass it. Whether AI goes “well” for the country is not a question anyone in SF or DC can answer. To assess whether AI is truly advancing the interests of Americans, AI stakeholders must engage with more than power users, tokenmaxxers, and Fortune 500 CEOs. A better evaluation is to talk to folks like Kayla, my Lyft driver in Morgantown, WV, and find out what they think about AI. It's a test I stumbled upon while traveling from an AI event at the West Virginia University College of Law to one at Stanford Law.

Kayla asked me what I do for a living. I told her that I’m a law professor focused on AI policy. Those were the last words I said for the remainder of the ride to the airport.


From “Patriot Games” to The Hunger Games, how spectacle, social media, and political culture risk normalizing violence and eroding empathy.

Getty Images, Westend61

The Capitol Is Counting on Us to Laugh

When the Trump administration announced the Patriot Games, many people laughed. Selecting two children per state for a nationally televised sports competition looked too much like Suzanne Collins’ Hunger Games to take seriously. But that instinct, to laugh rather than look closer, is one the Capitol is counting on. It has always been easier to normalize violence when it arrives dressed as entertainment or patriotism.

Here’s what I mean: The Hunger Games starts with the reaping, the moment when a Capitol official selects two children, one boy and one girl, to fight to the death against tributes from every other district. The games were created as an annual reminder of a failed rebellion, to remind the districts that dissent has consequences. At first, many Capitol residents saw the games as a just punishment. But sentiments shifted as the spectacle grew—when citizens could bet on winners, when a death march transformed into a beauty pageant, when murder became a pathway to celebrity.


Anthropic’s Mythos AI raises alarms about surveillance, deepfakes, and democracy. Why urgent AI regulation is needed as U.S. policy struggles to keep pace.

Getty Images, Douglas Rissing

How the Latest in AI Threatens Democracy

On April 24, America got a wake-up call from Anthropic, one of the nation’s leading artificial intelligence companies. It announced a new AI tool, called Mythos, that can identify flaws in computer networks and software systems that, as Politico puts it, “even the brightest human minds have been unable to identify.”

A machine smarter than the “brightest human minds” sounds like a line from a dystopian science fiction movie. And if that weren’t scary enough, we now have a government populated by people who seem oblivious to the risks AI poses to democracy and humanity itself.

Keep ReadingShow less

Who’s Responsible When AI Causes Harm?: Unpacking the Federal AI Liability Framework Debate

This nonpartisan policy brief, written by an ACE fellow, is republished by The Fulcrum as part of our partnership with the Alliance for Civic Engagement and our NextGen initiative — elevating student voices, strengthening civic education, and helping readers better understand democracy and public policy.

Key takeaways

  • The U.S. has no national AI liability law. Instead, a patchwork of state laws has emerged, leaving legal protections dependent on where a person lives.
  • It’s often unclear who is legally responsible when AI causes harm. This gap leaves many people with no clear path to seek redress.
  • In March 2026, the White House and Congress introduced major proposals to establish a federal standard, but there is significant disagreement about whether that standard should prioritize protecting innovation or protecting people harmed by AI systems.

Background: A Patchwork of State Laws

Without a national AI law, states have been filling in the gaps on their own. The result is an uneven landscape where a person’s legal protections depend entirely on which state they live in.
