
The U.S. election system was already wobbling, and now here comes AI


In some states, more than half of election officials have quit, writes Klug.

Brett Deering/Getty Images

Klug served in the House of Representatives from 1991 to 1999. He hosts the political podcast “Lost in the Middle: America’s Political Orphans.”

As we head into election season, the potential for misinformation is enormous and the ability of election officials to respond to artificial intelligence is limited.

The new technology arrives at a time when we still haven’t gotten our arms around social media threats.


“The ability to react at the pace that's being developed is almost impossible,” worries Idaho Secretary of State Phil McGrane, a Republican. “By design, our system is meant to be slow and methodical.”

While deep fakes get all the attention, the truth is that the threat arrives at a time when election administration itself is shaky. Election officials have borne the brunt of the mistrust. In some states, more than half of them have quit.

“I have a little PTSD, as do my coworkers,” said Nick Lima, who heads up elections in Cranston, R.I., and who — with some reservations — decided to keep the job he loves. “During election season, you know, you really feel the pressure, you feel your heartbeat increasing a bit.”

Today everyone who works in the election infrastructure faces unending scrutiny. If you thought it was easier in a red state, you are mistaken.

“Just the act of standing behind you watching you work just puts you on edge,” said McGrane. “Now you start second-guessing yourself, even if you know you're doing it right. The poll workers don't know about cybersecurity on voting equipment, but your poll watcher is getting asked these questions.”

Deep fakes are one level of concern, but Edward Perez, a former director of civic integrity at Twitter and now a board member at the OSET Institute (whose mission is to rebuild public confidence in our voting system), worries about the misuse of AI to disrupt the backroom of every American precinct.

“One of the most important things to understand about election administration is, it’s very, very process oriented. And there’s a tremendous number of layers,” he said. “Are we talking about voter registration? About the security of election administration? All of this technology is never deployed just in a vacuum.”

The fact that the election system is a patchwork of rules and regulations from 50 states, each with its own voting laws, adds to the complexity. The challenge is serious as election officials scramble in this election-denying climate to staff 132,000 polling places with 775,000 volunteers. The clock is ticking to deploy the necessary defenses against threats that aren’t fully understood.

From hanging chads to deep fake videos, American democracy wobbles, by Scott Klug



A case for optimism, risk-taking, and policy experimentation in the age of AI—and why pessimism threatens technological progress.

Getty Images, Andriy Onufriyenko

In Defense of AI Optimism

Society needs people to take risks. Entrepreneurs who bet on themselves create new jobs. Institutions that gamble with new processes find out how best to integrate advances into modern life. Regulators who accept potential backlash by launching policy experiments give us a chance to devise laws that are based on evidence, not fear.

The need for risk-taking is all the more important when society is presented with new technologies. When new tech arrives on the scene, defense of the status quo is the easier path, individually, institutionally, and societally. We are all predisposed to think that the calamities, ailments, and flaws we experience today, as bad as they may be, are preferable to the unknowns tied to tomorrow.


President Donald Trump with Secretary of State Marco Rubio, left, and Secretary of Defense Pete Hegseth

Tasos Katopodis/Getty Images

Trump Signs Defense Bill Prohibiting China-Based Engineers in Pentagon IT Work

President Donald Trump signed into law this month a measure that prohibits anyone based in China and other adversarial countries from accessing the Pentagon’s cloud computing systems.

The ban, which is tucked inside the $900 billion defense policy law, was enacted in response to a ProPublica investigation this year that exposed how Microsoft used China-based engineers to service the Defense Department’s computer systems for nearly a decade — a practice that left some of the country’s most sensitive data vulnerable to hacking from its leading cyber adversary.


AI-powered wellness tools promise care at work, but raise serious questions about consent, surveillance, and employee autonomy.

Getty Images, d3sign

Why Workplace Wellbeing AI Needs a New Ethics of Consent

Across the U.S. and globally, employers—including corporations, healthcare systems, universities, and nonprofits—are increasing investment in worker well-being. The global corporate wellness market reached $53.5 billion in sales in 2024, with North America leading adoption. Corporate wellness programs now use AI to monitor stress, track burnout risk, or recommend personalized interventions.

Vendors offering AI-enabled well-being platforms, chatbots, and stress-tracking tools are rapidly expanding. Chatbots such as Woebot and Wysa are increasingly integrated into workplace wellness programs.

Facebook launches voting resource tool

Meta Undermining Trust but Verify through Paid Links

Facebook is testing limits on shared external links, making them a paid feature through its Meta Verified program, which costs $14.99 per month.

This change solidifies that verification badges are now meaningless signifiers. Yet it wasn’t always so; the verified internet was built to support participation and trust. Beginning with Twitter’s verification program launched in 2009, a checkmark next to a username indicated that an account had been verified to represent a notable person or official account for a business. We could believe that an elected official or a brand name was who they said they were online. When Twitter Blue, and later X Premium, began to support paid blue checkmarks in November of 2022, the visual identification of verification became deceptive. Think Fake Eli Lilly accounts posting about free insulin and impersonation accounts for Elon Musk himself.

This week’s move by Meta echoes the changes at Twitter/X, despite significant evidence that those changes left information quality and user experience in a worse place than before. Despite what Facebook says, all a badge tells anyone now is that you paid.
