What happens when voters cede their ballots to AI agents?

Robotic hand holding a ballot (Alfieri/Getty Images)

Frazier is an assistant professor at the Crump College of Law at St. Thomas University. Starting this summer, he will serve as a Tarbell fellow.

With the supposed goal of diversifying the electorate and achieving more representative results, State Y introduces “VoteGPT.” This artificial intelligence agent studies your social media profiles, your tax returns and your streaming accounts to develop a “CivicU,” an artificial clone that uses that information to serve as your democratic proxy.


When an election rolls around, State Y grants you the option of having your CivicU fill in the ballot on your behalf — there’s no need to study the issues, learn about the candidates or even pick up a pencil. Surely CivicU will vote in your best interest. In fact, it may even vote “better” than you would, based on its objective consideration of which candidates and ballot measures would improve your well-being.

In the first election with CivicU, there’s nearly 90 percent “voter” turnout with AI agents casting about a third of all votes. Soon after Election Day, a losing candidate for a seat in the House of Representatives challenges the constitutionality of votes cast by CivicU.

A robust legal debate ensues. State Y points to its constitutional authority to decide the manner of elections and notes the absence of any federal law banning AI agents in an electoral context. What’s more, State Y cites historical records showing that proxy voting — appointing someone to vote on your behalf — was a relatively common practice in colonial America. The candidate counters that surely the Founders could not have anticipated and would not have tolerated a vote cast without the active involvement of the voter in question. They also note that proxy voting, while permissible in some countries, such as the United Kingdom, has not been adopted by the United States. The developers of VoteGPT file an amicus brief arguing that a voter’s CivicU is indistinguishable from the voter — they are one and the same, so this is more akin to someone Googling how they should vote than to someone delegating their voting power.

Who wins and why?

This may seem like an implausible scenario, but the rapid development of AI as well as its use in electoral contexts suggests otherwise. In fact, current trends indicate that AI will only come to play a larger role in whether and how people participate in democracy. In short, it is a matter of when and not if certain partisan interests will leverage AI agents to bolster the odds of their electoral success.

Though proponents of AI agents might claim such efforts reflect democratic ideals such as a more representative electorate, excessive use of such agents (like allowing them to cast votes on behalf of users) may actually cause the opposite result — decreasing the legitimacy of our elections and sowing distrust in our institutions.

Before CivicU or something like it becomes a reality, we need to proactively clarify what limits the Constitution places on agentic voting. My own interpretation is that the Constitution prohibits the tallying of any vote not explicitly cast by a human. Though such a finding may seem obvious to some, it is important to stress that a human must always be “in the loop” when it comes to formal democratic activities. The development of any alternative norm — i.e., allowing AI agents to serve as our proxies in government affairs — promises to undermine our democratic autonomy and stability.


Read More


New Cybersecurity Rules for Healthcare? Understanding HHS’s HIPAA Proposal

Background

The Health Insurance Portability and Accountability Act (HIPAA) was enacted in 1996 to protect sensitive health information from being disclosed without patients’ consent. Under this act, a patient’s privacy is safeguarded through the enforcement of strict standards on managing, transmitting, and storing health information.

Two people looking at screens (Getty Images, Andriy Onufriyenko)

A case for optimism, risk-taking, and policy experimentation in the age of AI—and why pessimism threatens technological progress.

In Defense of AI Optimism

Society needs people to take risks. Entrepreneurs who bet on themselves create new jobs. Institutions that gamble with new processes find out how best to integrate advances into modern life. Regulators who accept potential backlash by launching policy experiments give us a chance to devise laws that are based on evidence, not fear.

The need for risk-taking is all the more important when society is presented with new technologies. When new tech arrives on the scene, defense of the status quo is the easier path, individually, institutionally, and societally. We are all predisposed to think that the calamities, ailments, and flaws we experience today, as bad as they may be, are preferable to the unknowns tied to tomorrow.

Trump Signs Defense Bill Prohibiting China-Based Engineers in Pentagon IT Work

President Donald Trump with Secretary of State Marco Rubio, left, and Secretary of Defense Pete Hegseth (Tasos Katopodis/Getty Images)


President Donald Trump signed into law this month a measure that prohibits anyone based in China or other adversarial countries from accessing the Pentagon’s cloud computing systems.

The ban, which is tucked inside the $900 billion defense policy law, was enacted in response to a ProPublica investigation this year that exposed how Microsoft used China-based engineers to service the Defense Department’s computer systems for nearly a decade — a practice that left some of the country’s most sensitive data vulnerable to hacking from its leading cyber adversary.

Someone using an AI chatbot on their phone (Getty Images, d3sign)

AI-powered wellness tools promise care at work, but raise serious questions about consent, surveillance, and employee autonomy.

Why Workplace Wellbeing AI Needs a New Ethics of Consent

Across the U.S. and globally, employers—including corporations, healthcare systems, universities, and nonprofits—are increasing investment in worker well-being. The global corporate wellness market reached $53.5 billion in sales in 2024, with North America leading adoption. Corporate wellness programs now use AI to monitor stress, track burnout risk, or recommend personalized interventions.

Vendors offering AI-enabled well-being platforms, chatbots, and stress-tracking tools are rapidly expanding. Chatbots such as Woebot and Wysa are increasingly integrated into workplace wellness programs.
