Should States Regulate AI?

Rep. Jay Obernolte, R-CA, speaks at an AI conference on Capitol Hill with experts

Provided

WASHINGTON — As House Republicans voted Thursday to pass a 10-year moratorium on state regulation of AI, Rep. Jay Obernolte, R-CA, and AI experts said the measure is necessary to ensure U.S. dominance in the industry.

“We want to make sure that AI continues to be led by the United States of America, and we want to make sure that our economy and our society realizes the potential benefits of AI deployment,” Obernolte said.


As artificial intelligence stands to revolutionize many aspects of society, federal and state leaders are clashing over the states' ability to regulate it on their own.

According to data from the National Conference of State Legislatures, legislation to regulate AI has already been introduced in 48 states. In 2024 alone, nearly 700 such bills were introduced, and 75 were adopted or enacted.

Forty state attorneys general co-signed a letter to Congress urging lawmakers not to pass the measure.

“This bill does not propose any regulatory scheme to replace or supplement the laws enacted or currently under consideration by the states, leaving Americans entirely unprotected from the potential harms of AI,” the letter states.

However, Obernolte said leaving AI regulation up to the individual states could create a series of complex and confusing rules that make it difficult for innovators to operate.

“We risk creating this very balkanized regulatory landscape of potentially 50 different state regulations going in 50 different, and in some cases wildly different directions,” Obernolte said during an event Thursday on Capitol Hill. “It would be a barrier to entry for everybody.”

The moratorium bill now awaits a vote in the Senate. It faces widespread opposition, mostly from Democrats but also some Republicans, who argue that it leaves Americans without safeguards from AI.

“We need those protections, and until we pass something that is federally preemptive, we can't call for a moratorium on those things,” said Sen. Marsha Blackburn, R-TN, at a Senate hearing on Wednesday.

Obernolte countered some of these concerns by pointing out that federal agencies already regulate AI in various ways. For example, he said the Food and Drug Administration has already issued more than 1,000 permits for the use of AI in medical devices.

Logan Kolas, the director of technology policy at the American Consumer Institute, said part of the problem with states rushing to regulate AI without careful consideration is that the technology is so new that its real harms are not yet understood.

“There's a lot of things we don't know, and that does require a bit of humility. As these provable harms come up, those are the things that we absolutely 100% should be addressing, but, trying to anticipate them, to think of the millions of possibilities of what could go wrong, is just unrealistic and not the way that we have done successful policy in the past,” said Kolas.

Perry Metzger, chairman of the board of Alliance for the Future, a nonprofit dedicated to easing fears about AI, echoed Kolas's claims and said that regulating AI as a whole would be counterproductive because AI is merely a tool for accomplishing things. The dangers of the technology, he said, lie in how people use it, not in the technology itself.

“We have a tradition [in this country] that I think is very important. That is, not blaming manufacturers for egregious and knowing misuses of their tools. We do not say that the Ford Motor Company is liable whenever someone uses an F-150 in a bank robbery. We have a feeling in our country that the people who choose to rob banks are responsible for that sort of misuse,” said Metzger.

Athan Yanos is a graduate student at Northwestern Medill in the Politics, Policy and Foreign Affairs specialization. He is a New York native. Prior to Medill, he graduated with an M.A. in Philosophy and Politics from the University of Edinburgh. He also hosts his own podcast dedicated to philosophy and international politics.

