Should States Regulate AI?


Rep. Jay Obernolte, R-CA, speaks at an AI conference on Capitol Hill with experts

Provided

WASHINGTON — As House Republicans voted Thursday to pass a 10-year moratorium on state regulation of AI, Rep. Jay Obernolte, R-CA, and AI experts said the measure is necessary to ensure U.S. dominance in the industry.

“We want to make sure that AI continues to be led by the United States of America, and we want to make sure that our economy and our society realizes the potential benefits of AI deployment,” Obernolte said.


With artificial intelligence poised to revolutionize many aspects of society, federal and state leaders are clashing over states’ ability to regulate it on their own.

According to data from the National Conference of State Legislatures, legislation to regulate AI has already been introduced in 48 states. In 2024 alone, nearly 700 such bills were introduced, and 75 were adopted or enacted.

Forty state attorneys general have co-signed a letter to Congress urging it not to pass the measure.

“This bill does not propose any regulatory scheme to replace or supplement the laws enacted or currently under consideration by the states, leaving Americans entirely unprotected from the potential harms of AI,” the letter states.

However, Obernolte said leaving AI regulation to individual states could create a patchwork of complex and confusing rules that would make it difficult for innovators to operate.

“We risk creating this very balkanized regulatory landscape of potentially 50 different state regulations going in 50 different, and in some cases wildly different directions,” Obernolte said during an event Thursday on Capitol Hill. “It would be a barrier to entry for everybody.”

The moratorium bill now awaits a vote in the Senate. It faces widespread opposition, mostly from Democrats but also some Republicans, who argue that it leaves Americans without safeguards from AI.

“We need those protections, and until we pass something that is federally preemptive, we can't call for a moratorium on those things,” said Sen. Marsha Blackburn, R-TN, at a Senate hearing on Wednesday.

Obernolte addressed some of these concerns by pointing out that federal agencies already regulate AI in various ways. For example, he said the Food and Drug Administration has already issued more than 1,000 permits for the use of AI in medical devices.

Logan Kolas, director of tech policy at the American Consumer Institute, said part of the problem with states rushing to regulate AI without careful consideration is that the technology is so new that its real problems are not yet understood.

“There's a lot of things we don't know, and that does require a bit of humility. As these provable harms come up, those are the things that we absolutely 100% should be addressing, but, trying to anticipate them, to think of the millions of possibilities of what could go wrong, is just unrealistic and not the way that we have done successful policy in the past,” said Kolas.

Perry Metzger, chairman of the board of Alliance for the Future, a nonprofit dedicated to easing fears about AI, echoed Kolas’s argument, saying that regulating AI as a whole would be counterproductive because AI is merely a tool for accomplishing things. The dangers of the technology, he said, lie in how people use it, not in the technology itself.

“We have a tradition [in this country] that I think is very important. That is, not blaming manufacturers for egregious and knowing misuses of their tools. We do not say that the Ford Motor Company is liable whenever someone uses an F-150 in a bank robbery. We have a feeling in our country that the people who choose to rob banks are responsible for that sort of misuse,” said Metzger.

Athan Yanos is a graduate student at Northwestern Medill in the Politics, Policy and Foreign Affairs specialization. He is a New York native. Prior to Medill, he graduated with an M.A. in Philosophy and Politics from the University of Edinburgh. He also hosts his own podcast dedicated to philosophy and international politics.

To read more of Athan's work, click HERE.

The Fulcrum is committed to nurturing the next generation of journalists. Learn how by clicking HERE.
