Should States Regulate AI?

Rep. Jay Obernolte, R-CA, speaks at an AI conference on Capitol Hill with experts. (Photo provided)

WASHINGTON — As House Republicans voted Thursday to pass a 10-year moratorium on state regulation of AI, Rep. Jay Obernolte, R-CA, and AI experts said the measure is necessary to ensure U.S. dominance in the industry.

“We want to make sure that AI continues to be led by the United States of America, and we want to make sure that our economy and our society realizes the potential benefits of AI deployment,” Obernolte said.


As artificial intelligence promises to revolutionize many aspects of society, federal and state leaders are clashing over states' ability to regulate the technology on their own.

According to data from the National Conference of State Legislatures, legislation to regulate AI has already been introduced in 48 states. In 2024 alone, nearly 700 such bills were introduced, and 75 were adopted or enacted.

Forty state attorneys general have signed a letter to Congress urging it not to pass the measure.

“This bill does not propose any regulatory scheme to replace or supplement the laws enacted or currently under consideration by the states, leaving Americans entirely unprotected from the potential harms of AI,” the letter states.

However, Obernolte said leaving AI regulation up to individual states could create a patchwork of complex and confusing rules that makes it difficult for innovators to operate.


“We risk creating this very balkanized regulatory landscape of potentially 50 different state regulations going in 50 different, and in some cases wildly different directions,” Obernolte said during an event Thursday on Capitol Hill. “It would be a barrier to entry for everybody.”

The moratorium bill now awaits a vote in the Senate. It faces widespread opposition, mostly from Democrats but also from some Republicans, who argue that it would leave Americans without safeguards against AI.

“We need those protections, and until we pass something that is federally preemptive, we can't call for a moratorium on those things,” said Sen. Marsha Blackburn, R-TN, at a Senate hearing on Wednesday.

Obernolte addressed some of these concerns by pointing out that federal agencies already regulate AI in various ways. The Food and Drug Administration, for example, has already issued more than 1,000 authorizations for the use of AI in medical devices, he said.

Logan Kolas, director of tech policy at the American Consumer Institute, said part of the problem with states rushing to regulate AI without careful consideration is that the technology is so new that its real harms are not yet well understood.

“There's a lot of things we don't know, and that does require a bit of humility. As these provable harms come up, those are the things that we absolutely 100% should be addressing, but, trying to anticipate them, to think of the millions of possibilities of what could go wrong, is just unrealistic and not the way that we have done successful policy in the past,” said Kolas.

Perry Metzger, chairman of the board of Alliance for the Future, a nonprofit dedicated to easing fears of AI, echoed Kolas's point and said that regulating AI as a whole would be counterproductive because AI is merely a tool. The dangers, he said, lie in how people use the technology, not in the technology itself.

“We have a tradition [in this country] that I think is very important. That is, not blaming manufacturers for egregious and knowing misuses of their tools. We do not say that the Ford Motor Company is liable whenever someone uses an F-150 in a bank robbery. We have a feeling in our country that the people who choose to rob banks are responsible for that sort of misuse,” said Metzger.

Athan Yanos is a graduate student at Northwestern University's Medill School of Journalism in the Politics, Policy and Foreign Affairs specialization. He is a New York native. Prior to Medill, he graduated with an M.A. in Philosophy and Politics from the University of Edinburgh. He also hosts his own podcast dedicated to philosophy and international politics.

