Should States Regulate AI?

Rep. Jay Obernolte, R-CA, speaks at an AI conference on Capitol Hill with experts

Provided

WASHINGTON — As House Republicans voted Thursday to pass a 10-year moratorium on state regulation of AI, Rep. Jay Obernolte, R-CA, and AI experts said the measure is necessary to ensure U.S. dominance in the industry.

“We want to make sure that AI continues to be led by the United States of America, and we want to make sure that our economy and our society realizes the potential benefits of AI deployment,” Obernolte said.


With artificial intelligence poised to reshape many aspects of society, federal and state leaders are clashing over whether states should regulate the technology on their own.

According to data from the National Conference of State Legislatures, legislation to regulate AI has been introduced in 48 states. In 2024 alone, nearly 700 such bills were introduced, and 75 were adopted or enacted.

Forty state attorneys general co-signed a letter to Congress urging lawmakers not to pass the measure.

“This bill does not propose any regulatory scheme to replace or supplement the laws enacted or currently under consideration by the states, leaving Americans entirely unprotected from the potential harms of AI,” the letter states.

However, Obernolte said leaving AI regulation up to the individual states could create a series of complex and confusing rules that make it difficult for innovators to operate.

“We risk creating this very balkanized regulatory landscape of potentially 50 different state regulations going in 50 different, and in some cases wildly different directions,” Obernolte said during an event Thursday on Capitol Hill. “It would be a barrier to entry for everybody.”

The moratorium bill now awaits a vote in the Senate. It faces widespread opposition, mostly from Democrats but also some Republicans, who argue that it leaves Americans without safeguards from AI.

“We need those protections, and until we pass something that is federally preemptive, we can't call for a moratorium on those things,” said Sen. Marsha Blackburn, R-TN, at a Senate hearing on Wednesday.

Obernolte addressed some of these concerns by pointing out that federal agencies already regulate AI in various ways. For example, he said the Food and Drug Administration has already authorized more than 1,000 AI-enabled medical devices.

Logan Kolas, the director of technology policy at the American Consumer Institute, said part of the problem with states rushing to regulate AI is that the technology is so new that its real harms are not yet understood.

“There's a lot of things we don't know, and that does require a bit of humility. As these provable harms come up, those are the things that we absolutely 100% should be addressing, but, trying to anticipate them, to think of the millions of possibilities of what could go wrong, is just unrealistic and not the way that we have done successful policy in the past,” said Kolas.

Perry Metzger, the chairman of the board of Alliance for the Future, a nonprofit dedicated to easing fears of AI, echoed Kolas's argument and said that regulating AI as a whole would be counterproductive because AI is merely a tool. The dangers of the technology, he said, lie in how people use it, not in the technology itself.

“We have a tradition [in this country] that I think is very important. That is, not blaming manufacturers for egregious and knowing misuses of their tools. We do not say that the Ford Motor Company is liable whenever someone uses an F-150 in a bank robbery. We have a feeling in our country that the people who choose to rob banks are responsible for that sort of misuse,” said Metzger.

Athan Yanos is a graduate student at Northwestern Medill in the Politics, Policy and Foreign Affairs specialization. He is a New York native. Prior to Medill, he graduated with an M.A. in Philosophy and Politics from the University of Edinburgh. He also hosts his own podcast dedicated to philosophy and international politics.


