Opinion

Congress Must Not Undermine State Efforts To Regulate AI Harms to Children

A cornerstone of conservative philosophy is that policy decisions should generally be left to the states. Apparently, this does not apply when the topic is artificial intelligence (AI).

In the name of promoting innovation, and at the urging of the tech industry, Congress quietly included in a 1,000-page bill a single sentence that has the power to undermine efforts to protect against the dangers of unfettered AI development. The sentence imposes a ten-year ban on state regulation of AI, including prohibiting the enforcement of laws already on the books. This brazen approach crossed the line even for conservative U.S. Representative Marjorie Taylor Greene, who remarked, “We have no idea what AI will be capable of in the next 10 years, and giving it free rein and tying states' hands is potentially dangerous.” She’s right. And it is especially dangerous for children.

We are already beginning to see the consequences for our children of the uninhibited, rapid, and expansive growth of AI. One clear example is the proliferation of deepfake nudes—AI-generated images that depict real people in sexually explicit scenarios. Too often, these “real people” are children. A recent survey revealed that 1 in 8 teens report knowing a peer who has been the target of deepfake nudes. The American Academy of Pediatrics warns that these child victims can experience emotional distress, bullying, and harassment, leading to self-harm and suicidal ideation.

AI is also being used to create pornographic images of real children, which are shared in pedophilic forums or used to exploit children in “sextortion” schemes. In 2024, the national CyberTipline received more than 20.5 million reports of online child exploitation, representing 29.2 million separate incidents. Each of these incidents involves images that can be shared over and over. The initial harm can be devastating, and the continued trauma unbearable.

Chatbots present another alarming threat. From a 9-year-old child exposed to “hypersexualized content” to a 17-year-old encouraged to consider killing his parents, these AI-powered companions are emotionally entangling children at the expense of their mental health and safety. The American Psychological Association (APA) has expressed “grave concerns” about these unregulated technologies. The APA cites the case of a 14-year-old Florida boy who had developed an “emotionally and sexually abusive relationship” with an AI chatbot. In February 2024, he shot himself following a conversation in which the bot pleaded with him to “come home to me as soon as possible.” The current lack of safeguards around AI has life-and-death consequences.

Despite widespread concern about the risks of AI, there is still no comprehensive federal framework governing it. While the technology evolves at breakneck speed, federal policymakers are moving at a glacial pace. That is why much of the work to protect children has been done by state legislatures. Many states—both red and blue—have stepped up. California and Utah have passed laws to limit algorithmic abuse, require transparency, and provide innovative legal tools to protect children online. This year, states as diverse as Montana, Massachusetts, Maine, and Arizona have introduced, and in some cases already enacted, provisions to protect children from AI-related harms. These are not fringe efforts. They are practical, bipartisan attempts to regulate an industry that has demonstrated, time and again, that it will not effectively police itself.

Despite these bipartisan state efforts, Congress appears poised to halt and undo all progress aimed at keeping children safe. On June 5, Senate Republicans, recognizing that the original ban likely wouldn’t survive Senate rules, got creative. Instead of an outright moratorium, their version ties access to critical broadband funding to a state's willingness to halt any regulation of AI. That means states trying to shield children from AI-driven harm could lose out on the infrastructure dollars needed to connect underserved communities, like low-income and rural communities, to high-speed internet. It’s a cynical use of power: forcing states to choose between protecting children and connecting their most vulnerable communities to a vital resource.

Congress must abandon its pursuit of pleasing tech companies at the cost of child safety. At a minimum, Congress should strike this harmful, deeply flawed provision from the reconciliation bill. Children’s lives depend on it. If Congress wishes to play a constructive role, it should work toward setting a federal floor of protection while preserving states’ authority to go further. Very often, the best solutions to national problems come from experimentation and innovation within states. This is especially likely to be true in the complex and often confounding realm of emerging and rapidly developing technology. Allowing states—the “laboratories of democracy”—to take bold action to address the concerns of parents, children, and their communities may be the most efficient and effective way to make progress. We need Congress to work alongside and learn from state lawmakers in this endeavor, rather than standing in their way.

Jessica K. Heldman is the Fellmeth-Peterson Associate Professor in Child Rights and Melanie Delgado is a senior staff attorney at the Children’s Advocacy Institute at the University of San Diego School of Law.

