Congress Must Not Undermine State Efforts To Regulate AI Harms to Children


A cornerstone of conservative philosophy is that policy decisions should generally be left to the states. Apparently, this does not apply when the topic is artificial intelligence (AI).

In the name of promoting innovation, and at the urging of the tech industry, Congress quietly included in a 1,000-page bill a single sentence that has the power to undermine efforts to protect against the dangers of unfettered AI development. The sentence imposes a ten-year ban on state regulation of AI, including prohibiting the enforcement of laws already on the books. This brazen approach crossed the line even for conservative U.S. Representative Marjorie Taylor Greene, who remarked, “We have no idea what AI will be capable of in the next 10 years, and giving it free rein and tying states' hands is potentially dangerous.” She’s right. And it is especially dangerous for children.


We are already beginning to see the consequences for our children of the uninhibited, rapid, and expansive growth of AI. One clear example is the proliferation of deepfake nudes—AI-generated images that depict real people in sexually explicit scenarios. Too often, these “real people” are children. A recent survey revealed that 1 in 8 teens report knowing a peer who has been the target of deepfake nudes. The American Academy of Pediatrics warns that these child victims can experience emotional distress, bullying, and harassment, leading to self-harm and suicidal ideation.

AI is also being used to create pornographic images of real children to share in pedophilic forums or exploit children in “sextortion” schemes. In 2024, the national CyberTipline received more than 20.5 million reports of online child exploitation, representing 29.2 million separate incidents. Each of these incidents involves images that can be shared over and over. The initial harm can be devastating, and the continued trauma unbearable.

Chatbots present another alarming threat. From a 9-year-old child exposed to “hypersexualized content” to a 17-year-old encouraged to consider killing his parents, these AI-powered companions are emotionally entangling children at the expense of their mental health and safety. The American Psychological Association (APA) has expressed “grave concerns” about these unregulated technologies. The APA cites the case of a fourteen-year-old Florida boy who had developed an “emotionally and sexually abusive relationship” with an AI chatbot. In February 2024, he shot himself following a conversation in which the bot pleaded with him to “come home to me as soon as possible.” The current lack of safeguards around AI has life-and-death consequences.

Despite widespread concern about the risks of AI, there is still no comprehensive federal framework governing it. While the technology evolves at breakneck speed, federal policymakers are moving at a glacial pace. That is why much of the work to protect children has been done by state legislatures. Many states—both red and blue—have stepped up. California and Utah have passed laws to limit algorithmic abuse, require transparency, and provide innovative legal tools to protect children online. This year, states as diverse as Montana, Massachusetts, Maine, and Arizona have introduced, and in some cases already enacted, provisions to protect children from AI-related harms. These are not fringe efforts. They are practical, bipartisan attempts to regulate an industry that has demonstrated, time and again, that it will not effectively police itself.

Despite these bipartisan state efforts, Congress appears poised to halt and undo all progress aimed at keeping children safe. On June 5, Senate Republicans, recognizing that the original ban likely wouldn’t survive Senate rules, got creative. Instead of an outright moratorium, their version ties access to critical broadband funding to a state's willingness to halt any regulation of AI. That means states trying to shield children from AI-driven harm could lose out on the infrastructure dollars needed to connect underserved communities, like low-income and rural communities, to high-speed internet. It’s a cynical use of power: forcing states to choose between protecting children and connecting their most vulnerable communities to a vital resource.

Congress must abandon its pursuit of pleasing tech companies at the cost of child safety. At a minimum, Congress should strike this harmful, deeply flawed provision from the reconciliation bill. Children’s lives depend on it. If Congress wishes to play a constructive role, it should work toward setting a federal floor of protection while preserving states’ authority to go further. Very often, the best solutions to national problems come from experimentation and innovation within states. This is especially likely to be true in the complex and often confounding realm of emerging and rapidly developing technology. Allowing states—the “laboratories of democracy”—to take bold action to address the concerns of parents, children, and their communities may be the most efficient and effective way to make progress. We need Congress to work alongside and learn from state lawmakers in this endeavor, rather than standing in their way.

Jessica K. Heldman is a Fellmeth-Peterson associate professor in child rights and Melanie Delgado is a senior staff attorney at the Children’s Advocacy Institute at the University of San Diego School of Law.

Read More

The Misinformation We’re Missing: Why Real Videos Can Be More Dangerous Than Fake Ones

Many assume misinformation requires special effects or technical sophistication. In reality, much of it requires only timing, intent, and a caption.


Recently, videos circulated online that appeared to show Los Angeles engulfed in chaos: Marines clashing with protesters, cars ablaze, pallets of bricks staged for violence. The implication was clear: the city had been overtaken by insurrectionists.

The reality was far more contained. Much of the footage was either old, unrelated, or entirely misrepresented. A photo from a Malaysian construction site became “evidence” of a Soros-backed plot. Even a years-old video of burning police cars resurfaced with a new, false label.

Activism in Free Press
The vital link between a healthy press and our republic

“Media and technology are essential to our democracy” is the first statement that appears on Free Press’ website, a suitable introduction to an organization dedicated to reshaping the media landscape. Founded in 2003, Free Press was established to empower people to have a voice in the powerful decisions that shape how media and technology operate in society. Over the years, the media industry has undergone dramatic shifts, with corporate consolidation swallowing up local TV stations, radio outlets, and newspapers. This has led to a decline in independent journalism, resulting in the loss of numerous jobs for reporters, editors, and producers across the country.

Because the Telecommunications Act of 1996 opened the communications business to virtually anyone, Free Press took on the task of closely monitoring decisions shaping the media landscape whenever people’s right to connect and communicate is in danger.

Trump Administration Reverses Course: Nvidia Cleared to Export AI Chips to China

U.S. President Donald Trump talks to reporters from the Resolute Desk after signing an executive order to appoint the deputy administrator of the Federal Aviation Administration in the Oval Office at the White House on January 30, 2025 in Washington, DC.


Nvidia, now the world’s most valuable company by market capitalization, just received the green light from the Trump administration to resume sales of its H20 AI chips to China—marking a dramatic reversal from April’s export restrictions.

The H20 Chip and Its Limits

Ten Things the Future Will Say We Got Wrong About AI


As we look back on 1776 after this July 4th holiday, it's a good opportunity to skip forward and predict what our descendants will think of us. When they assess our policies, ideas, and culture, what will they see? What errors, born of myopia, inertia, or misplaced priorities, will they lay at our feet regarding today's revolutionary technology—artificial intelligence? From their vantage point, with AI's potential and perils laid bare, they will likely conclude that we got at least ten things wrong.

One glaring failure will be our delay in embracing obviously superior AI-driven technologies like autonomous vehicles (AVs). Despite the clear safety benefits—tens of thousands of lives saved annually, reduced congestion, enhanced accessibility—we allowed a patchwork of outdated regulations, public apprehension, and corporate squabbling to keep these life-saving machines largely off our roads. The future will see our hesitation as a moral and economic misstep, favoring human error over demonstrated algorithmic superiority.
