
Opinion

When Good Intentions Kill Cures: A Warning on AI Regulation

Kevin Frazier warns that one-size-fits-all AI laws risk stifling innovation. Learn the 7 “sins” policymakers must avoid to protect progress.


Imagine it is 2028. A start-up in St. Louis trains an AI model that can spot pancreatic cancer six months earlier than the best radiologists, buying patients precious time that medicine has never been able to give them. But the model never leaves the lab. Why? Because a well-intentioned, technology-neutral state statute drafted in 2025 forces every “automated decision system” to undergo a one-size-fits-all bias audit, to be repeated annually, and to be performed only by outside experts who—three years in—still do not exist in sufficient numbers. While regulators scramble, the company’s venture funding dries up, the founders decamp to Singapore, and thousands of Americans are deprived of an innovation that would have saved their lives.

That grim vignette is fictional—so far. But it is the predictable destination of the seven “deadly sins” that already haunt our AI policy debates. Reactive politicians risk passing laws that fly in the face of what qualifies as good policy for emerging technologies.


Policymakers rightly sense that AI is moving faster than statutory machinery built for the age of the horse and buggy, not the supercomputer. The temptation is to act first and reflect later. Yet history tells us that bad tech laws ossify, spread, and strangle progress long after their drafters leave office. California’s flame-retardant fiasco—one state’s sofa rule turned nationwide toxin—is Exhibit A. Beginning in 1975, the state’s Bureau of Home Furnishings and Thermal Insulation required flame retardants in certain furniture. Companies across the country complied; it was cheaper to design their products to California’s standards than to segment their manufacturing processes. It turned out that the flame-retardant foam was highly toxic and highly likely to end up in the hands and mouths of children. It is unclear how many hundreds or thousands of kids suffered severe health problems as a result. Yet the law remained on the books for decades. If we repeat that regulatory playbook for AI, we will not merely ruin couches; we will foreclose entire classes of life-improving algorithms.

The best way to avoid missing out on a better future because of bad laws is to identify and call out bad policy habits as soon as possible. With that in mind, lawmakers should avoid all seven of these sins and instead take care to adopt more flexible, evidence-based provisions.

  1. Mistaking “Tech-Neutral” for “Future-Proof.”
    Imagine a statute that lumps diagnostic AIs with chatbot toys. This broad definition will invite litigation and paralyze AI development. Antidote: regulate by context, not by buzzword. Write rules tailored to specific use cases—health care, hiring, criminal justice—so innovators in low-risk domains are not collateral damage.
  2. Legislating Without an Expiration Date.
    The first draft of a law regulating emerging tech should never be the last word. Antidote: bake in sunset clauses that force lawmakers to revisit, revise, or repeal once real-world data rolls in.
  3. Skipping Retrospective Review.
    Passing a law is easy; measuring whether it works is hard. Antidote: mandate evidence audits—independent studies delivered to the legislature on a fixed schedule, coupled with automatic triggers for amendment when objectives are missed.
  4. Exporting One State’s Preferences to the Nation.
    When a single market as large as California or New York sets rules for all AI training data, the other 49 states lose their voice. Antidote: respect constitutional lanes. States should focus on local deployment (police facial recognition, school tutoring tools) and leave interstate questions—model training, cross-border data flows—to Congress.
  5. Building Regulatory Castles on Sand—No Capacity, No Credibility.
    Agencies cannot police AI with a dozen lawyers and programmers on the verge of retirement. Antidote: appropriate real money and real talent before—or at least alongside—new mandates. Offer fellowships, competitive salaries, and partnerships with land-grant universities to create a pipeline of public-interest AI experts.
  6. Letting the Usual Suspects Dominate the Microphone.
    If the only people in the room are professors, Beltway lobbyists, and Bay-Area founders, policy will skew toward their priors. Antidote: institutionalize broader participation—labor unions, rural hospitals, start-ups from the Midwest—through citizen advisory panels and notice-and-comment processes that actively seek out non-elite voices.
  7. Confusing Speed with Progress.
    The greatest danger is not under-regulation; it is freezing innovation before we understand its upside. Antidote: adopt a research-first posture. Fund testbeds, regulatory sandboxes, and pilot programs that let society learn in controlled environments before slapping on handcuffs.

Taken together, these antidotes form a simple governing philosophy: regulate like a scientist, not like a fortune-teller. Start narrow. Measure relentlessly. Revise or repeal when evidence demands it. And always, always weigh the cost of forgone breakthroughs—lives un-saved, jobs un-created, problems unsolved—against the speculative harms that dominate headlines.

The payoff? A legal environment where responsible innovators can move fast and fix things, where regulators are nimble rather than reactive, and where the public enjoys both the fruits of AI and meaningful protection from its risks. We need not choose between innovation and accountability. We only need the discipline to avoid the seven sins—and the imagination to envision what humanity loses if we fail.

The final word? If that cancer-spotting start-up in St. Louis withers in a tangle of red tape, the obituaries will never say, “Killed by a visionary legislature.” They will simply say, “Cure delayed.” Our charge as lawyers and policymakers is to ensure that sentence never gets written. By exorcising the seven deadly sins of AI policy now, we can safeguard both the public and the next generation of world-changing ideas. The clock is ticking—let’s legislate with humility, measure with rigor, and keep the door open to the innovations we cannot yet imagine.

Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and the author of the Appleseed AI Substack.
