
Opinion

When Good Intentions Kill Cures: A Warning on AI Regulation

Kevin Frazier warns that one-size-fits-all AI laws risk stifling innovation. Learn the 7 “sins” policymakers must avoid to protect progress.


Imagine it is 2028. A start-up in St. Louis trains an AI model that can spot pancreatic cancer six months earlier than the best radiologists, buying patients precious time that medicine has never been able to give them. But the model never leaves the lab. Why? Because a well-intentioned, technology-neutral state statute drafted in 2025 forces every “automated decision system” to undergo a one-size-fits-all bias audit, repeated annually and performed only by outside experts who, three years in, still do not exist in sufficient numbers. While regulators scramble, the company’s venture funding dries up, the founders decamp to Singapore, and thousands of Americans are deprived of an innovation that would have saved their lives.

That grim vignette is fictional—so far. But it is the predictable destination of the seven “deadly sins” that already haunt our AI policy debates. Reactive politicians risk passing laws that flout the basic principles of sound policy for emerging technologies.


Policymakers rightly sense that AI is moving faster than the statutory machinery built for the age of the horse and buggy, not the supercomputer. The temptation is to act first and reflect later. Yet history tells us that bad tech laws ossify, spread, and strangle progress long after their drafters leave office. California’s flame-retardant fiasco—one state’s sofa rule turned nationwide toxin—is Exhibit A. Beginning in 1975, the state’s Bureau of Home Furnishings and Thermal Insulation required flame retardants in certain furniture. Companies across the country complied; it was cheaper to design their products to California’s standard than to segment their manufacturing processes. It turned out that the flame-retardant foam was highly toxic and highly likely to end up in the hands and mouths of children. It is unclear how many hundreds or thousands of kids suffered severe health issues as a result. Yet the law remained on the books for decades. If we repeat that regulatory playbook for AI, we will not merely ruin couches; we will foreclose entire classes of life-improving algorithms.

The best way to avoid missing out on a better future due to bad laws is to identify and call out bad policy habits as soon as possible. With that in mind, lawmakers should avoid all seven of these sins and instead adopt more flexible, evidence-based provisions.

  1. Mistaking “Tech-Neutral” for “Future-Proof.”
    Imagine a statute that lumps diagnostic AIs together with chatbot toys. Such a sweeping definition invites litigation and paralyzes AI development. Antidote: regulate by context, not by buzzword. Write rules tailored to specific use cases—health care, hiring, criminal justice—so innovators in low-risk domains are not collateral damage.
  2. Legislating Without an Expiration Date.
    The first draft of a law regulating emerging tech should never be the last word. Antidote: bake in sunset clauses that force lawmakers to revisit, revise, or repeal once real-world data rolls in.
  3. Skipping Retrospective Review.
    Passing a law is easy; measuring whether it works is hard. Antidote: mandate evidence audits—independent studies delivered to the legislature on a fixed schedule, coupled with automatic triggers for amendment when objectives are missed.
  4. Exporting One State’s Preferences to the Nation.
    When a single market as large as California or New York sets rules for all AI training data, the other 48 states lose their voice. Antidote: respect constitutional lanes. States should focus on local deployment (police facial recognition, school tutoring tools) and leave interstate questions—model training, cross-border data flows—to Congress.
  5. Building Regulatory Castles on Sand—No Capacity, No Credibility.
    Agencies cannot police AI with a dozen lawyers and programmers on the verge of retirement. Antidote: appropriate real money and real talent before—or at least alongside—new mandates. Offer fellowships, competitive salaries, and partnerships with land-grant universities to create a pipeline of public-interest AI experts.
  6. Letting the Usual Suspects Dominate the Microphone.
    If the only people in the room are professors, Beltway lobbyists, and Bay-Area founders, policy will skew toward their priors. Antidote: institutionalize broader participation—labor unions, rural hospitals, start-ups from the Midwest—through citizen advisory panels and notice-and-comment processes that actively seek out non-elite voices.
  7. Confusing Speed with Progress.
    The greatest danger is not under-regulation; it is freezing innovation before we understand its upside. Antidote: adopt a research-first posture. Fund testbeds, regulatory sandboxes, and pilot programs that let society learn in controlled environments before slapping on handcuffs.

Taken together, these antidotes form a simple governing philosophy: regulate like a scientist, not like a fortune-teller. Start narrow. Measure relentlessly. Revise or repeal when evidence demands it. And always, always weigh the cost of forgone breakthroughs—lives un-saved, jobs un-created, problems unsolved—against the speculative harms that dominate headlines.

The payoff? A legal environment where responsible innovators can move fast and fix things, where regulators are nimble rather than reactive, and where the public enjoys both the fruits of AI and meaningful protection from its risks. We need not choose between innovation and accountability. We only need the discipline to avoid the seven sins—and the imagination to envision what humanity loses if we fail.

The final word? If that cancer-spotting start-up withers in a tangle of red tape, the obituaries will never say, “Killed by a visionary legislature.” They will simply say, “Cure delayed.” Our charge as lawyers and policymakers is to ensure that sentence never gets written. By exorcising the seven deadly sins of AI policy now, we can safeguard both the public and the next generation of world-changing ideas. The clock is ticking—let’s legislate with humility, measure with rigor, and keep the door open to the innovations we cannot yet imagine.

Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and the author of the Appleseed AI Substack.
