
Opinion

When Good Intentions Kill Cures: A Warning on AI Regulation

Kevin Frazier warns that one-size-fits-all AI laws risk stifling innovation. Learn the 7 “sins” policymakers must avoid to protect progress.

Getty Images, Aitor Diago

Imagine it is 2028. A start-up in St. Louis trains an AI model that can spot pancreatic cancer six months earlier than the best radiologists, buying patients precious time that medicine has never been able to give them. But the model never leaves the lab. Why? Because a well-intentioned, technology-neutral state statute drafted in 2025 forces every “automated decision system” to undergo a one-size-fits-all bias audit, to be repeated annually, and to be performed only by outside experts who—three years in—still do not exist in sufficient numbers. While regulators scramble, the company’s venture funding dries up, the founders decamp to Singapore, and thousands of Americans are deprived of an innovation that would have saved their lives.

That grim vignette is fictional—so far. But it is the predictable destination of the seven “deadly sins” that already haunt our AI policy debates. Reactive politicians are at risk of passing laws that fly in the face of what qualifies as good policy for emerging technologies.


Policymakers rightly sense that AI is moving faster than the statutory machinery built for the age of the horse and buggy, not the supercomputer. The temptation is to act first and reflect later. Yet history tells us that bad tech laws ossify, spread, and strangle progress long after their drafters leave office. California’s flame-retardant fiasco—one state’s sofa rule turned nationwide toxin—is Exhibit A. Beginning in 1975, the state’s Bureau of Home Furnishings and Thermal Insulation required flame retardants in certain furniture. Companies across the country complied; it was cheaper to design their products to California’s standards than to segment their manufacturing processes. It turned out that the flame-retardant foam was highly toxic and highly prone to end up in the hands and mouths of kids. It is unclear how many hundreds or thousands of children have suffered severe health issues as a result. Yet the law remained on the books for decades. If we repeat that regulatory playbook for AI, we will not merely ruin couches; we will foreclose entire classes of life-improving algorithms.

The best way to avoid missing out on a better future because of bad laws is to identify and call out bad policy habits as soon as possible. With that in mind, lawmakers should avoid all seven of these sins and instead adopt more flexible, evidence-based provisions.

  1. Mistaking “Tech-Neutral” for “Future-Proof.”
    Imagine a statute whose single definition of “automated decision system” lumps diagnostic AIs in with chatbot toys. A definition that broad invites litigation and paralyzes AI development. Antidote: regulate by context, not by buzzword. Write rules tailored to specific use cases—health care, hiring, criminal justice—so innovators in low-risk domains are not collateral damage.
  2. Legislating Without an Expiration Date.
    The first draft of a law regulating emerging tech should never be the last word. Antidote: bake in sunset clauses that force lawmakers to revisit, revise, or repeal once real-world data rolls in.
  3. Skipping Retrospective Review.
    Passing a law is easy; measuring whether it works is hard. Antidote: mandate evidence audits—independent studies delivered to the legislature on a fixed schedule, coupled with automatic triggers for amendment when objectives are missed.
  4. Exporting One State’s Preferences to the Nation.
    When a single market as large as California or New York sets rules for all AI training data, every other state loses its voice. Antidote: respect constitutional lanes. States should focus on local deployment (police facial recognition, school tutoring tools) and leave interstate questions—model training, cross-border data flows—to Congress.
  5. Building Regulatory Castles on Sand—No Capacity, No Credibility.
    Agencies cannot police AI with a dozen lawyers and programmers on the verge of retirement. Antidote: appropriate real money and real talent before—or at least alongside—new mandates. Offer fellowships, competitive salaries, and partnerships with land-grant universities to create a pipeline of public-interest AI experts.
  6. Letting the Usual Suspects Dominate the Microphone.
    If the only people in the room are professors, Beltway lobbyists, and Bay-Area founders, policy will skew toward their priors. Antidote: institutionalize broader participation—labor unions, rural hospitals, start-ups from the Midwest—through citizen advisory panels and notice-and-comment processes that actively seek out non-elite voices.
  7. Confusing Speed with Progress.
    The greatest danger is not under-regulation; it is freezing innovation before we understand its upside. Antidote: adopt a research-first posture. Fund testbeds, regulatory sandboxes, and pilot programs that let society learn in controlled environments before slapping on handcuffs.

Taken together, these antidotes form a simple governing philosophy: regulate like a scientist, not like a fortune-teller. Start narrow. Measure relentlessly. Revise or repeal when evidence demands it. And always, always weigh the cost of forgone breakthroughs—lives un-saved, jobs un-created, problems unsolved—against the speculative harms that dominate headlines.

The payoff? A legal environment where responsible innovators can move fast and fix things, where regulators are nimble rather than reactive, and where the public enjoys both the fruits of AI and meaningful protection from its risks. We need not choose between innovation and accountability. We only need the discipline to avoid the seven sins—and the imagination to envision what humanity loses if we fail.

The final word? If that cancer-spotting start-up in St. Louis withers in a tangle of red tape, the obituaries will never say, “Killed by a visionary legislature.” They will simply say, “Cure delayed.” Our charge as lawyers and policymakers is to ensure that sentence never gets written. By exorcising the seven deadly sins of AI policy now, we can safeguard both the public and the next generation of world-changing ideas. The clock is ticking—let’s legislate with humility, measure with rigor, and keep the door open to the innovations we cannot yet imagine.

Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and author of the Appleseed AI Substack.
