Imagine it is 2028. A start-up in St. Louis trains an AI model that can spot pancreatic cancer six months earlier than the best radiologists, buying patients precious time that medicine has never been able to give them. But the model never leaves the lab. Why? Because a well-intentioned, technology-neutral state statute drafted in 2025 forces every “automated decision system” to undergo a one-size-fits-all bias audit, to be repeated annually, and to be performed only by outside experts who—three years in—still do not exist in sufficient numbers. While regulators scramble, the company’s venture funding dries up, the founders decamp to Singapore, and thousands of Americans are deprived of an innovation that would have saved their lives.
That grim vignette is fictional—so far. But it is the predictable destination of the seven “deadly sins” that already haunt our AI policy debates. Reactive politicians risk passing laws that defy nearly everything we know about sound policy for emerging technologies.
Policymakers rightly sense that AI is moving faster than the statutory machinery built for the age of the horse and buggy, not the supercomputer. The temptation is to act first and reflect later. Yet history tells us that bad tech laws ossify, spread, and strangle progress long after their drafters leave office. California’s flame-retardant fiasco—one state’s sofa rule turned nationwide toxin—is Exhibit A. Beginning in 1975, the state’s Bureau of Home Furnishings and Thermal Insulation required flame retardants in certain upholstered furniture. Companies across the country complied; it was cheaper to design products to California’s standard than to segment their manufacturing processes. It turned out that the flame-retardant foam was highly toxic and highly prone to end up in the hands and mouths of children. It is unclear how many hundreds or thousands of kids have suffered severe health issues as a result. Yet the law remained on the books for decades. If we repeat that regulatory playbook for AI, we will not merely ruin couches; we will foreclose entire classes of life-improving algorithms.
The best way to avoid missing out on a better future due to bad laws is to identify and call out bad policy habits as soon as possible. With that in mind, lawmakers should avoid all seven of these sins and instead take care to adopt more flexible, evidence-based provisions.
- Mistaking “Tech-Neutral” for “Future-Proof.” Imagine a statute that lumps diagnostic AIs with chatbot toys. This broad definition will invite litigation and paralyze AI development. Antidote: regulate by context, not by buzzword. Write rules tailored to specific use cases—health care, hiring, criminal justice—so innovators in low-risk domains are not collateral damage.
- Legislating Without an Expiration Date. The first draft of a law regulating emerging tech should never be the last word. Antidote: bake in sunset clauses that force lawmakers to revisit, revise, or repeal once real-world data rolls in.
- Skipping Retrospective Review. Passing a law is easy; measuring whether it works is hard. Antidote: mandate evidence audits—independent studies delivered to the legislature on a fixed schedule, coupled with automatic triggers for amendment when objectives are missed.
- Exporting One State’s Preferences to the Nation. When a single market as large as California or New York sets rules for all AI training data, the other 48 states lose their voice. Antidote: respect constitutional lanes. States should focus on local deployment (police facial recognition, school tutoring tools) and leave interstate questions—model training, cross-border data flows—to Congress.
- Building Regulatory Castles on Sand—No Capacity, No Credibility. Agencies cannot police AI with a dozen lawyers and programmers on the verge of retirement. Antidote: appropriate real money and real talent before—or at least alongside—new mandates. Offer fellowships, competitive salaries, and partnerships with land-grant universities to create a pipeline of public-interest AI experts.
- Letting the Usual Suspects Dominate the Microphone. If the only people in the room are professors, Beltway lobbyists, and Bay-Area founders, policy will skew toward their priors. Antidote: institutionalize broader participation—labor unions, rural hospitals, start-ups from the Midwest—through citizen advisory panels and notice-and-comment processes that actively seek out non-elite voices.
- Confusing Speed with Progress. The greatest danger is not under-regulation; it is freezing innovation before we understand its upside. Antidote: adopt a research-first posture. Fund testbeds, regulatory sandboxes, and pilot programs that let society learn in controlled environments before slapping on handcuffs.
Taken together, these antidotes form a simple governing philosophy: regulate like a scientist, not like a fortune-teller. Start narrow. Measure relentlessly. Revise or repeal when evidence demands it. And always, always weigh the cost of forgone breakthroughs—lives un-saved, jobs un-created, problems unsolved—against the speculative harms that dominate headlines.
The payoff? A legal environment where responsible innovators can move fast and fix things, where regulators are nimble rather than reactive, and where the public enjoys both the fruits of AI and meaningful protection from its risks. We need not choose between innovation and accountability. We only need the discipline to avoid the seven sins—and the imagination to envision what humanity loses if we fail.
The final word? If that cancer-spotting start-up withers in a tangle of red tape, the obituaries will never say, “Killed by a visionary legislature.” They will simply say, “Cure delayed.” Our charge as lawyers and policymakers is to ensure that sentence never gets written. By exorcising the seven deadly sins of AI policy now, we can safeguard both the public and the next generation of world-changing ideas. The clock is ticking—let’s legislate with humility, measure with rigor, and keep the door open to the innovations we cannot yet imagine.
Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and author of the Appleseed AI Substack.