Opinion

Avoiding Policy Malpractice in the Age of AI

"The stakes of AI policymaking are too high and the risks of getting it wrong are too enduring for lawmakers to legislate on instinct alone," explains Kevin Frazier.


Nature abhors a vacuum and rushes, often chaotically, to fill it. Policymakers similarly dislike a regulatory void. The urge to fill it with new laws is strong and frequently produces shortsighted legislation. There is a common, if flawed, belief that "any law is better than no law." This action bias, our predisposition to do something rather than nothing, might be forgivable in some contexts, but not when it comes to artificial intelligence.

Regardless of one's stance on AI regulation, we should all agree that only effective policy deserves to stay on the books. The consequences of missteps in AI policy at this early stage are too severe to entrench poorly designed proposals into law. Once enacted, laws tend to persist. We even have a term for them: zombie laws. These are "statutes, regulations, and judicial precedents that continue to apply after their underlying economic and legal bases dissipate," as defined by Professor Joshua Macey.


Such laws are more common than we’d like to admit. Consider a regulation requiring truck drivers to place visibility triangles around their rigs when parked. This seemingly minor rule becomes a barrier to autonomous trucking: with no driver aboard, there is no one to deploy the triangles. A simple, commonsense solution, such as integrating high-visibility markers into the trucks themselves, exists, yet the outdated regulation persists. Another example is the FDA's sesame-labeling requirement, intended to help allergy sufferers. Because many food producers could not guarantee their products were free of trace sesame, it proved safer to add sesame deliberately and label it than to risk non-compliance, putting the allergen into more foods, not fewer: a comical and wasteful regulatory backfire.

Similar legislative missteps are highly likely in the AI space. With Congress declining to impose a moratorium on state AI laws, state legislatures across the country are rapidly pursuing their own AI proposals. Hundreds of AI-related bills are pending, addressing everything from broad, catastrophic harms to specific issues such as deepfakes in elections.

The odds of any of these bills getting it "right" are uncertain. AI is a particularly challenging technology to regulate for several reasons: even its creators aren't sure how and why their models behave as they do; early adopters are still figuring out AI’s utility and limitations; no one can predict how current regulations will influence AI's development; and we're left guessing how adversaries will approach similar regulatory questions.

Given these complexities, legislators must adopt a posture of regulatory humility. States that enact well-intentioned regulations leading to predictable negative consequences are engaging in legislative malpractice. I choose these words deliberately. Policymakers and their staff should know better, recognizing the extensive list of tools available to prevent bad laws from becoming permanent.

Malpractice occurs when a professional fails to adhere to the basic tenets of their field. Legal malpractice, for instance, involves "evil practice in a professional capacity, and the resort to methods and practices unsanctioned and prohibited by law." In medicine, doctors are held to a standard of care reflecting what a "minimally competent physician in the same field would do under similar circumstances."

While policymaking lacks a formalized duty of care or professional conduct code, we're not entirely without guidance. A related concept, though less familiar, offers a starting point: maladministration.

Maladministration encompasses "administrative action (or inaction) based on or influenced by improper considerations or conduct," indicating when "things are going wrong, mistakes are being made, and justifiable grievances are being ignored." Though the concept typically applies to administrative agencies, politicians, as the creators of those agencies and the systems they administer, bear responsibility for anticipating and correcting these mistakes.

Given the inherent difficulties of regulating AI, policymakers should, at a minimum, demonstrate consideration of three key tools to reduce the odds of enacting misguided regulations. These tools align with core democratic values, ensuring policy promotes the common good.

First is experimental policy design via randomized controlled trials (RCTs). Legislators shouldn't assume there is one best way to test AI models or to report on their training. Instead, they should build experimentation into legislation: some labs might follow steps A, B, and C, while others follow X, Y, and Z. The legislature can then assess which provisions work best, ideally transitioning all regulated entities to the superior practices or amending the law. This fosters innovation in regulatory methods.

Second are sunrise clauses. These delay enforcement until prerequisites—basic conditions of good governance—are met. Unlike a simple future effective date, a true sunrise clause imposes a checklist: Is the implementing agency staffed and funded? Have regulated entities been consulted? Do stakeholders understand compliance? In AI policy, these questions are urgent. Enforcing complex laws before infrastructure exists is inefficient and undermines legitimacy. A sunrise clause ensures laws "land" effectively, demanding competence before policy becomes an enforceable rule. This promotes transparency and accountability.

Third are sunset clauses. If sunrise clauses delay a start, sunset clauses enforce an end unless actively renewed. This is critical for fast-evolving technologies. A sunset clause builds in mandatory reassessment: "This law expires in two years unless renewed." This isn't laziness; it’s disciplined humility. AI regulation shouldn't outlive its usefulness, and sunset clauses ensure laws earn their permanence, preventing outdated assumptions from locking in.

The stakes of AI policymaking are too high and the risks of getting it wrong are too enduring for lawmakers to legislate on instinct alone. While action bias is human, embedding it in law is neither excusable nor sustainable. At this early, uncertain stage of AI development, policymakers have a rare opportunity: to regulate with foresight, humility, and discipline.

Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and the author of the Appleseed AI Substack.

