Avoiding Policy Malpractice in the Age of AI

Opinion

"The stakes of AI policymaking are too high and the risks of getting it wrong are too enduring for lawmakers to legislate on instinct alone," explains Kevin Frazier.

Getty Images, Aitor Diago

Nature abhors a vacuum and rushes, often chaotically, to fill it. Policymakers similarly dislike a regulatory void. The urge to fill it with new laws is strong, and it frequently produces shortsighted legislation. There's a common, if flawed, belief that "any law is better than no law." This action bias, our predisposition to do something rather than nothing, might be forgivable in some contexts, but not when it comes to artificial intelligence.

Regardless of one's stance on AI regulation, we should all agree that only effective policy deserves to stay on the books. The consequences of missteps in AI policy at this early stage are too severe to entrench poorly designed proposals into law. Once enacted, laws tend to persist. We even have a term for them: zombie laws. These are "statutes, regulations, and judicial precedents that continue to apply after their underlying economic and legal bases dissipate," as defined by Professor Joshua Macey.


Such laws are more common than we'd like to admit. Consider a regulation requiring truck drivers to place visibility triangles around their rigs when parked. This seemingly minor rule becomes a barrier to autonomous trucking, as there's no driver to deploy the triangles. A simple, commonsense solution, like integrating high-visibility markers into the trucks themselves, exists, yet the outdated regulation persists. Another example is the FDA's attempt to help allergy sufferers by requiring sesame labeling. Rather than improving their labels, many food producers simply added sesame to more products, since declaring it as an ingredient was easier than guaranteeing its absence: a comical and wasteful regulatory backfire.

Similar legislative missteps are highly likely in the AI space. With Congress declining to impose a moratorium, state legislatures across the country are rapidly pursuing AI proposals. Hundreds of AI-related bills are pending, addressing everything from broad, catastrophic harms to specific issues such as deepfakes in elections.

The odds of any of these bills getting it "right" are uncertain. AI is a particularly challenging technology to regulate for several reasons: even its creators aren't sure how and why their models behave as they do; early adopters are still figuring out AI's utility and limitations; no one can predict how current regulations will influence AI's development; and we're left guessing how adversaries will approach similar regulatory questions.

Given these complexities, legislators must adopt a posture of regulatory humility. States that enact well-intentioned regulations leading to predictable negative consequences are engaging in legislative malpractice. I choose these words deliberately. Policymakers and their staff should know better, recognizing the extensive list of tools available to prevent bad laws from becoming permanent.

Malpractice occurs when a professional fails to adhere to the basic tenets of their field. Legal malpractice, for instance, involves "evil practice in a professional capacity, and the resort to methods and practices unsanctioned and prohibited by law." In medicine, doctors are held to a standard of care reflecting what a "minimally competent physician in the same field would do under similar circumstances."

While policymaking lacks a formalized duty of care or professional conduct code, we're not entirely without guidance. A related concept, though less familiar, offers a starting point: maladministration.

Maladministration encompasses "administrative action (or inaction) based on or influenced by improper considerations or conduct," indicating when "things are going wrong, mistakes are being made, and justifiable grievances are being ignored." While the concept typically applies to administrative agencies, politicians, as the creators of those administrative systems, bear responsibility for anticipating and correcting these mistakes.

Given the inherent difficulties of regulating AI, policymakers should, at a minimum, demonstrate consideration of three key tools to reduce the odds of enacting misguided regulations. These tools align with core democratic values, ensuring policy promotes the common good.

First is experimental policy design via randomized controlled trials (RCTs). Legislators shouldn't assume there is one best way to test AI models or report their training. Instead, they should build experimentation into legislation. Some labs might follow steps A, B, and C, while others follow X, Y, and Z. The legislature can then assess which provisions work best, ideally transitioning all regulated entities to superior practices or amending the law. This fosters innovation in regulatory methods.

Second are sunrise clauses. These delay enforcement until prerequisites—basic conditions of good governance—are met. Unlike a simple future effective date, a true sunrise clause imposes a checklist: Is the implementing agency staffed and funded? Have regulated entities been consulted? Do stakeholders understand compliance? In AI policy, these questions are urgent. Enforcing complex laws before infrastructure exists is inefficient and undermines legitimacy. A sunrise clause ensures laws "land" effectively, demanding competence before policy becomes an enforceable rule. This promotes transparency and accountability.

Third are sunset clauses. If sunrise clauses delay a start, sunset clauses enforce an end unless actively renewed. This is critical for fast-evolving technologies. A sunset clause builds in mandatory reassessment: "This law expires in two years unless renewed." This isn't laziness; it’s disciplined humility. AI regulation shouldn't outlive its usefulness, and sunset clauses ensure laws earn their permanence, preventing outdated assumptions from locking in.

The stakes of AI policymaking are too high and the risks of getting it wrong are too enduring for lawmakers to legislate on instinct alone. While action bias is human, embedding it in law is neither excusable nor sustainable. At this early, uncertain stage of AI development, policymakers have a rare opportunity: to regulate with foresight, humility, and discipline.

Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and author of the Appleseed AI Substack.
