AI Is Here. Our Laws Are Stuck in the Past.

Opinion

Closeup of a software engineering team engaged in problem-solving and code analysis.

Getty Images, MTStock Studio

Artificial intelligence (AI) promises a future once confined to science fiction: personalized medicine tailored to your specific condition, accelerated scientific discovery aimed at the hardest challenges, and public education reimagined around AI tutors suited to each student's learning style. We see glimpses of this potential every day. Yet as AI capabilities surge forward at exponential speed, the laws and regulations meant to guide them remain anchored in the twentieth century (if not the nineteenth or eighteenth!). This isn't just inefficient; it's reckless.

For too long, our approach to governing new technologies, including AI, has been one of cautious incrementalism—trying to fit revolutionary tools into outdated frameworks. We debate how century-old privacy torts apply to vast AI training datasets, how liability rules designed for factory machines might cover autonomous systems, or how copyright law conceived for human authors handles AI-generated creations. We tinker around the edges, applying digital patches to analog laws.


This constant patching creates what we might call "legal tech debt." Imagine trying to run sophisticated AI software on a computer from the 1980s—it might technically boot up, but it will be slow, prone to crashing, and incapable of performing its intended function. Similarly, forcing AI into legal structures designed for a different technological era stifles its potential benefits while failing to adequately manage its risks. Outdated privacy rules hinder the development of AI projects for the public good; ambiguous liability standards chill innovation in critical sectors; fragmented regulations create uncertainty and inefficiency.

Allowing this legal tech debt to accumulate isn't just a matter of missed opportunities. It breeds public distrust when laws seem irrelevant to lived reality. It invites policy chaos, as seen in the frantic, often ineffective attempts to regulate social media after years of neglect. It risks a future where transformative technology evolves haphazardly, governed by stopgap measures and reactive panic rather than thoughtful design. With AI, the stakes are simply too high for such recklessness.

We need a fundamentally different approach. Instead of incremental tinkering, we need bold, systemic change. We need to be willing to leapfrog—to bypass outdated frameworks and design legal and regulatory systems specifically for the age of AI.

What does this leapfrog approach look like? It requires three key shifts in thinking:

First, we must look ahead. Policymakers and experts need to engage seriously with plausible future scenarios for AI development, learning from the forecasting methods used by technologists. This isn’t about predicting the future with certainty but about understanding the range of possibilities—from accelerating breakthroughs to unexpected plateaus—and anticipating the legal pressures and opportunities each might create. We need to proactively identify which parts of our legal infrastructure are most likely to buckle under the strain of advanced AI.

Second, we must embrace fundamental redesign. Armed with foresight, we must be willing to propose and implement wholesale reforms, not just minor rule changes. If AI requires vast datasets for public benefit, perhaps we need entirely new data governance structures—like secure, publicly accountable data trusts or commons—rather than just carving out exceptions to FERPA or HIPAA. If AI can personalize education, perhaps we need to rethink rigid grade-based structures and accreditation standards, not just approve AI tutors within the old system. This requires political courage and a willingness to question long-held assumptions about how legal systems should operate.

Third, we must build in adaptability. Given the inherent uncertainty of AI’s trajectory, any new legal framework must be dynamic, not static. We need laws designed to evolve. This means incorporating mechanisms like mandatory periodic reviews tied to real-world outcomes, sunset clauses that force reconsideration of rules, specialized bodies empowered to update technical standards quickly, and even using AI itself to help monitor the effectiveness and impacts of regulations in real time. We need systems that learn and adapt, preventing the accumulation of new tech debt.

Making this shift won't be easy. It demands a new level of ambition from our policymakers, a greater willingness among legal experts to think beyond established doctrines, and broader public engagement on the fundamental choices AI presents. But the alternative—continuing to muddle through with incremental fixes—is far riskier. It’s a path toward unrealized potential, unmanaged risks, and a future where technology outpaces our ability to govern it wisely.

AI offers incredible possibilities, but realizing them requires more than brilliant code. It requires an equally ambitious upgrade to our legal and regulatory operating system. It’s time to stop patching the past and start designing the future. It’s time to leapfrog.

Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and the author of the Appleseed AI Substack.

