AI Is Here. Our Laws Are Stuck in the Past.

Opinion

Closeup of Software engineering team engaged in problem-solving and code analysis.

Getty Images, MTStock Studio

Artificial intelligence (AI) promises a future once confined to science fiction: personalized medicine accounting for your specific condition, accelerated scientific discovery addressing the most difficult challenges, and reimagined public education designed around AI tutors suited to each student's learning style. We see glimpses of this potential on a daily basis. Yet, as AI capabilities surge forward at exponential speed, the laws and regulations meant to guide them remain anchored in the twentieth century (if not the nineteenth or eighteenth!). This isn't just inefficient; it's dangerously reckless.

For too long, our approach to governing new technologies, including AI, has been one of cautious incrementalism—trying to fit revolutionary tools into outdated frameworks. We debate how century-old privacy torts apply to vast AI training datasets, how liability rules designed for factory machines might cover autonomous systems, or how copyright law conceived for human authors handles AI-generated creations. We tinker around the edges, applying digital patches to analog laws.


This constant patching creates what we might call "legal tech debt." Imagine trying to run sophisticated AI software on a computer from the 1980s—it might technically boot up, but it will be slow, prone to crashing, and incapable of performing its intended function. Similarly, forcing AI into legal structures designed for a different technological era means we stifle its potential benefits while failing to adequately manage its risks. Outdated privacy rules hinder the development of AI for public good projects; ambiguous liability standards chill innovation in critical sectors; fragmented regulations create uncertainty and inefficiency.

Allowing this legal tech debt to accumulate isn't just about missed opportunities; it breeds public distrust when laws seem irrelevant to lived reality. It invites policy chaos, as seen with the frantic, often ineffective, attempts to regulate social media after years of neglect. It risks a future where transformative technology evolves haphazardly, governed by stopgap measures and reactive panic rather than thoughtful design. With AI, the stakes are simply too high for such recklessness.

We need a fundamentally different approach. Instead of incremental tinkering, we need bold, systemic change. We need to be willing to leapfrog—to bypass outdated frameworks and design legal and regulatory systems specifically for the age of AI.

What does this leapfrog approach look like? It requires three key shifts in thinking:

First, we must look ahead. Policymakers and experts need to engage seriously with plausible future scenarios for AI development, learning from the forecasting methods used by technologists. This isn’t about predicting the future with certainty but about understanding the range of possibilities—from accelerating breakthroughs to unexpected plateaus—and anticipating the legal pressures and opportunities each might create. We need to proactively identify which parts of our legal infrastructure are most likely to buckle under the strain of advanced AI.

Second, we must embrace fundamental redesign. Armed with foresight, we must be willing to propose and implement wholesale reforms, not just minor rule changes. If AI requires vast datasets for public benefit, perhaps we need entirely new data governance structures—like secure, publicly accountable data trusts or commons—rather than just carving out exceptions to FERPA or HIPAA. If AI can personalize education, perhaps we need to rethink rigid grade-based structures and accreditation standards, not just approve AI tutors within the old system. This requires political courage and a willingness to question long-held assumptions about how legal systems should operate.

Third, we must build in adaptability. Given the inherent uncertainty of AI’s trajectory, any new legal framework must be dynamic, not static. We need laws designed to evolve. This means incorporating mechanisms like mandatory periodic reviews tied to real-world outcomes, sunset clauses that force reconsideration of rules, specialized bodies empowered to update technical standards quickly, and even using AI itself to help monitor the effectiveness and impacts of regulations in real-time. We need systems that learn and adapt, preventing the accumulation of new tech debt.

Making this shift won't be easy. It demands a new level of ambition from our policymakers, a greater willingness among legal experts to think beyond established doctrines, and broader public engagement on the fundamental choices AI presents. But the alternative—continuing to muddle through with incremental fixes—is far riskier. It’s a path toward unrealized potential, unmanaged risks, and a future where technology outpaces our ability to govern it wisely.

AI offers incredible possibilities, but realizing them requires more than just brilliant code. It requires an equally ambitious upgrade to our legal and regulatory operating system. It's time to stop patching the past and start designing the future. It's time to leapfrog.

Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and author of the Appleseed AI Substack.

Read More

Censorship Should Be Obsolete by Now. Why Isn’t It?

US Capitol with tech background

Greggory DiSalvo/Getty Images


Techies, activists, and academics were in Paris this week to confront the doom scenario of internet shutdowns, developing creative technology and policy solutions to break out of heavily censored environments. The event, SplinterCon, has previously been held globally, from Brussels to Taiwan. I am on the programme committee and delivered a keynote at the inaugural SplinterCon in Montreal on how internet standards must be better designed for censorship circumvention.

Censorship and digital authoritarianism were exposed in dozens of countries in the recently published Freedom on the Net report. For example, Russia has pledged to provide "sovereign AI," a strategy that will surely extend its network blocks on "a wide array of social media platforms and messaging applications, urging users to adopt government-approved alternatives." The UK joined Vietnam, China, and a growing number of states requiring "age verification," the use of government-issued identification cards, to access internet services, which the report calls "a crisis for online anonymity."


Panic-driven legislation—from airline safety to AI bans—often backfires, and evidence must guide policy.

Getty Images, J Studios

Beware of Panic Policies

"As far as human nature is concerned, with panic comes irrationality." This simple statement by Professor Steve Calandrillo and Nolan Anderson has profound implications for public policy. When panic is highest, and demand for reactive policy is greatest, that's exactly when we need our lawmakers to resist the temptation to move fast and ban things. Yet, many state legislators are ignoring this advice amid public outcries about the allegedly widespread and destructive uses of AI. Thankfully, Calandrillo and Anderson have identified a few examples of what I'll call "panic policies" that make clear that proposals forged by frenzy tend not to reflect good public policy.

Let's turn first to a proposal in November of 2001 from the American Academy of Pediatrics (AAP). For obvious reasons, airline safety was subject to immense public scrutiny at this time. AAP responded with what may sound like a good idea: require all infants to have their own seat and, by extension, their own seat belt on planes. The existing policy permitted parents to simply put their kid, so long as they were under two, on their lap. Essentially, babies flew for free.

The Federal Aviation Administration (FAA) permitted this based on a pretty simple analysis: the risks to young kids without seatbelts on planes were far less than the risks they would face if they were instead traveling by car. Put differently, if parents faced higher prices to travel by air, then they'd turn to the road as the best way to get from A to B. As we all know (perhaps with the exception of the AAP at the time), airline travel is tremendously safer than travel by car. Nevertheless, the AAP forged ahead with its proposal. In fact, it did so despite admitting it was unsure whether the higher mortality risk for children under two in plane crashes was due to the lack of a seat belt or to the fact that they are simply fragile.

Will Generative AI Robots Replace Surgeons?

Generative AI and surgical robotics are advancing toward autonomous surgery, raising new questions about safety, regulation, payment models, and trust.

Getty Images, Luis Alvarez


In medicine’s history, the best technologies didn’t just improve clinical practice. They turned traditional medicine on its head.

For example, advances like CT, MRI, and ultrasound machines did more than merely improve diagnostic accuracy. They diminished the importance of the physical exam and the physicians who excelled at it.

Digital Footprints Are Affecting This New Generation of Politicians, but Do Voters Care?

Hand holding smart phone with US flag case

Credit: Katareena Roska


WASHINGTON — In 2022, Jay Jones sent text messages to a former colleague about a senior state Republican in Virginia getting “two bullets to the head.”

When the texts were shared by his colleague a month before the Virginia general election, Jones, the Democratic candidate for attorney general, was slammed for the violent rhetoric. Winsome Earle-Sears, the Republican candidate for governor, called for Jones to withdraw from the race.
