AI Is Here. Our Laws Are Stuck in the Past.

Closeup of Software engineering team engaged in problem-solving and code analysis.

Getty Images, MTStock Studio

Artificial intelligence (AI) promises a future once confined to science fiction: personalized medicine accounting for your specific condition, accelerated scientific discovery addressing the most difficult challenges, and reimagined public education designed around AI tutors suited to each student's learning style. We see glimpses of this potential on a daily basis. Yet, as AI capabilities surge forward at exponential speed, the laws and regulations meant to guide them remain anchored in the twentieth century (if not the nineteenth or eighteenth!). This isn't just inefficient; it's dangerously reckless.

For too long, our approach to governing new technologies, including AI, has been one of cautious incrementalism—trying to fit revolutionary tools into outdated frameworks. We debate how century-old privacy torts apply to vast AI training datasets, how liability rules designed for factory machines might cover autonomous systems, or how copyright law conceived for human authors handles AI-generated creations. We tinker around the edges, applying digital patches to analog laws.


This constant patching creates what we might call "legal tech debt." Imagine trying to run sophisticated AI software on a computer from the 1980s—it might technically boot up, but it will be slow, prone to crashing, and incapable of performing its intended function. Similarly, forcing AI into legal structures designed for a different technological era means we stifle its potential benefits while failing to adequately manage its risks. Outdated privacy rules hinder the development of AI for public good projects; ambiguous liability standards chill innovation in critical sectors; fragmented regulations create uncertainty and inefficiency.

Allowing this legal tech debt to accumulate isn't just about missed opportunities. It breeds public distrust when laws seem irrelevant to lived reality. It invites policy chaos, as seen with the frantic, often ineffective attempts to regulate social media after years of neglect. It risks a future where transformative technology evolves haphazardly, governed by stopgap measures and reactive panic rather than thoughtful design. With AI, the stakes are simply too high for such recklessness.

We need a fundamentally different approach. Instead of incremental tinkering, we need bold, systemic change. We need to be willing to leapfrog—to bypass outdated frameworks and design legal and regulatory systems specifically for the age of AI.

What does this leapfrog approach look like? It requires three key shifts in thinking:

First, we must look ahead. Policymakers and experts need to engage seriously with plausible future scenarios for AI development, learning from the forecasting methods used by technologists. This isn’t about predicting the future with certainty but about understanding the range of possibilities—from accelerating breakthroughs to unexpected plateaus—and anticipating the legal pressures and opportunities each might create. We need to proactively identify which parts of our legal infrastructure are most likely to buckle under the strain of advanced AI.

Second, we must embrace fundamental redesign. Armed with foresight, we must be willing to propose and implement wholesale reforms, not just minor rule changes. If AI requires vast datasets for public benefit, perhaps we need entirely new data governance structures—like secure, publicly accountable data trusts or commons—rather than just carving out exceptions to FERPA or HIPAA. If AI can personalize education, perhaps we need to rethink rigid grade-based structures and accreditation standards, not just approve AI tutors within the old system. This requires political courage and a willingness to question long-held assumptions about how legal systems should operate.

Third, we must build in adaptability. Given the inherent uncertainty of AI’s trajectory, any new legal framework must be dynamic, not static. We need laws designed to evolve. This means incorporating mechanisms like mandatory periodic reviews tied to real-world outcomes, sunset clauses that force reconsideration of rules, specialized bodies empowered to update technical standards quickly, and even using AI itself to help monitor the effectiveness and impacts of regulations in real-time. We need systems that learn and adapt, preventing the accumulation of new tech debt.

Making this shift won't be easy. It demands a new level of ambition from our policymakers, a greater willingness among legal experts to think beyond established doctrines, and broader public engagement on the fundamental choices AI presents. But the alternative—continuing to muddle through with incremental fixes—is far riskier. It’s a path toward unrealized potential, unmanaged risks, and a future where technology outpaces our ability to govern it wisely.

AI offers incredible possibilities, but realizing them requires more than just brilliant code. It requires an equally ambitious upgrade to our legal and regulatory operating system. It’s time to stop patching the past and start designing the future. It’s time to leapfrog.

Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and author of the Appleseed AI Substack.
