AI Is Here. Our Laws Are Stuck in the Past.

Closeup of a software engineering team engaged in problem-solving and code analysis.

Getty Images, MTStock Studio

Artificial intelligence (AI) promises a future once confined to science fiction: personalized medicine tailored to your specific condition, accelerated scientific discovery aimed at the hardest challenges, and public education redesigned around AI tutors suited to each student's learning style. We see glimpses of this potential daily. Yet as AI capabilities surge forward at exponential speed, the laws and regulations meant to guide them remain anchored in the twentieth century (if not the nineteenth or eighteenth!). This isn't just inefficient; it's dangerously reckless.

For too long, our approach to governing new technologies, including AI, has been one of cautious incrementalism—trying to fit revolutionary tools into outdated frameworks. We debate how century-old privacy torts apply to vast AI training datasets, how liability rules designed for factory machines might cover autonomous systems, or how copyright law conceived for human authors handles AI-generated creations. We tinker around the edges, applying digital patches to analog laws.


This constant patching creates what we might call "legal tech debt." Imagine trying to run sophisticated AI software on a computer from the 1980s: it might technically boot up, but it will be slow, prone to crashing, and incapable of performing its intended function. Similarly, forcing AI into legal structures designed for a different technological era stifles its potential benefits while failing to adequately manage its risks. Outdated privacy rules hinder the development of AI for public-good projects; ambiguous liability standards chill innovation in critical sectors; and fragmented regulations create uncertainty and inefficiency.

Allowing this legal tech debt to accumulate costs us more than missed opportunities. It breeds public distrust when laws seem irrelevant to lived reality. It invites policy chaos, as seen in the frantic, often ineffective attempts to regulate social media after years of neglect. And it risks a future where transformative technology evolves haphazardly, governed by stopgap measures and reactive panic rather than thoughtful design. With AI, the stakes are simply too high for such recklessness.

We need a fundamentally different approach. Instead of incremental tinkering, we need bold, systemic change. We need to be willing to leapfrog—to bypass outdated frameworks and design legal and regulatory systems specifically for the age of AI.

What does this leapfrog approach look like? It requires three key shifts in thinking:

First, we must look ahead. Policymakers and experts need to engage seriously with plausible future scenarios for AI development, learning from the forecasting methods used by technologists. This isn’t about predicting the future with certainty but about understanding the range of possibilities—from accelerating breakthroughs to unexpected plateaus—and anticipating the legal pressures and opportunities each might create. We need to proactively identify which parts of our legal infrastructure are most likely to buckle under the strain of advanced AI.

Second, we must embrace fundamental redesign. Armed with foresight, we must be willing to propose and implement wholesale reforms, not just minor rule changes. If AI requires vast datasets for public benefit, perhaps we need entirely new data governance structures—like secure, publicly accountable data trusts or commons—rather than just carving out exceptions to FERPA or HIPAA. If AI can personalize education, perhaps we need to rethink rigid grade-based structures and accreditation standards, not just approve AI tutors within the old system. This requires political courage and a willingness to question long-held assumptions about how legal systems should operate.

Third, we must build in adaptability. Given the inherent uncertainty of AI’s trajectory, any new legal framework must be dynamic, not static. We need laws designed to evolve. This means incorporating mechanisms like mandatory periodic reviews tied to real-world outcomes, sunset clauses that force reconsideration of rules, specialized bodies empowered to update technical standards quickly, and even using AI itself to help monitor the effectiveness and impacts of regulations in real-time. We need systems that learn and adapt, preventing the accumulation of new tech debt.

Making this shift won't be easy. It demands a new level of ambition from our policymakers, a greater willingness among legal experts to think beyond established doctrines, and broader public engagement on the fundamental choices AI presents. But the alternative—continuing to muddle through with incremental fixes—is far riskier. It’s a path toward unrealized potential, unmanaged risks, and a future where technology outpaces our ability to govern it wisely.

AI offers incredible possibilities, but realizing them requires more than brilliant code. It requires an equally ambitious upgrade to our legal and regulatory operating system. It’s time to stop patching the past and start designing the future. It’s time to leapfrog.

Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and author of the Appleseed AI Substack.
