AI Is Here. Our Laws Are Stuck in the Past.

Opinion

Closeup of Software engineering team engaged in problem-solving and code analysis.

Getty Images, MTStock Studio

Artificial intelligence (AI) promises a future once confined to science fiction: personalized medicine tailored to your specific condition, accelerated scientific discovery that tackles our hardest challenges, and public education reimagined around AI tutors suited to each student's learning style. We see glimpses of this potential daily. Yet, as AI capabilities surge forward at exponential speed, the laws and regulations meant to guide them remain anchored in the twentieth century (if not the nineteenth or eighteenth!). This isn't just inefficient; it's dangerously reckless.

For too long, our approach to governing new technologies, including AI, has been one of cautious incrementalism—trying to fit revolutionary tools into outdated frameworks. We debate how century-old privacy torts apply to vast AI training datasets, how liability rules designed for factory machines might cover autonomous systems, or how copyright law conceived for human authors handles AI-generated creations. We tinker around the edges, applying digital patches to analog laws.


This constant patching creates what we might call "legal tech debt." Imagine trying to run sophisticated AI software on a computer from the 1980s—it might technically boot up, but it will be slow, prone to crashing, and incapable of performing its intended function. Similarly, forcing AI into legal structures designed for a different technological era means we stifle its potential benefits while failing to adequately manage its risks. Outdated privacy rules hinder the development of AI for public good projects; ambiguous liability standards chill innovation in critical sectors; fragmented regulations create uncertainty and inefficiency.

Allowing this legal tech debt to accumulate isn't just a matter of missed opportunities. It breeds public distrust when laws seem irrelevant to lived reality. It invites policy chaos, as seen in the frantic, often ineffective attempts to regulate social media after years of neglect. It risks a future where transformative technology evolves haphazardly, governed by stopgap measures and reactive panic rather than thoughtful design. With AI, the stakes are simply too high for such recklessness.

We need a fundamentally different approach. Instead of incremental tinkering, we need bold, systemic change. We need to be willing to leapfrog—to bypass outdated frameworks and design legal and regulatory systems specifically for the age of AI.

What does this leapfrog approach look like? It requires three key shifts in thinking:

First, we must look ahead. Policymakers and experts need to engage seriously with plausible future scenarios for AI development, learning from the forecasting methods used by technologists. This isn’t about predicting the future with certainty but about understanding the range of possibilities—from accelerating breakthroughs to unexpected plateaus—and anticipating the legal pressures and opportunities each might create. We need to proactively identify which parts of our legal infrastructure are most likely to buckle under the strain of advanced AI.

Second, we must embrace fundamental redesign. Armed with foresight, we must be willing to propose and implement wholesale reforms, not just minor rule changes. If AI requires vast datasets for public benefit, perhaps we need entirely new data governance structures—like secure, publicly accountable data trusts or commons—rather than just carving out exceptions to FERPA or HIPAA. If AI can personalize education, perhaps we need to rethink rigid grade-based structures and accreditation standards, not just approve AI tutors within the old system. This requires political courage and a willingness to question long-held assumptions about how legal systems should operate.

Third, we must build in adaptability. Given the inherent uncertainty of AI’s trajectory, any new legal framework must be dynamic, not static. We need laws designed to evolve. This means incorporating mechanisms like mandatory periodic reviews tied to real-world outcomes, sunset clauses that force reconsideration of rules, specialized bodies empowered to update technical standards quickly, and even using AI itself to help monitor the effectiveness and impacts of regulations in real-time. We need systems that learn and adapt, preventing the accumulation of new tech debt.

Making this shift won't be easy. It demands a new level of ambition from our policymakers, a greater willingness among legal experts to think beyond established doctrines, and broader public engagement on the fundamental choices AI presents. But the alternative—continuing to muddle through with incremental fixes—is far riskier. It’s a path toward unrealized potential, unmanaged risks, and a future where technology outpaces our ability to govern it wisely.

AI offers incredible possibilities, but realizing them requires more than just brilliant code. It requires an equally ambitious upgrade to our legal and regulatory operating system. It’s time to stop patching the past and start designing the future. It’s time to leapfrog.

Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and author of the Appleseed AI Substack.


Read More

Meta Undermining Trust but Verify through Paid Links

Facebook launches voting resource tool

Facebook is testing limits on shared external links, turning them into a paid feature of its Meta Verified program, which costs $14.99 per month.

This change solidifies that verification badges are now meaningless signifiers. Yet it wasn’t always so; the verified internet was built to support participation and trust. Beginning with Twitter’s verification program, launched in 2009, a checkmark next to a username indicated that the account had been verified to represent a notable person or the official account of a business. We could believe that an elected official or a brand was who they said they were online. When Twitter Blue, and later X Premium, began selling paid blue checkmarks in November 2022, the visual marker of verification became deceptive. Think fake Eli Lilly accounts posting about free insulin and impersonation accounts for Elon Musk himself.

This week’s move by Meta echoes the changes at Twitter/X, despite significant evidence that those changes leave information quality and user experience worse than before. Whatever Facebook says, all a badge tells anyone now is that you paid.


Rather than blame AI for young Americans struggling to find work, we need to build: build new educational institutions, new retraining and upskilling programs, and, most importantly, new firms.

Surasak Suwanmake/Getty Images

Blame AI or Build With AI? Only One Approach Creates Jobs

We’re failing young Americans. Many of them are struggling to find work: unemployment among 16- to 24-year-olds topped 10.5% in August, and even those who do find jobs are often settling for lower-paying roles. More than 50% of college grads are underemployed. To make matters worse, the path to a more stable, lucrative career is increasingly unclear: high school grads in their twenties find jobs at nearly the same rate as those with four-year degrees.

We have two options: blame or build. The first involves blaming AI, treating the technology as solely responsible for the current economic malaise facing Gen Z, and slowing or even stopping AI adoption. Take so-called robot taxes: the thinking goes that placing financial penalties on firms that lean into AI will leave more roles for Gen Z and workers in general. Then there’s the idea of banning or limiting the use of AI in hiring and firing decisions; applicants who have struggled to find work suggest that increased use of AI may be partially at fault. Others have called for giving workers a greater say in whether and to what extent their firm uses AI, which may help firms integrate AI in ways that augment workers rather than replace them.


A visual representation of deep fake and disinformation concepts, featuring various related keywords in green on a dark background, symbolizing the spread of false information and the impact of artificial intelligence.

Getty Images

Parv Mehta Is Leading the Fight Against AI Misinformation

At a moment when the country is grappling with the civic consequences of rapidly advancing technology, Parv Mehta stands out as one of the most forward-thinking young leaders of his generation. Recognized as one of the 500 Gen Zers named to the 2025 Carnegie Young Leaders for Civic Preparedness cohort, Mehta represents the kind of grounded, community-rooted innovator the program was designed to elevate.

A high school student from Washington state, Parv has emerged as a leading youth voice on the dangers of artificial intelligence and deepfakes. He recognized early that his generation would inherit a world where misinformation spreads faster than truth—and where young people are often the most vulnerable targets. Motivated by years of computer science classes and a growing awareness of AI’s risks, he launched a project to educate students across Washington about deepfake technology, media literacy, and digital safety.


As Australia bans social media for kids under 16, U.S. parents face a harder truth: online safety isn’t an individual choice; it’s a collective responsibility.

Getty Images/Keiko Iwabuchi

Parents Must Quit Infighting to Keep Kids Safe Online

Last week, Australia’s social media ban for children under age 16 officially took effect. It remains to be seen how the law will shape families’ behavior, but it is at least a stand against the tech takeover of childhood. Here in the U.S., however, we’re in a different boat: a consensus on what’s best for kids feels much harder to come by among both lawmakers and parents.

To make true progress on this issue, we must resist the fallacy of parental individualism: the idea that what you choose for your own child is up to you alone, that allowing smartphones, certain apps, or social media is a purely personal or family decision. It’s not. The choices you make for your family and your kids affect their friends, their friends’ siblings, their classmates, and so on. If there is no general consensus around parenting decisions when it comes to tech, all kids are affected.
