True Confessions of an AI Flip Flopper


A few years ago, I would have agreed with the argument that the most important AI regulatory issue is mitigating low-probability catastrophic risks. Today, I think nearly the opposite. My primary concern is that we will fail to realize the already feasible and significant benefits of AI. What changed, and why do I think my own evolution matters?

My personal path from a more “safety”-oriented perspective to one that some would label “accelerationist” isn’t important because I, Kevin Frazier, have altered my views. Walking through my pivot is valuable because it may help those unsure of how to think about these critical issues navigate a complex and increasingly heated debate. By sharing my own change in thought, I hope others will feel welcome to do two things: first, reject unproductive, static labels that are misaligned with a dynamic technology; and second, adjust their own views in light of the wide variety of shifting variables at play in AI regulation. More generally, I believe that calling myself out for a so-called “flip-flop” may give others more leeway to change their minds without feeling like they’ve committed some wrong.


This discussion also matters because everyone should have a viewpoint on AI policy. This is no longer an issue that we can leave to San Francisco house parties and whispered conversations in the quiet car of an Acela train. I know that folks are tired of all the ink spilled about AI, all the podcasts that frame new model releases as the end of the world or the beginning of a utopian future, and all the speculation about whether AI will take your job today or tomorrow. It’s exhausting and, in many cases, not productive. Yet, absent broader participation in these debates, only a handful of people will shape how AI is developed and adopted across the country. You may be tired of it, but you cannot opt out of knowing about AI and having a reasoned stance on its regulation.

Congress is actively considering a ten-year moratorium on a wide range of state AI regulations, so the stakes are set for an ongoing conversation about the nation’s medium-term approach to AI. I have come out in support of a federal-first approach to AI governance, one that would prevent states from adopting the sort of strict AI safety measures I might have endorsed a few years back. So what gives? Why have I flipped?

First, I’ve learned more about the positive use cases of AI. For unsurprising reasons, media outlets that profit from sensationalist headlines tend to focus on reports of AI bias, discrimination, and hallucinations. These stories draw clicks and align well with the social-media-induced techlash that remains a driving force in technology governance conversations. Through attending Meta’s Open Source AI Summit, however, I realized that AI is already being deployed in highly sensitive, highly consequential contexts and delivering meaningful results. I learned about neurosurgeons leveraging AI tools to restore a paralyzed woman’s voice, materials science researchers making certain predictions 10,000 times faster thanks to AI, and conservation groups leaning on AI to improve deforestation tracking. If scaled, these sorts of use cases could positively transform society.

Second, I’ve engaged thoroughly with leading research on the importance of technological diffusion to national security and economic prosperity. In short, as outlined by Jeffrey Ding and others, the country that dominates a given technological era is not the one that innovates first but the one that spreads the technology across society first. The latter country is better able to adjust economically, politically, and culturally to the upheaval introduced by massive jumps in technology. Those who insist on a negative framing of AI threaten to undermine its adoption by the American public.

Third, I’ve spent some time questioning the historical role of lawyers in stifling progress. As noted by Ezra Klein, Derek Thompson, and others across the ideological spectrum who have embraced some version of the Abundance agenda, lawyers erected many of the bureaucratic barriers that have prevented us from building housing, completing public transit projects, and otherwise responding to public concerns in the 21st century. Many of the safety-focused policy proposals being evaluated at the state and federal levels threaten to do the same with respect to AI. These lawyer-subsidization bills set vague “reasonableness” standards, mandate annual audits, and, more generally, increase the need for lawyers to litigate and adjudicate whether a given model adheres to each state’s interpretation of “responsible” AI development.

Adherents of the safety perspective will rightly point out that I'm downplaying legitimate concerns about extreme AI risks. They might remind me that, even though catastrophic scenarios have low probabilities, the magnitude of the potential harm nevertheless warrants substantial regulatory intervention. This is the classic precautionary principle argument: when the potential downside is civilization-ending, shouldn't we err on the side of caution?

I continue to acknowledge this concern but believe it misunderstands both the nature of risk and the trade-offs we face. The “low probability, high impact” framing obscures the fact that many proposed AI safety regulations would impose certain, immediate costs on society while addressing speculative future harms. We're not comparing a small chance of catastrophe against no cost—we're comparing it against the guaranteed opportunity costs of delayed medical breakthroughs, slowed scientific research, and reduced economic productivity. When a child dies from a disease that could have been cured with AI-accelerated drug discovery, that’s not a hypothetical cost. It's a real consequence of regulatory delay.

My evolution reflects not an abandonment of caution but a more holistic understanding of where the real risks lie. The greatest threat isn't that AI will develop too quickly but that beneficial AI will develop too slowly—or in the wrong places, under the wrong governance structures.


Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and author of the Appleseed AI Substack.
