A few years ago, I would have agreed with the argument that the most important AI regulatory issue is mitigating low-probability catastrophic risks. Today, I think nearly the opposite. My primary concern is that we will fail to realize the already feasible and significant benefits of AI. What changed, and why do I think my own evolution matters?
Discussion of my personal path from a more “safety”-oriented perspective to one that some would label “accelerationist” matters not because I, Kevin Frazier, have altered my views. Walking through my pivot is valuable because it may help those unsure of how to think about these critical issues navigate a complex and increasingly heated debate. By sharing my own change in thought, I hope others will feel welcome to do two things: first, reject unproductive, static labels that are misaligned with a dynamic technology; and, second, adjust their own views in light of the wide variety of shifting variables at play when it comes to AI regulation. More generally, I believe that calling myself out for a so-called “flip-flop” may give others more leeway to change their own minds without feeling like they’ve committed some wrong.
This discussion also matters because everyone should have a viewpoint on AI policy. This is no longer an issue that we can leave to San Francisco house parties and whispered conversations in the quiet car of an Acela train. I know that folks are tired of all the ink spilled about AI, all the podcasts that frame new model releases as the end of the world or the beginning of a utopian future, and all the speculation about whether AI will take your job today or tomorrow. It’s exhausting and, in many cases, not productive. Yet, absent broader participation in these debates, only a handful of people will shape how AI is developed and adopted across the country. You may be tired of it, but you cannot opt out of knowing about AI and having a reasoned stance on its regulation.
Congress is actively considering a ten-year moratorium on a wide range of state AI regulation. So the stakes are set for an ongoing conversation about the nation’s medium-term approach to AI. I have come out in support of a federal-first approach to AI governance, one that would prevent states from adopting the sort of strict AI safety measures I may have endorsed a few years back. So what gives? Why have I flipped?
First, I’ve learned more about the positive use cases of AI. For unsurprising reasons, media outlets that profit from sensationalistic headlines tend to focus on reports of AI bias, discrimination, and hallucinations. These stories draw clicks and align well with the social media-induced techlash that’s still a driving force in technology governance conversations. Through attending Meta’s Open Source AI Summit, however, I realized that AI is already being deployed in highly sensitive and highly consequential contexts and delivering meaningful results. I learned about neurosurgeons leveraging AI tools to restore a paralyzed woman’s voice, materials science researchers making certain predictions 10,000 times faster thanks to AI, and conservation groups leaning on AI to improve deforestation tracking. If scaled, these sorts of use cases could positively transform society.
Second, I’ve thoroughly engaged with leading research on the importance of technological diffusion to national security and economic prosperity. In short, as outlined by Jeffrey Ding and others, the country that dominates a certain technological era is not the one that innovates first but rather the one that spreads the technology across society first. The latter country is better able to adjust economically, politically, and culturally to the chaos introduced by massive jumps in technology. Those who insist on a negative framing of AI threaten to undermine AI adoption by the American public.
Third, I’ve spent some time questioning the historical role of lawyers in stifling progress. As noted by Ezra Klein, Derek Thompson, and others across the ideological spectrum who have embraced some version of the Abundance agenda, lawyers erected many of the bureaucratic barriers that have prevented us from building housing, completing public transit projects, and otherwise responding to public concerns in the 21st century. Many of the safety-focused policy proposals being evaluated at the state and federal levels threaten to do the same with respect to AI: these lawyer-subsidization bills set vague “reasonableness” standards, mandate annual audits, and, more generally, increase the need for lawyers to litigate and adjudicate whether a certain model adheres to each state’s interpretation of “responsible” AI development.
Adherents to that safety perspective will rightly point out that I’m downplaying legitimate concerns about extreme AI risks. They might remind me that, even though they too acknowledge catastrophic scenarios have low probabilities, the magnitude of the potential harm nevertheless warrants substantial regulatory intervention. This is the classic precautionary principle argument: when the potential downside is civilization-ending, shouldn’t we err on the side of caution?
I continue to acknowledge this concern but believe it misunderstands both the nature of risk and the trade-offs we face. The “low probability, high impact” framing obscures the fact that many proposed AI safety regulations would impose certain, immediate costs on society while addressing speculative future harms. We’re not comparing a small chance of catastrophe against no cost; we’re comparing it against the guaranteed opportunity costs of delayed medical breakthroughs, slowed scientific research, and reduced economic productivity. When a child dies from a disease that could have been cured with AI-accelerated drug discovery, that’s not a hypothetical cost. It’s a real consequence of regulatory delay.
My evolution reflects not an abandonment of caution but a more holistic understanding of where the real risks lie. The greatest threat isn't that AI will develop too quickly but that beneficial AI will develop too slowly—or in the wrong places, under the wrong governance structures.
Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and author of the Appleseed AI Substack.