True Confessions of an AI Flip Flopper

Opinion


A few years ago, I would have agreed that the most important AI regulatory issue is mitigating low-probability catastrophic risks. Today, I think nearly the opposite. My primary concern is that we will fail to realize the already feasible and significant benefits of AI. What changed, and why do I think my own evolution matters?

My path from a more “safety”-oriented perspective to one that some would label “accelerationist” isn’t important because I, Kevin Frazier, have altered my views. Walking through my pivot is valuable because it may help those unsure how to think about these critical issues navigate a complex and increasingly heated debate. By sharing my own change in thought, I hope others will feel welcome to do two things: first, reject unproductive, static labels that are misaligned with a dynamic technology; and second, adjust their own views in light of the many shifting variables at play in AI regulation. More generally, I believe that calling myself out for a so-called “flip-flop” may give others more leeway to do the same without feeling like they’ve committed some wrong.


This discussion also matters because everyone should have a viewpoint on AI policy. This is no longer an issue that we can leave to San Francisco house parties and whispered conversations in the quiet car of an Acela train. I know that folks are tired of all the ink spilled about AI, all the podcasts that frame new model releases as the end of the world or the beginning of a utopian future, and all the speculation about whether AI will take your job today or tomorrow. It’s exhausting and, in many cases, not productive. Yet, absent broader participation in these debates, only a handful of people will shape how AI is developed and adopted across the country. You may be tired of it, but you cannot opt out of knowing about AI and having a reasoned stance on its regulation.

Congress is actively considering a ten-year moratorium on a wide range of state AI regulation. So the stakes are set for an ongoing conversation about the nation’s medium-term approach to AI. I have come out in support of a federal-first approach to AI governance, preventing states from adopting the sort of strict AI safety measures I may have endorsed a few years back. So what gives? Why have I flipped?

First, I’ve learned more about the positive use cases of AI. For unsurprising reasons, media outlets that profit from sensationalistic headlines tend to focus on reports of AI bias, discrimination, and hallucinations. These stories draw clicks and align well with social media-induced techlash that’s still a driving force in technology governance conversations. Through attending Meta’s Open Source AI Summit, however, I realized that AI is already being deployed in highly sensitive and highly consequential contexts and delivering meaningful results. I learned about neurosurgeons leveraging AI tools to restore a paralyzed woman’s voice, material science researchers being able to make certain predictions 10,000 times faster thanks to AI, and conservation groups leaning on AI to improve deforestation tracking. If scaled, these sorts of use cases could positively transform society.

Second, I’ve thoroughly engaged with leading research on the importance of technological diffusion to national security and economic prosperity. In short, as outlined by Jeffrey Ding and others, the country that dominates a certain technological era is not the one that innovates first but rather the one that spreads the technology across society first. The latter country is better able to adjust economically, politically, and culturally to the chaos introduced by massive jumps in technology. Those who insist on a negative framing of AI threaten to undermine AI adoption by the American public.

Third, I’ve spent some time questioning the historical role of lawyers in stifling progress. As noted by Ezra Klein, Derek Thompson, and others across the ideological spectrum who have embraced some version of the Abundance agenda, lawyers erected many of the bureaucratic barriers that have prevented us from building housing, completing public transit projects, and otherwise responding to public concerns in the 21st century. Many of the safety-focused policy proposals being evaluated at the state and federal levels threaten to do the same with respect to AI—these lawyer-subsidization bills set vague “reasonableness” standards, mandate annual audits, and, more generally, increase the need for lawyers to litigate and adjudicate whether a certain model adheres to each state’s interpretation of “responsible” AI development.

Adherents of the safety perspective will rightly point out that I’m downplaying legitimate concerns about extreme AI risks. They might remind me that even though catastrophic scenarios have low probabilities, the magnitude of the potential harm nevertheless warrants substantial regulatory intervention. This is the classic precautionary principle argument: when the potential downside is civilization-ending, shouldn’t we err on the side of caution?

I continue to acknowledge this concern but believe it misunderstands both the nature of risk and the trade-offs we face. The “low probability, high impact” framing obscures the fact that many proposed AI safety regulations would impose certain, immediate costs on society while addressing speculative future harms. We’re not comparing a small chance of catastrophe against no cost—we’re comparing it against the guaranteed opportunity costs of delayed medical breakthroughs, slowed scientific research, and reduced economic productivity. When a child dies from a disease that could have been cured with AI-accelerated drug discovery, that’s not a hypothetical cost. It’s a real consequence of regulatory delay.

My evolution reflects not an abandonment of caution but a more holistic understanding of where the real risks lie. The greatest threat isn't that AI will develop too quickly but that beneficial AI will develop too slowly—or in the wrong places, under the wrong governance structures.


Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and author of the Appleseed AI Substack.

