The Future We’ll Miss: Political Inaction Holds Back AI’s Benefits

Opinion


A call to rethink AI governance argues that the real danger isn’t what AI might do—but what we’ll fail to do with it. Meet TFWM: The Future We’ll Miss.


We’re all familiar with the motivating cry of “YOLO” right before you do something on the edge of stupidity and exhilaration.

We’ve all seen the “TL;DR” section that shares the key takeaways from a long article.


And, we’ve all experienced “FOMO” when our friends make plans and we feel compelled to tag along just to make sure we’re not left on the sidelines of an epic experience.

Let’s give a name to our age’s most haunting anxiety: TFWM—The Future We’ll Miss. It’s the recognition that future generations may ask why, when faced with tools to cure, create, and connect, we chose to maintain the status quo. Let’s run through a few examples to make this a little clearer:

  • AI can detect breast cancer earlier than humans, saving millions in treatment costs and perhaps even thousands of lives. Yet AI use in medical contexts is often tied up in red tape. #TFWM
  • New AI-enabled understanding of the inner workings of cells has the potential to accelerate drug development. Yet AI researchers are still struggling to find the computing power necessary to run their experiments. #TFWM
  • Weather forecasts empowered by AI may soon allow us to detect storms ten days earlier. A shortage of access to quality data may delay improvements and adoption of these tools. #TFWM
  • Firefighters have turned to VR exercises to gain valuable experience fighting fires in novel, extreme contexts. It’s the sort of practice that can make a big difference when the next spark appears. Limited AI readiness among local and state governments, however, stands in the way. #TFWM

I could go on (and I will in future posts). The point is that in several domains, we’re making the affirmative choice to extend the status quo despite viable alternatives that would further human flourishing. The barriers to spreading these AI tools across jurisdictions are eminently solvable. Whether it’s budgetary constraints, regulatory hurdles, or public skepticism, all of these hindrances can be removed with enough political will.

So, why am I trying to make #TFWM a “thing”? In other words, why is it important to increase awareness of this perspective? The AI debate is being framed by questions that have distracted us from the practical policy challenges we need to address to bring about a better future.

The first set of distracting questions is some variant of: “Will AI become a sentient overlord and end humanity?” This is a debate about a speculative, distant future that conveniently distracts us from the very real, immediate lives we could be saving today.

The second set of questions is along the lines of “How many jobs will AI destroy?” This is a valid, but defensive and incomplete, question. It frames innovation as a zero-sum threat rather than asking the more productive question: “How can we deploy these tools to make our work more meaningful, creative, and valuable?”

Finally, there’s a tranche of questions related to some of the technical aspects of AI, like “Can we even trust what it says?” This concern over AI "hallucinations," while a real technical challenge, is often used to dismiss the technology's proven, superhuman accuracy in specific, life-saving domains, such as in medical settings.

A common thread ties these inquiries together. These questions are passive. They ask, “What will AI do to us?”

TFWM flips the script. It demands we ask the active and urgent question: “What will we fail to do with AI?”

The real risk isn't just that AI might go wrong. The real, measurable risk is that we won't let it go right. The tragedy is not a robot uprising that makes for good sci-fi but bad public policy; it's the preventable cancer, the missed storm warning, the failed drug trial. The problem isn't the technology; it's our failure of political will and, more pointedly, our failure of legal and regulatory imagination.

This brings us to why TFWM needs to be a "thing."

FOMO, for all its triviality, is a powerful motivator. It’s a personal anxiety that causes action. It gets you off the couch, into the Lyft, and into the party.

TFWM must become our new civic anxiety. It’s not the fear of missing a party; it's the fear of being judged by posterity. It is the deep, haunting dread that our grandchildren will look back at this moment of historic opportunity and ask us, “You had the tools to solve this. Why didn't you?”

This perspective creates the political will we desperately need. It reframes our entire approach to governance. It shifts the burden of proof from innovators to the status quo. The question is no longer, "Can you prove this new tool is 100% perfect and carries zero risk?" The question becomes, "Can you prove that our current system—with all its human error, bias, cost, and delay—is better than the alternative?"

YOLO, FOMO, and TL;DR are shorthand for navigating our personal lives. TFWM is the shorthand for our collective responsibility. The status quo is not a safe, neutral position. It is an active choice, and it has a body count. The future we'll miss isn't inevitable. It's a decision. And right now, we are deciding to miss it every single day we fail to act.


Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and author of the Appleseed AI substack.

