The Future We’ll Miss: Political Inaction Holds Back AI's Benefits

Opinion


A call to rethink AI governance argues that the real danger isn’t what AI might do—but what we’ll fail to do with it. Meet TFWM: The Future We’ll Miss.


We’re all familiar with the motivating cry of “YOLO” right before you do something on the edge of stupidity and exhilaration.

We’ve all seen the “TL;DR” section that shares the key takeaways from a long article.


And we’ve all experienced “FOMO” when our friends make plans and we feel compelled to tag along just to make sure we’re not left on the sidelines of an epic experience.

Let’s give a name to our age’s most haunting anxiety: TFWM—The Future We’ll Miss. It’s the recognition that future generations may ask why, when faced with tools to cure, create, and connect, we chose to maintain the status quo. Let’s run through a few examples to make this a little clearer:

  • AI can detect breast cancer earlier than human radiologists, potentially saving millions of dollars in treatment costs and perhaps thousands of lives. Yet AI use in medical contexts is often tied up in red tape. #TFWM
  • AI tools are giving us a new understanding of the inner workings of cells, with the potential to accelerate drug development. Yet AI researchers are still struggling to find the computing power necessary to run their experiments. #TFWM
  • Weather forecasts powered by AI may soon allow us to detect storms ten days earlier. Limited access to quality data, however, may delay improvements to and adoption of these tools. #TFWM
  • Firefighters have turned to VR exercises to gain valuable experience fighting fires in novel, extreme contexts. It’s the sort of practice that can make a big difference when the next spark appears. Limited AI readiness among local and state governments, however, stands in the way. #TFWM

I could go on (and I will in future posts). The point is that in several domains, we’re making the affirmative choice to extend the status quo despite viable alternatives that would further human flourishing. The barriers to spreading these AI tools across jurisdictions are eminently solvable. Whether the obstacle is budgetary constraints, regulatory hurdles, or public skepticism, it can be removed with enough political will.

So, why am I trying to make #TFWM a “thing”? In other words, why is it important to increase awareness of this perspective? The AI debate is being framed by questions that have distracted us from the practical policy challenges we need to address to bring about a better future.

The first set of distracting questions is some variant of: "Will AI become a sentient overlord and end humanity?" This is a debate about a speculative, distant future that conveniently distracts us from the very real, immediate lives we could be saving today.

The second set of questions is along the lines of “How many jobs will AI destroy?” This is a valid, but defensive and incomplete, question. It frames innovation as a zero-sum threat rather than asking the more productive question: “How can we deploy these tools to make our work more meaningful, creative, and valuable?”

Finally, there’s a tranche of questions about the technical aspects of AI, like “Can we even trust what it says?” Concern over AI “hallucinations,” while a real technical challenge, is often used to dismiss the technology’s proven, superhuman accuracy in specific, life-saving domains such as medical diagnostics.

A common thread ties these inquiries together: they are passive. They ask, “What will AI do to us?”

TFWM flips the script. It demands we ask the active and urgent question: “What will we fail to do with AI?”

The real risk isn't just that AI might go wrong. The real, measurable risk is that we won't let it go right. The tragedy is not a robot uprising that makes for good sci-fi but bad public policy; it's the preventable cancer, the missed storm warning, the failed drug trial. The problem isn't the technology; it's our failure of political will and, more pointedly, our failure of legal and regulatory imagination.

This brings us to why TFWM needs to be a “thing.”

FOMO, for all its triviality, is a powerful motivator. It’s a personal anxiety that spurs action. It gets you off the couch, into the Lyft, and into the party.

TFWM must become our new civic anxiety. It’s not the fear of missing a party; it's the fear of being judged by posterity. It is the deep, haunting dread that our grandchildren will look back at this moment of historic opportunity and ask us, “You had the tools to solve this. Why didn't you?”

This perspective creates the political will we desperately need. It reframes our entire approach to governance. It shifts the burden of proof from innovators to the status quo. The question is no longer, "Can you prove this new tool is 100% perfect and carries zero risk?" The question becomes, "Can you prove that our current system—with all its human error, bias, cost, and delay—is better than the alternative?"

YOLO, FOMO, and TL;DR are shorthand for navigating our personal lives. TFWM is the shorthand for our collective responsibility. The status quo is not a safe, neutral position. It is an active choice, and it has a body count. The future we'll miss isn't inevitable. It's a decision. And right now, we are deciding to miss it every single day we fail to act.


Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and the author of the Appleseed AI Substack.

