In Defense of AI Optimism

Opinion


A case for optimism, risk-taking, and policy experimentation in the age of AI—and why pessimism threatens technological progress.


Society needs people to take risks. Entrepreneurs who bet on themselves create new jobs. Institutions that gamble with new processes find out how best to integrate advances into modern life. Regulators who accept potential backlash by launching policy experiments give us a chance to devise laws that are based on evidence, not fear.

The need for risk taking is all the more important when society is presented with new technologies. When new tech arrives on the scene, defense of the status quo is the easier path--individually, institutionally, and societally. We are all predisposed to think that the calamities, ailments, and flaws we experience today--as bad as they may be--are preferable to the unknowns tied to tomorrow.


This cognitive bias probably helped us survive at some point, but excessive hesitancy can be paralyzing in the short run and fatal in the long run. Think of the lives that could have been saved had seat belts been adopted sooner. Imagine the diplomacy that may have occurred and, by extension, the wars avoided, if the telegraph were available decades earlier. Ponder how diffusion of electricity across America over the course of a few years, rather than a few decades, could have improved the quality of life for millions.

Each of these technological advances required individuals willing to test ideas, to fail, and to persist. Seat belts were far from popular when initially introduced. People doubted their efficacy and pushed back on related regulations. The officials and organizations that saw through skepticism and worked diligently to provide more evidence, demonstrations, and case studies related to these novel devices deserve tremendous thanks.

The laying of the first telegraph cables did not go well. Rough seas and resource constraints made this infrastructure feat something that a risk-averse person would avoid like the middle seat. Yet, a few such people didn’t shy away from the hope of rapid communication. We’re in their debt, too.

Advocates for electricity faced their own hurdles. Consider Lyndon B. Johnson, then just a local politician, forcefully pushing the federal government to invest in the electrification of rural Texas. Some thought such investments were unnecessary or better left to another time. Johnson and others insisted.

Risk-taking, in hindsight, tends to look like the commonsensical path. Of course, there are exceptions--there’s a difference between risks and true gambles. The former are grounded in more than mere speculation; they are based on specific moral principles and technological understandings. When people take those kinds of risks, we all tend to benefit.

The same is true in the Age of AI. Many Americans are understandably underwhelmed by artificial intelligence (AI) systems that seem little more than slop machines and job destroyers. It’s politically and culturally easy to take the view that AI is a net negative and to resist its application in new situations.

That’s precisely why we need another generation of risk-takers and, to be more precise, optimists. We won’t realize the benefits of AI in health care, education, and transportation unless three conditions are met: policymakers with sufficient popular support to experiment with novel regulations; institutions with the proper staff, technology, and financial flexibility to test new workflows and develop new products; and founders with access to the funds required to build the AI we actually want.

None of these conditions will be satisfied if pessimism abounds. Pessimism induces zero-sum thinking. You’ll rarely meet a doomsday prepper keen to share their cans of beans. Extreme doubt about tomorrow saps risk-taking energy like a wet blanket on a bonfire.

Skepticism, however, is necessary. It’s grounded in curiosity and invites further investigation. What’s even better, though, is optimism. Optimism cultivates risk-taking by making it socially, financially, and politically easier to bet on the future.

Many aspects of technological disruption caution against such optimism. We’ve heard the promise of technology before, only to see it fray our social fabric and upend our economy. That’s why optimism must be paired with the proper institutional governance that fosters the right distribution of risk and reward.

To borrow from Betsey Stevenson, “The lesson is not that technology is bad, but that productivity gains do not automatically translate into flourishing. They only do so when societies build institutions that make the new economic regime first tolerable, and then genuinely beneficial, for most people.”

But that core task—building and redesigning institutions—won’t occur if pessimism is pervasive. It requires the sort of imagination and investment only possible with some degree of optimism.

The tricky part is how to generate that outlook. There’s no deposit of optimism in some mine—it’s something we have to create and sustain. The easiest place to start is challenging prophets of doom. Their ubiquity and dominance in the headlines quash the seeds of hope. Simply by challenging those who say our best days are behind us, we can get closer to betting that there are brighter days ahead.


Kevin Frazier is a Senior Fellow at the Abundance Institute and directs the AI Innovation and Law Program at the University of Texas School of Law.

Read More


Who’s Responsible When AI Causes Harm?: Unpacking the Federal AI Liability Framework Debate

This nonpartisan policy brief, written by an ACE fellow, is republished by The Fulcrum as part of our partnership with the Alliance for Civic Engagement and our NextGen initiative — elevating student voices, strengthening civic education, and helping readers better understand democracy and public policy.

Key takeaways

  • The U.S. has no national AI liability law. Instead, a patchwork of state laws has emerged, leaving legal protections dependent on where an individual resides.
  • It’s often unclear who is legally responsible when AI causes harm. This gap leaves many people with no clear path to seek help.
  • In March 2026, the White House and Congress introduced major proposals to establish a federal standard, but there is significant disagreement about whether that standard should prioritize protecting innovation or protecting people harmed by AI systems.

Background: A Patchwork of State Laws

Without a national AI law, states have been filling in the gaps on their own. The result is an uneven landscape where a person’s legal protections depend entirely on which state they live in.


Explore how China is overtaking the U.S. in the global innovation race, from electric vehicles to advanced research, and why America’s fragmented science policy, talent loss, and weak industrial strategy threaten its technological leadership.


America’s Greatest Geopolitical Blind Spot

The global hierarchy of innovation is undergoing a structural shift that Washington is dangerously slow to acknowledge. For decades, the prevailing narrative in the United States was that China was merely the "world’s factory"—a nation capable of mass-producing Western designs but inherently lacking the creative spark to invent its own. This assumption has been shattered. Today, Beijing is no longer playing catch-up; in sectors ranging from electric vehicles and next-generation nuclear power to hypersonic missiles, China is setting the pace.

The central challenge is that China has mastered the entire innovation ecosystem, while the United States has allowed its own to fracture. Innovation is not just about a "eureka" moment in a laboratory; it is a relay race that begins with basic scientific research, moves through the training of specialized talent, and ends with the large-scale commercialization of "hard tech." China is currently winning every leg of that race.


A bold critique of modern democracy and rising authoritarian ideas, exploring how AI-powered swarm digital democracy could redefine participation and governance.


The Only Radical Move Forward: Swarm Digital Democracy

We are increasingly told that democracy has failed and that its time has passed. The evidence, it is said, is everywhere: gridlock, captured institutions, performative elections, a public that senses, correctly, that its voice rarely translates into real power. Into this vacuum step dystopic movements like the Dark Enlightenment and harder strains of right-wing populism, offering a stark diagnosis and an even starker cure: abandon the illusion of popular rule and return to forms of authority that are decisive, hierarchical, and unapologetically exclusionary. They present themselves as bold, clear-eyed, rambunctious, alive, and willing to act where others hesitate. And all to save the world from itself.

But this framing depends on a sleight of hand: It assumes that what we have been living under is, in fact, democracy, and that its failures are the failures of democracy itself. That is the first mistake.


Elon Musk’s xAI company is challenging AI regulations in Colorado after losing in California, arguing that limits on artificial intelligence violate free speech. As Connecticut enforces its own AI law, this case could shape the future of AI regulation, corporate accountability, and constitutional rights in the United States.


xAI Pushes Free Speech Theory Into New AI Lawsuits

Elon Musk's AI company, xAI, is on a legal road trip. After losing in California, it filed suit in Colorado asking a court to declare the state's artificial intelligence regulations unconstitutional. The argument is essentially the same one that already failed. Meet the new boss. Same as the old boss.

For Connecticut residents, this is not just the next state in the alphabet that has passed AI legislation. Connecticut was one of the first states in the nation to adopt an AI law, requiring companies to disclose when AI is being used in critical decisions like employment, housing, credit, or healthcare. That law is already drawing scrutiny from the technology industry. What xAI tried to do in California and now in Colorado is a preview of what we may face in Connecticut.
