Opinion

Fear of AI Makes for Bad Policy

Fear is the worst possible response to AI. Actions taken out of fear are rarely a good thing, especially when it comes to emerging technology. Empirically driven scrutiny, on the other hand, is a savvy and necessary reaction to technologies like AI that introduce great benefits and great harms. The difference lies in whether policy is driven by emotion or by ongoing, rigorous evaluation.

A few reminders of tech policy gone wrong, due at least in part to fear, help make this point clear. Fear is what has led the US to become a laggard in nuclear energy, while many of our allies and adversaries enjoy cheaper, more reliable energy. Fear explains opposition to autonomous vehicles in some communities, even though human drivers were responsible for roughly 120 deaths per day in the US as of 2022. Fear sustains delays in making drones more broadly available, even as many other countries tackle issues like rural access to key medicine via drone delivery.


Again, this is not to say that new technology should automatically be treated as trustworthy, nor that individuals may not have some emotional response when a new creation is introduced into the world. It’s human nature to be skeptical and perhaps even scared of the new and novel. But to allow those emotions to rob us of our agency and to dictate our policy is a step too far. Yet, that’s where much of AI policy seems headed.

State legislatures have rushed forward with AI bills that aim to put the genie back in the bottle and freeze the status quo in amber. Bans on AI therapy tools, limitations on AI companions, and related legislation are understandable when viewed from an emotional perspective. Following the social media era, it's unsurprising that many of us feel disgust, anger, sadness, and unease at the idea of our kids again jumping on platforms of unknown capabilities and effects. Count me among those who worry about helping our kids (and adults) navigate the Intelligence Age. But those emotions should not excessively steer our policy response to AI. Through close scrutiny of AI, we can make sure that policy does not produce unintended consequences, such as denying children the use of AI tools that could actually improve their physical and mental health.

The path to this more deliberate policy approach starts with combating the source of AI fear.

Fear of AI is often a response to the bogus claim that it’s beyond the control of humans. The core aspects of developing and deploying AI are the product of decisions made by people just like you and me. What data is available for AI training is subject to choices made by human actors. Laws often prevent certain data from being disclosed and later used for AI training. Technical systems can prevent data from being scraped from the Internet. Norms and business incentives influence what data even gets created and how it is stored and shared.

How and when AI companies release models is a function of human decisions. The structure of the AI market and the demand for AI products are variables that we can all shape, at least indirectly, through our representatives and our purchasing decisions.

Integration of AI tools into sensitive contexts, such as schools and hospitals, is wholly a matter of human choices. Leaders and stakeholders of those institutions are anything but powerless when it comes to AI tool adoption. They are free to budget a lot or a little toward the AI tools they purchase. They can dictate what training, if any, their staff must receive before using those tools. They can impose strict procurement standards for any AI tools they acquire.

It's true that each of us has a different degree of influence on how AI is developed and deployed, but it's a dangerous myth that we've lost our agency at this important societal juncture.

This recognition of our agency is a license to collectively build the tech we want to see, not a mandate to stop its development. A society that acts out of fear defaults to prohibition, sacrificing tangible progress to avoid speculative harms. It chooses scarcity. A confident society, by contrast, establishes the conditions for responsible innovation to flourish, viewing risk not as something to be eliminated, but as something to be managed intelligently in the pursuit of a more abundant future.

The most effective way to foster this environment is not through a new thicket of prescriptive regulations, but through the clarification and modernization of our existing laws and reliance on healthy, competitive markets. Adaptive laws and robust competition have successfully governed centuries of technological change and can do so in the age of AI.

This approach creates powerful incentives for developers to prioritize safety and reliability, not to satisfy a bureaucratic checklist, but because it is the surest path to success in the marketplace. When innovators have a clear understanding of their responsibilities, and consumers are confident that their rights are protected, progress can accelerate. This is the true alternative to a policy of fear: a legal system and marketplace that enables dynamism, demands responsibility, and is squarely focused on unleashing the immense benefits of innovation.


Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and the author of the Appleseed AI Substack.

