Fear of AI Makes for Bad Policy

Opinion


Fear is the worst possible response to AI. Actions taken out of fear are rarely a good thing, especially when it comes to emerging technology. Empirically driven scrutiny, on the other hand, is a savvy and necessary reaction to technologies like AI that introduce great benefits and harms. The difference lies in whether policy is driven by emotion or by ongoing and rigorous evaluation.

A few reminders of tech policy gone wrong, due at least in part to fear, help make this point clear. Fear is what has led the US to become a laggard in nuclear energy, while many of our allies and adversaries enjoy cheaper, more reliable energy. Fear is what explains opposition to autonomous vehicles in some communities, even though human drivers were responsible for 120 deaths per day as of 2022. Fear is what sustains delays in making drones more broadly available, even though many other countries are tackling issues like rural access to key medicine via drones.


Again, this is not to say that new technology should automatically be treated as trustworthy, nor that individuals may not have some emotional response when a new creation is introduced into the world. It’s human nature to be skeptical and perhaps even scared of the new and novel. But to allow those emotions to rob us of our agency and to dictate our policy is a step too far. Yet, that’s where much of AI policy seems headed.

State legislatures have rushed forward with AI bills that aim to put this technology back in the bottle and freeze the status quo in amber. Bans on AI therapy tools, limitations on AI companions, and related legislation are understandable when viewed from an emotional perspective. Following the social media era, it's unsurprising that many of us feel disgust, anger, sadness, and unease at the idea of our kids again jumping on platforms of unknown capabilities and effects. Count me among those who worry about helping our kids (and adults) navigate the Intelligence Age. But those emotions should not excessively steer our policy response to AI. Through close scrutiny of AI, we can make sure that policy does not result in unintended consequences, such as denying children the use of AI tools that could actually improve their physical and mental health.

The path to this more deliberate policy approach starts with combating the source of AI fear.

Fear of AI is often a response to the bogus claim that it’s beyond the control of humans. The core aspects of developing and deploying AI are the product of decisions made by people just like you and me. What data is available for AI training is subject to choices made by human actors. Laws often prevent certain data from being disclosed and later used for AI training. Technical systems can prevent data from being scraped from the Internet. Norms and business incentives influence what data even gets created and how it is stored and shared.

How and when AI companies release models is a function of human decisions. The structure of the AI market and the demand for AI products are variables that we can all shape, at least indirectly, through our representatives and purchasing decisions.

Integration of AI tools into sensitive contexts, such as schools and hospitals, is wholly a matter of human choices. Leaders and stakeholders of those institutions are anything but powerless when it comes to AI tool adoption. They are free to budget a lot or a little toward the AI tools they purchase. They can dictate what training, if any, their staff must receive before using those tools. They can impose strict procurement standards on any AI tools that may be acquired.

It’s very true that each of us has varying degrees of influence on how AI is developed and deployed, but it’s a dangerous myth that we’ve lost agency at this important societal juncture.

This recognition of our agency is a license to collectively build the tech we want to see, not a mandate to stop its development. A society that acts out of fear defaults to prohibition, sacrificing tangible progress to avoid speculative harms. It chooses scarcity. A confident society, by contrast, establishes the conditions for responsible innovation to flourish, viewing risk not as something to be eliminated, but as something to be managed intelligently in the pursuit of a more abundant future.

The most effective way to foster this environment is not through a new thicket of prescriptive regulations, but through the clarification and modernization of our existing laws and reliance on healthy, competitive markets. Adaptive laws and robust competition have successfully governed centuries of technological change and can do so in the age of AI.

This approach creates powerful incentives for developers to prioritize safety and reliability, not to satisfy a bureaucratic checklist, but because it is the surest path to success in the marketplace. When innovators have a clear understanding of their responsibilities, and consumers are confident that their rights are protected, progress can accelerate. This is the true alternative to a policy of fear: a legal system and marketplace that enables dynamism, demands responsibility, and is squarely focused on unleashing the immense benefits of innovation.


Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and author of the Appleseed AI Substack.
