Opinion

Fear of AI Makes for Bad Policy

Getty Images

Fear is the worst possible response to AI. Actions taken out of fear are rarely a good thing, especially when it comes to emerging technology. Empirically driven scrutiny, on the other hand, is a savvy and necessary reaction to technologies like AI that introduce great benefits and harms. The difference lies in whether policy is driven by emotion or by ongoing, rigorous evaluation.

A few reminders of tech policy gone wrong, due, at least in part, to fear, help make this point clear. Fear is what has led the US to become a laggard in nuclear energy, while many of our allies and adversaries enjoy cheaper, more reliable energy. Fear is what explains opposition to autonomous vehicles in some communities, while human drivers are responsible for 120 deaths per day, as of 2022. Fear is what sustains delays in making drones more broadly available, even though many other countries are tackling issues like rural access to key medicine via drones.


Again, this is not to say that new technology should automatically be treated as trustworthy, nor that individuals may not have some emotional response when a new creation is introduced into the world. It’s human nature to be skeptical and perhaps even scared of the new and novel. But to allow those emotions to rob us of our agency and to dictate our policy is a step too far. Yet, that’s where much of AI policy seems headed.

State legislatures have rushed forward with AI bills that aim to put this technology back in the bottle and freeze the status quo in amber. Bans on AI therapy tools, limitations on AI companions, and related legislation are understandable when viewed from an emotional perspective. Following the social media era, it’s unsurprising that many of us feel disgust, anger, sadness, and unease at the idea of our kids again jumping on platforms of unknown capabilities and effects. Count me among those worried about helping our kids (and adults) navigate the Intelligence Age. But those emotions should not excessively steer our policy response to AI. Through close scrutiny of AI, we can make sure that policy does not result in unintended consequences, such as denying children the use of AI tools that could actually improve their physical and mental health.

The path to this more deliberate policy approach starts with combating the source of AI fear.

Fear of AI is often a response to the bogus claim that it’s beyond the control of humans. The core aspects of developing and deploying AI are the product of decisions made by people just like you and me. What data is available for AI training is subject to choices made by human actors. Laws often prevent certain data from being disclosed and later used for AI training. Technical systems can prevent data from being scraped from the Internet. Norms and business incentives influence what data even gets created and how it is stored and shared.

How and when AI companies release models is a function of human decisions. The structure of the AI market and the demand for AI products are variables that we can all shape, at least in part, through our representatives and purchasing decisions.

Integration of AI tools into sensitive contexts, such as schools and hospitals, is wholly a matter of human choices. Leaders and stakeholders of those institutions are anything but powerless when it comes to AI tool adoption. These folks are free to budget a lot or a little toward what AI tools they purchase. They can dictate what training, if any, their staff needs to receive before using those tools. They can impose strict procurement standards for any AI tools that can be acquired.

It’s very true that each of us has varying degrees of influence on how AI is developed and deployed, but it’s a dangerous myth that we’ve lost agency at this important societal juncture.

This recognition of our agency is a license to collectively build the tech we want to see, not a mandate to stop its development. A society that acts out of fear defaults to prohibition, sacrificing tangible progress to avoid speculative harms. It chooses scarcity. A confident society, by contrast, establishes the conditions for responsible innovation to flourish, viewing risk not as something to be eliminated, but as something to be managed intelligently in the pursuit of a more abundant future.

The most effective way to foster this environment is not through a new thicket of prescriptive regulations, but through the clarification and modernization of our existing laws and reliance on healthy, competitive markets. Adaptive laws and robust competition have successfully governed centuries of technological change and can do so in the age of AI.

This approach creates powerful incentives for developers to prioritize safety and reliability, not to satisfy a bureaucratic checklist, but because it is the surest path to success in the marketplace. When innovators have a clear understanding of their responsibilities, and consumers are confident that their rights are protected, progress can accelerate. This is the true alternative to a policy of fear: a legal system and marketplace that enables dynamism, demands responsibility, and is squarely focused on unleashing the immense benefits of innovation.


Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and author of the Appleseed AI Substack.

Read More

Affordability Crisis and AI: Kelso’s Universal Capitalism

Rising costs, AI disruption, and inequality revive interest in Louis Kelso’s “universal capitalism” as a market-based answer to the affordability crisis.

Getty Images, J Studios

“Affordability,” shorthand for concern over the cost of living, has been in the news a lot lately. It’s popping up in political campaigns, from the governor’s races in New Jersey and Virginia to the mayor’s races in New York City and Seattle. President Donald Trump calls the term a “hoax” and a “con job” by Democrats, and it’s true that the inflation rate hasn’t increased much since Trump began his second term in January.

But a number of reports show Americans are struggling with high costs for essentials like food, housing, and utilities, leaving many families feeling financially pinched. Total consumer spending over the Black Friday-Thanksgiving weekend buying binge actually increased this year, but a Salesforce study found that’s because prices were about 7% higher than last year’s blitz. Consumers actually bought 2% fewer items at checkout.

Censorship Should Be Obsolete by Now. Why Isn’t It?

Greggory DiSalvo/Getty Images

Techies, activists, and academics were in Paris this month to confront the doom scenario of internet shutdowns, developing creative technology and policy solutions to break out of heavily censored environments. The event, SplinterCon, has previously been held globally, from Brussels to Taiwan. I am on the programme committee and delivered a keynote at the inaugural SplinterCon in Montreal on how internet standards must be better designed for censorship circumvention.

Censorship and digital authoritarianism were exposed in dozens of countries in the recently published Freedom on the Net report. For example, Russia has pledged to provide “sovereign AI,” a strategy that will surely extend its network blocks on “a wide array of social media platforms and messaging applications, urging users to adopt government-approved alternatives.” The UK joined Vietnam, China, and a growing number of states requiring “age verification,” the use of government-issued identification cards, to access internet services, which the report calls “a crisis for online anonymity.”


Panic-driven legislation—from airline safety to AI bans—often backfires, and evidence must guide policy.

Getty Images, J Studios

Beware of Panic Policies

"As far as human nature is concerned, with panic comes irrationality." This simple statement by Professor Steve Calandrillo and Nolan Anderson has profound implications for public policy. When panic is highest, and demand for reactive policy is greatest, that's exactly when we need our lawmakers to resist the temptation to move fast and ban things. Yet, many state legislators are ignoring this advice amid public outcries about the allegedly widespread and destructive uses of AI. Thankfully, Calandrillo and Anderson have identified a few examples of what I'll call "panic policies" that make clear that proposals forged by frenzy tend not to reflect good public policy.

Let's turn first to a proposal in November of 2001 from the American Academy of Pediatrics (AAP). For obvious reasons, airline safety was subject to immense public scrutiny at the time. The AAP responded with what may sound like a good idea: require all infants to have their own seat and, by extension, their own seat belt on planes. The existing policy permitted parents to simply put their kid, so long as they were under two, on their lap. Essentially, babies flew for free.

The Federal Aviation Administration (FAA) permitted this based on a pretty simple analysis: the risks to young kids without seat belts on planes were far less than the risks they would face if they were instead traveling by car. Put differently, if parents faced higher prices to travel by air, they'd turn to the road as the best way to get from A to B. As we all know (perhaps with the exception of the AAP at the time), airline travel is tremendously safer than travel by car. Nevertheless, the AAP forged ahead with its proposal. In fact, it did so despite admitting it was unsure whether the higher mortality risk for children under two in plane crashes was due to the lack of a seat belt or to the fact that such children are simply fragile.

Will Generative AI Robots Replace Surgeons?

Generative AI and surgical robotics are advancing toward autonomous surgery, raising new questions about safety, regulation, payment models, and trust.

Getty Images, Luis Alvarez

In medicine’s history, the best technologies didn’t just improve clinical practice. They turned traditional medicine on its head.

For example, advances like CT, MRI, and ultrasound machines did more than merely improve diagnostic accuracy. They diminished the importance of the physical exam and the physicians who excelled at it.
