Fear of AI Makes for Bad Policy

Opinion


Fear is the worst possible response to AI. Actions taken out of fear are rarely a good thing, especially when it comes to emerging technology. Empirically driven scrutiny, on the other hand, is a savvy and necessary reaction to technologies like AI that introduce great benefits and harms. The difference lies in whether policy is driven by emotion or by ongoing, rigorous evaluation.

A few reminders of tech policy gone wrong, due at least in part to fear, help make this point clear. Fear is what has led the US to become a laggard in nuclear energy, while many of our allies and adversaries enjoy cheaper, more reliable energy. Fear is what explains opposition to autonomous vehicles in some communities, while human drivers are responsible for 120 deaths per day, as of 2022. Fear is what sustains delays in making drones more broadly available, even though many other countries are tackling issues like rural access to key medicine via drones.


Again, this is not to say that new technology should automatically be treated as trustworthy, nor that individuals may not have some emotional response when a new creation is introduced into the world. It’s human nature to be skeptical, and perhaps even scared, of the new. But to allow those emotions to rob us of our agency and to dictate our policy is a step too far. Yet that’s where much of AI policy seems headed.

State legislatures have rushed forward with AI bills that aim to put this technology back in the bottle and freeze the status quo in amber. Bans on AI therapy tools, limitations on AI companions, and related legislation are understandable when viewed from an emotional perspective. Following the social media era, it’s unsurprising that many of us feel disgust, anger, sadness, and unease at the idea of our kids again jumping on platforms of unknown capabilities and effects. Count me among those who are worried about helping our kids (and adults) navigate the Intelligence Age. But those emotions should not excessively steer our policy response to AI. Close scrutiny of AI can ensure that policy does not produce unintended consequences, such as denying children the use of AI tools that could actually improve their physical and mental health.

The path to this more deliberate policy approach starts with combating the source of AI fear.

Fear of AI is often a response to the bogus claim that it’s beyond the control of humans. The core aspects of developing and deploying AI are the product of decisions made by people just like you and me. What data is available for AI training is subject to choices made by human actors. Laws often prevent certain data from being disclosed and later used for AI training. Technical systems can prevent data from being scraped from the Internet. Norms and business incentives influence what data even gets created and how it is stored and shared.

How and when AI companies release models is a function of human decisions. The structure of the AI market and the demand for AI products are variables that we can all shape, at least indirectly, through our representatives and purchasing decisions.

Integration of AI tools into sensitive contexts, such as schools and hospitals, is wholly a matter of human choices. Leaders and stakeholders of those institutions are anything but powerless when it comes to AI tool adoption. These folks are free to budget a lot or a little toward the AI tools they purchase. They can dictate what training, if any, their staff must receive before using those tools. They can impose strict procurement standards for any AI tools they acquire.

It’s very true that each of us has varying degrees of influence on how AI is developed and deployed, but it’s a dangerous myth that we’ve lost agency at this important societal juncture.

This recognition of our agency is a license to collectively build the tech we want to see, not a mandate to stop its development. A society that acts out of fear defaults to prohibition, sacrificing tangible progress to avoid speculative harms. It chooses scarcity. A confident society, by contrast, establishes the conditions for responsible innovation to flourish, viewing risk not as something to be eliminated, but as something to be managed intelligently in the pursuit of a more abundant future.

The most effective way to foster this environment is not through a new thicket of prescriptive regulations, but through the clarification and modernization of our existing laws and reliance on healthy, competitive markets. Adaptive laws and robust competition have successfully governed centuries of technological change and can do so in the age of AI.

This approach creates powerful incentives for developers to prioritize safety and reliability, not to satisfy a bureaucratic checklist, but because it is the surest path to success in the marketplace. When innovators have a clear understanding of their responsibilities, and consumers are confident that their rights are protected, progress can accelerate. This is the true alternative to a policy of fear: a legal system and marketplace that enables dynamism, demands responsibility, and is squarely focused on unleashing the immense benefits of innovation.


Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and author of the Appleseed AI Substack.

