
Opinion

The Biggest Obstacle to Safer Roads Isn't Technology, It's Politics

A 3D generated image of modern vehicles with AI assistance. (Getty Images, gremlin)

Let’s be honest: does driving feel safe anymore? Ask anyone navigating the daily commute, especially in notoriously chaotic places like Austin or Miami, and you’ll likely hear a frustrated, perhaps even expletive-laden, "No!" That gut feeling isn't paranoia; it's backed by grim statistics. More than 200 people died on Travis County roads in 2023 alone, according to Vision Zero ATX, and nationally, tens of thousands perish each year in preventable crashes. It's a relentless public health crisis we've somehow numbed ourselves to, with a staggering cost measured in shattered families and lost potential.

But imagine a different reality, one where that daily fear evaporates. What if I told you that the technology to dramatically reduce this carnage isn't science fiction but sitting right under our noses? Autonomous vehicles (AVs), or self-driving cars, are here and rapidly improving. Leveraging breakthroughs in AI, these vehicles are increasingly outperforming human drivers, proving to be significantly less likely to cause accidents, especially those resulting in injury. Studies suggest that replacing human drivers with AVs could drastically cut road fatalities. Even achieving just 10% AV penetration on our roads might improve traffic safety by as much as 50%, with those gains likely to grow exponentially as the technology becomes more sophisticated and widespread.


The benefits extend far beyond preventing crashes. Many AVs are electric or designed for fuel efficiency, promising cleaner air. They can reduce frustrating traffic congestion by communicating and coordinating movement. Perhaps most profoundly, AVs offer the potential for unprecedented mobility and freedom for millions—the elderly who can no longer drive safely, people with disabilities who face transportation barriers, or even just reclaiming hours lost to stressful commutes.

Given this potential, you'd think we'd be rolling out the red carpet for AVs. Private companies are certainly betting big, pouring billions into research and development. Fleets of robotaxis are already operating, albeit cautiously, in cities across the country such as Austin, Miami, and San Francisco. Yet, the transformative leap—widespread adoption that truly moves the needle on national safety statistics—remains frustratingly out of reach. Why the delay?

Ironically, the biggest roadblocks aren't primarily technological anymore. They are political, regulatory, and societal. We currently face a chaotic mess of differing state and local AV regulations—a regulatory traffic jam that makes large-scale deployment a nightmare. How can a car designed to cross state lines operate effectively if the definition of "driver" or the rules for operation change every few hundred miles? This regulatory uncertainty chills investment and forces companies into limited, geographically constrained testing, which slows down the learning process that is essential for improving AV safety and reliability across all driving conditions. Add to this a healthy dose of public skepticism, fueled by unfamiliarity and amplified by media coverage that dwells on glitches rather than the millions of miles driven safely.

This is precisely where government leadership becomes critical. And I argue it's not just a good idea; it's a constitutional obligation. The federal government has a fundamental duty, rooted in the Constitution itself, to actively promote technologies that significantly advance public safety and well-being.

This duty isn't theoretical; it's embedded in the very DNA of our nation. The Constitution's Preamble explicitly states a core purpose to "promote the general Welfare". This wasn't just hopeful rhetoric. The Founders drafted the Constitution because the previous government under the Articles of Confederation was demonstrably ineffective—unable to manage national defense, economic stability, or even internal order. They intentionally created a stronger federal government capable of tackling big, collective problems for the common good. This implies what some scholars call a "right to effective government"—a right to expect our government to use its powers competently to protect us and improve our lives, especially when individual or market actions fall short.

Protecting citizens from widespread, preventable harm like mass traffic fatalities falls squarely within this duty. We've seen the government fulfill this role before. Remember the fight over seat belts? Initially appearing in the 1930s, they faced decades of resistance from manufacturers arguing that "safety didn't sell" and from public pushback against the mandates. It took years of advocacy and eventual government action—federal standards pushing states to enact laws—to make seat belts ubiquitous. The delay undoubtedly cost countless lives. Since 1975, those belts have saved nearly 375,000 lives.

Federal inaction on AVs today risks repeating that tragic history, sacrificing safety on the altar of regulatory timidity. The National Highway Traffic Safety Administration (NHTSA) has the explicit authority to set Federal Motor Vehicle Safety Standards (FMVSS) to protect the public from "unreasonable risk". It’s time they used that authority to create clear, uniform, national standards for AVs, providing the roadmap that the automotive industry needs.

Now, let's acknowledge the real concerns surrounding AVs. Data privacy is paramount—cars packed with sensors could collect vast amounts of personal information. We need robust regulations to ensure this data isn't misused or exploited. The transition will inevitably impact jobs, particularly in trucking and transportation, requiring proactive policies for worker retraining and support. Ensuring equitable deployment is vital, so AV benefits can reach rural communities and lower-income individuals, not just affluent city dwellers. Environmental impacts also need careful management to ensure that AVs lead to a net reduction in emissions. These are serious challenges requiring thoughtful, proactive policy responses that are developed *alongside* AV deployment, not as barriers to it. They do not, however, outweigh the moral imperative to prevent tens of thousands of deaths each year.

We face a clear choice. We can continue down the current path, accepting the horrific and largely preventable toll of human driving errors as well as allowing fragmented regulations and unfounded fears to stall progress indefinitely. Or, we can embrace the promise of AV technology and demand our government fulfill its most basic constitutional obligation: to act effectively to safeguard our lives and promote general welfare. This requires decisive federal leadership now—setting clear national standards, facilitating safe and widespread testing to build public trust, and creating policies that manage the transition responsibly. The technology to save these lives is within reach. It’s time our policies caught up.


This is a summary of "A Constitutional Mandate to Adopt AVs," originally published in the Washington and Lee Law Review Online.

Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and Author of the Appleseed AI substack.

