Self-driving cars: A tech miracle or a public safety threat?

[Photo: A self-driving car from Waymo and Jaguar moved through traffic in San Francisco in 2021. Credit: Smith Collection/Gado/Getty Images]

Steven Hill was policy director for the Center for Humane Technology, co-founder of FairVote and political reform director at New America. You can reach him on X @StevenHill1776.

Will self-driving cars transform our transportation infrastructure? For several years we have been hearing that driverless vehicles will be taking over the streets, and that this transportation revolution will be a great thing for consumers as well as society.


Imagine: your own personal robot driver who picks you up and drops you off, 24/7. No more parking woes or falling asleep at the wheel. Leading companies like Tesla and Waymo have claimed that their robo cars are safer than human-driven vehicles. Waymo, a subsidiary of Google parent Alphabet, says its vehicles have logged more than 7 million practice miles on public roads, plus 20 billion miles in “simulation.”

That sounds like a lot until you realize that there are 243 million licensed drivers in the United States who drive on average about 13,500 miles a year. That’s a total of 3.3 trillion miles driven every year.
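For the math-minded, here is the back-of-the-envelope version, using the figures above (a rough estimate, not an official statistic):

$$243{,}000{,}000 \text{ drivers} \times 13{,}500 \text{ miles per year} \approx 3.3 \times 10^{12} \text{ miles per year}$$

Measured against that total, Waymo’s 7 million real-world miles come to roughly two ten-thousandths of one percent of a single year of American driving.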

Will any of these driverless services ever live up to the Silicon Valley hype? It’s one thing to test-drive on a track or in a computer simulation, but the chaos and confusion of real-world streets have proven a greater challenge than the brash entrepreneurs at Waymo, or Tesla’s Elon Musk, will admit. Now that the companies are required to report all accidents, it turns out there have been far more of them than the public knew.

Recently, Waymo became the target of a new federal investigation by the National Highway Traffic Safety Administration (NHTSA). Its investigators flagged nearly two dozen recent incidents in which Waymo vehicles were involved in collisions or exhibited erratic behavior. There have been increasing reports of collisions with stationary objects, such as gates and parked vehicles, as well as violations of traffic laws.

Waymo’s main competitor, Cruise, paused operations entirely after numerous episodes of erratic operation, including an incident in San Francisco in which a Cruise vehicle dragged a pedestrian 20 feet; Cruise then withheld video of the incident from regulators.

Previously, an Uber robo vehicle killed a pedestrian, and San Francisco’s fire chief warned that driverless cars interfered with emergency vehicles nearly 40 times in 2023 alone. In April, NHTSA opened investigations into collisions involving self-driving vehicles run by Amazon-owned Zoox, as well as partially automated driver-assist systems from Tesla and Ford.

NHTSA has identified at least 13 Tesla crashes involving one or more deaths while drivers were using Autopilot, and many more involving serious injuries.

Many members of the public are not happy about this new danger on city streets. In San Francisco, an angry crowd set fire to a Waymo driverless taxi just days after a Waymo car hit a bicyclist. Previously, a Waymo vehicle had struck and killed a dog.

So it looks like industry hype is outracing reality. Part of the issue is one of “trust.” The mottos of Silicon Valley have always been “move fast and break things” and “apologize later.” When that reckless attitude gets applied to robo cars, it’s fair to ask whether these companies should be allowed to act as if our streets are their laboratory and we’re their guinea pigs.

I enjoy my smartphone and a few techno-trinkets as much as anyone, and certainly many new technologies can bring welcome benefits. But I remember back in 2017 when several tech companies and investors revealed their latest shiny object — flying cars. Uber announced that it would be piloting an aerial taxi service in Los Angeles by 2020. At the time Uber was losing billions of dollars because it used predatory pricing to subsidize each ride as a way to monopolize the market and drive out competitors (including public transportation).

Yet the media lapped it up, even though Uber didn’t have so much as a working prototype for a service in which the equivalent of a fender-bender in the air would mean death. Unsurprisingly, the Jetsons’ taxi never took off.

Silicon Valley’s dirty little secret is that seven out of 10 start-ups fail and nine out of 10 never earn a profit. Silicon Valley is a casino where investors roll the dice, so entrepreneurs often feel pressured to sound like circus impresario P.T. Barnum, over-hyping their latest show.

Don’t get me wrong: the fact that these vehicles can self-drive at all is a marvel. And Waymo counters that 40,000 people are killed by human-driven vehicles every year. But that figure is misleading on its own, because those deaths are spread across 3.3 trillion miles of driving. How will society decide the threshold at which robo cars are deemed safer than humans?
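One rough way to frame that threshold, using the article’s own figures (a back-of-the-envelope estimate, not an official benchmark):

$$\frac{40{,}000 \text{ deaths}}{3.3 \times 10^{12} \text{ miles}} \approx 1.2 \text{ deaths per } 100 \text{ million miles}$$

At that human baseline, a fleet with only 7 million real-world miles would be expected to see well under one fatality even if it drove no better than people do, so the accumulated data can’t yet settle the safety question either way.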

Maybe these companies should have to create a test city in the desert and experiment there. Currently, the limited abilities of robo cars make them suitable for a Disney World ride, or for shuttles on a university campus or in an industrial park, where a vehicle can safely drive the same repetitive route. Or perhaps they could be used as long-distance delivery trucks, which would only have to drive straight down an interstate; at the city limits, a human driver could take over and bring the cargo into the city.

Instead, regulators have mostly been hands-off, with California recently allowing scandal-plagued Waymo to expand in Los Angeles. The Waymo-ification of our streets seems to be proceeding against all common sense, even as its actual benefits remain elusive.

