Self-driving cars: A tech miracle or a public safety threat?


A self-driving car from Waymo and Jaguar moved through traffic in San Francisco in 2021.

Smith Collection/Gado/Getty Images

Steven Hill was policy director for the Center for Humane Technology, co-founder of FairVote and political reform director at New America. You can reach him on X @StevenHill1776.

Will self-driving cars transform our transportation infrastructure? For several years we have been hearing that driverless vehicles will be taking over the streets, and that this transportation revolution will be a great thing for consumers as well as society.


Imagine: your own personal robot driver who picks you up and drops you off, 24/7. No more parking woes or falling asleep at the wheel. Leading companies like Tesla and Waymo have claimed that their robo cars are safer than human-driven vehicles. Waymo, a subsidiary of Google parent Alphabet, says its vehicles have logged more than 7 million practice miles on public roads, plus 20 billion miles in “simulation.”

That sounds like a lot until you realize that there are 243 million licensed drivers in the United States who drive on average about 13,500 miles a year. That’s a total of 3.3 trillion miles driven every year.
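
A back-of-the-envelope comparison using those same figures: 243,000,000 drivers × 13,500 miles each ≈ 3.3 trillion miles per year. Against that baseline, Waymo’s 7 million real-world miles come to roughly two-millionths of a single year of American driving.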

Will any of these driverless services ever live up to the Silicon Valley hype? It’s one thing to test-drive on a track or a computer simulation, but the chaos and confusion of streets in the real world have proven to be a greater challenge than the brash entrepreneurs at Waymo or Tesla’s Elon Musk will admit. Now that the companies are required to report all accidents, it turns out there have been a lot more of them than the public knew.

Recently Waymo became the target of a new federal investigation by the National Highway Traffic Safety Administration. Its investigators flagged nearly two dozen recent incidents in which Waymo vehicles were involved in collisions or exhibited erratic behavior. There have been increasing reports of collisions with stationary objects, such as gates and parked vehicles, as well as violations of traffic laws.

Waymo’s main competitor, Cruise, paused operations entirely after numerous episodes of erratic driving, including one in San Francisco in which a Cruise vehicle dragged a pedestrian 20 feet; Cruise then withheld video of the incident from regulators.

Previously, an Uber robo vehicle killed a pedestrian, and San Francisco’s fire chief warned that driverless cars interfered with emergency vehicles nearly 40 times in 2023 alone. In April, the NHTSA opened investigations into collisions involving self-driving vehicles run by Amazon-owned Zoox, as well as partially automated driver-assist systems from Tesla and Ford.

NHTSA has identified at least 13 Tesla crashes involving one or more deaths while drivers were using Autopilot, and many more involving serious injuries.

Many members of the public are not happy about this new danger on city streets. In San Francisco, an angry crowd set fire to a Waymo driverless taxi just days after a Waymo car hit a bicyclist. Previously, a Waymo vehicle had struck and killed a dog.

So it looks like industry hype is outracing reality. Part of the issue is one of “trust.” Silicon Valley’s mottos have always been “move fast and break things” and “apologize later.” When that reckless attitude gets applied to robo cars, it’s fair to ask whether it is OK that these companies act as if our streets are their laboratory and we’re their guinea pigs.

I enjoy my smartphone and a few techno-trinkets as much as anyone, and certainly many new technologies can bring welcome benefits. But I remember back in 2017 when several tech companies and investors revealed their latest shiny object — flying cars. Uber announced that it would be piloting an aerial taxi service in Los Angeles by 2020. At the time Uber was losing billions of dollars because it used predatory pricing to subsidize each ride as a way to monopolize the market and drive out competitors (including public transportation).

Yet the media lapped it up, even though Uber didn’t even have a working prototype for a service where the equivalent of a fender-bender in the air would be death. Unsurprisingly, the Jetsons’ taxi never took off.

Silicon Valley’s dirty little secret is that seven out of 10 start-ups fail and nine out of 10 never earn a profit. Silicon Valley is a casino where investors roll the dice, so entrepreneurs often feel pressured to sound like circus impresario P.T. Barnum, over-hyping their latest show.

Don’t get me wrong: the fact that these vehicles can self-drive at all is a marvel. And Waymo counters that 40,000 people are killed by human-driven vehicles every year. But that raw number is misleading, because those deaths are spread across 3.3 trillion miles. How will society decide the threshold at which robo cars are deemed safer than humans?
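
Spelling out the math behind that caveat, using the figures above: 40,000 deaths ÷ 3.3 trillion miles ≈ 1.2 deaths per 100 million miles driven. That rate, not the raw toll, is the human benchmark a driverless fleet would have to beat, and proving it credibly would take far more autonomous miles than have been logged to date.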

Maybe these companies should have to create a test city in the desert and experiment there. Currently, the limited abilities of robo cars make them suitable for a Disney World ride, or for shuttles on a university campus or in an industrial park, where a vehicle can safely drive the same repetitive route. Or perhaps they could serve as long-haul delivery trucks that only have to drive straight down an interstate, with a human taking over at the city limits.

Instead, regulators mostly have been hands-off, with California recently allowing scandal-plagued Waymo to expand in Los Angeles. The Waymo-ification of our streets seems to be proceeding against all common sense, even as its actual benefits remain elusive.


Read More


AI-powered wellness tools promise care at work, but raise serious questions about consent, surveillance, and employee autonomy.

Getty Images, d3sign

Why Workplace Wellbeing AI Needs a New Ethics of Consent

Across the U.S. and globally, employers—including corporations, healthcare systems, universities, and nonprofits—are increasing investment in worker well-being. The global corporate wellness market reached $53.5 billion in sales in 2024, with North America leading adoption. Corporate wellness programs now use AI to monitor stress, track burnout risk, or recommend personalized interventions.

Vendors offering AI-enabled well-being platforms, chatbots, and stress-tracking tools are rapidly expanding. Chatbots such as Woebot and Wysa are increasingly integrated into workplace wellness programs.

Facebook launches voting resource tool

Meta Undermining Trust but Verify through Paid Links

Facebook is testing limits on shared external links, turning them into a paid feature through its Meta Verified program, which costs $14.99 per month.

This change solidifies that verification badges are now meaningless signifiers. Yet it wasn’t always so; the verified internet was built to support participation and trust. Beginning with Twitter’s verification program, launched in 2009, a checkmark next to a username indicated that an account had been verified to represent a notable person or the official account of a business. We could believe that an elected official or a brand was who they said they were online. When Twitter Blue, and later X Premium, began selling blue checkmarks in November 2022, the visual signal of verification became deceptive. Think fake Eli Lilly accounts posting about free insulin, and accounts impersonating Elon Musk himself.

This week’s move by Meta echoes those changes at Twitter/X, despite significant evidence that they left information quality and user experience worse than before. Whatever Facebook says, all a badge now tells anyone is that you paid.


Rather than blame AI for young Americans struggling to find work, we need to build: build new educational institutions, new retraining and upskilling programs, and, most importantly, new firms.

Surasak Suwanmake/Getty Images

Blame AI or Build With AI? Only One Approach Creates Jobs

We’re failing young Americans. Many of them are struggling to find work. Unemployment among 16- to 24-year-olds topped 10.5% in August, and even those who do find a job are often settling for lower-paying roles: more than 50% of college grads are underemployed. To make matters worse, the path to a more stable, lucrative career is seemingly up in the air. High school grads in their twenties find jobs at nearly the same rate as those with four-year degrees.

We have two options: blame or build. The first involves blaming AI, as if this new technology were entirely responsible for the economic malaise facing Gen Z. That course of action means slowing or even stopping AI adoption. For example, there are so-called robot taxes: the thinking goes that placing financial penalties on firms that lean into AI will leave more roles for Gen Z and workers in general. Then there’s the idea of banning or limiting the use of AI in hiring and firing decisions; applicants who have struggled to find work suggest that increased use of AI may be partly at fault. Others have called for giving workers a greater say in whether and to what extent their firm uses AI, which may help firms integrate AI in ways that augment workers rather than replace them.


A visual representation of deepfake and disinformation concepts, featuring various related keywords in green on a dark background, symbolizing the spread of false information and the impact of artificial intelligence.

Getty Images

Parv Mehta Is Leading the Fight Against AI Misinformation

At a moment when the country is grappling with the civic consequences of rapidly advancing technology, Parv Mehta stands out as one of the most forward‑thinking young leaders of his generation. Recognized as one of the 500 Gen Zers named to the 2025 Carnegie Young Leaders for Civic Preparedness cohort, Mehta represents the kind of grounded, community‑rooted innovator the program was designed to elevate.

A high school student from Washington state, Parv has emerged as a leading youth voice on the dangers of artificial intelligence and deepfakes. He recognized early that his generation would inherit a world where misinformation spreads faster than truth—and where young people are often the most vulnerable targets. Motivated by years of computer science classes and a growing awareness of AI’s risks, he launched a project to educate students across Washington about deepfake technology, media literacy, and digital safety.
