Avoiding disaster by mandating AI testing


Kevin Frazier will join the Crump College of Law at St. Thomas University as an Assistant Professor starting this Fall. He currently is a clerk on the Montana Supreme Court.

Bad weather rarely causes a plane to crash — but the low probability of such a crash isn’t because nature lacks the power to send a plane woefully off course. In fact, as recently as 2009, a thunderstorm caused a crash resulting in 228 deaths.

Instead, two main factors explain why bad weather no longer poses an imminent threat to your longevity: first, we’ve improved our ability to detect storms; second, and most importantly, we’ve acknowledged that the risks of flying through such storms aren’t worth it. The upshot is that when you don’t know where you’re going, or whether your plane can get you there, you should either stop or, if possible, postpone the trip until the path is in sight and the plane is airworthy.

The leaders of AI look a lot like pilots flying through a thunderstorm — they can’t see where they’re headed and they’re unsure of the adequacy of their planes. Before a crash, we need to steer AI development out of the storm and onto a course where everyone, including the general public, can safely and clearly track its progress.

Despite everyone from Sam Altman, the CEO of OpenAI, to Rishi Sunak, the Prime Minister of the UK, acknowledging the existential risks posed by AI, some AI optimists are ignoring the warning lights and pushing for continued development. Take Reid Hoffman, for example. Hoffman, the co-founder of LinkedIn, has in recent months been “engaged in an aggressive thought-leadership regimen to extol the virtues of A.I” in an attempt to push back against those raising red flags, according to The New York Times.

Hoffman and others are engaging in AI both-sides-ism, arguing that though AI development may cause some harm, it will also create societally beneficial outcomes. The problem is that such an approach doesn’t weigh the magnitude of those goods and evils. And, according to individuals as tech-savvy as Prime Minister Sunak, those evils may be quite severe. In other words, comparing the good and bad of AI is not an apples-to-apples exercise; it’s more akin to comparing apples to obliterated oranges, the latter being the catastrophic outcomes AI may lead to.

No one doubts that AI development in “clear skies” could bring about tremendous good. For instance, it’s delightful to think of a world in which AI replaces dangerous jobs and generates sufficient wealth to fund a universal basic income. But the reality is that storm clouds have already gathered. The path to any sort of AI utopia is not only unclear but, more likely, unavailable.

Rather than keep AI development in the air during such conditions, we need to issue a sort of ground stop and test how well different AI tools can navigate the chaotic political, cultural, and economic conditions that define the modern era. This isn’t a call for a moratorium on AI development -- that’s already been called for (and ignored). Rather, it’s a call for test flights.

“Model evaluation” is the AI equivalent of such test flights. The good news is that researchers such as Toby Shevlane have outlined specific ways for AI developers to use such evaluations to identify dangerous capabilities and measure the likelihood that AI tools will cause harm when deployed. Shevlane calls on AI developers to run these “test flights,” to share their results with external researchers, and to have those results reviewed by an independent, external auditor before deciding whether an AI tool is safe to deploy.

Test flights allow a handful of risk-loving people to try potentially dangerous technology in a controlled setting. Consider that back in 2010, one of Boeing’s test flights of its 787 Dreamliner resulted in an onboard fire. Only after detecting and fixing such glitches did the plane enter commercial service.

There’s a reason we only get on planes that have been tested and that have a fixed destination. We need to mandate test flights for AI development. We also need to determine where we expect AI to take us as a society. AI leaders may claim that it's on Congress to require such testing and planning, but the reality is that those leaders could and should self-impose such requirements.

The Wright Brothers did not force members of the public to test their planes — nor should AI developers.
