Avoiding disaster by mandating AI testing


Kevin Frazier will join the Crump College of Law at St. Thomas University as an assistant professor this fall. He is currently a clerk on the Montana Supreme Court.

Bad weather rarely causes a plane to crash — but the low probability of such a crash isn’t because nature lacks the power to send a plane woefully off course. In fact, as recently as 2009, a thunderstorm caused a crash resulting in 228 deaths.

Instead, two main factors explain why bad weather no longer poses an imminent threat to your longevity: First, we’ve improved our ability to detect storms. Second, and most importantly, we’ve acknowledged that the risks of flying through such storms aren’t worth it. The upshot is that when you don’t know where you’re going or whether your plane can get you there, you should either stop or, if possible, postpone the trip until the path is in sight and the plane is flightworthy.

The leaders of AI look a lot like pilots flying through a thunderstorm — they can’t see where they’re headed and they’re unsure of the adequacy of their planes. Before a crash, we need to steer AI development out of the storm and onto a course where everyone, including the general public, can safely and clearly track its progress.

Despite everyone from Sam Altman, the CEO of OpenAI, to Rishi Sunak, the prime minister of the United Kingdom, acknowledging the existential risks posed by AI, some AI optimists are ignoring the warning lights and pushing for continued development. Take Reid Hoffman, for example. Hoffman, the co-founder of LinkedIn, has been “engaged in an aggressive thought-leadership regimen to extol the virtues of A.I.” in recent months in an attempt to push back against those raising red flags, according to The New York Times.

Hoffman and others are engaging in AI both-sides-ism, arguing that though AI development may cause some harm, it will also create societally beneficial outcomes. The problem is that such an approach doesn’t weigh the magnitude of those goods and evils. And, according to individuals as tech savvy as Prime Minister Sunak, those evils may be quite severe. In other words, the good and bad of AI is not an apples-to-apples comparison; it’s more akin to an apples-to-obliterated-oranges situation (the latter referring to the catastrophic outcomes AI may lead to).

No one doubts that AI development in “clear skies” could bring about tremendous good. For instance, it’s delightful to think of a world in which AI replaces dangerous jobs and generates sufficient wealth to fund a universal basic income. The reality is that storm clouds have already gathered. The path to any sort of AI utopia is not only unclear but, more likely, unavailable.

Rather than keep AI development in the air during such conditions, we need to issue a sort of ground stop and test how well different AI tools can navigate the chaotic political, cultural, and economic conditions that define the modern era. This isn’t a call for a moratorium on AI development; that’s already been called for (and ignored). Rather, it’s a call for test flights.

“Model evaluation” is the AI equivalent of such test flights. The good news is that researchers such as Toby Shevlane have outlined specific ways for AI developers to use such evaluations to identify dangerous capabilities and to measure the likelihood that AI tools will cause harm in application. Shevlane calls on AI developers to run these “test flights,” to share their results with external researchers, and to have those results reviewed by an independent, external auditor to assess the safety of deploying an AI tool.

Test flights allow a handful of risk-loving people to try potentially dangerous technology in a controlled setting. Consider that back in 2010 one of Boeing's test flights of its 787 Dreamliner resulted in an onboard fire. Only after detecting and fixing such glitches did the plane become available for commercial use.

There’s a reason we only get on planes that have been tested and that have a fixed destination. We need to mandate test flights for AI development. We also need to determine where we expect AI to take us as a society. AI leaders may claim that it's on Congress to require such testing and planning, but the reality is that those leaders could and should self-impose such requirements.

The Wright Brothers did not force members of the public to test their planes — nor should AI developers.


Read More


'One side will win': The danger of zero-sum framings

Elwood is the author of “Defusing American Anger” and hosts the podcast “People Who Read People.”

Recently, Supreme Court Justice Samuel Alito was surreptitiously recorded at a private event saying, about our political divides, that “one side or the other is going to win.” Many people saw this as evidence of his political bias. In The Washington Post, Perry Bacon Jr. wrote that he disagreed with Alito’s politics but that the justice was “right about the divisions in our nation today.” The subtitle of Bacon’s piece was: “America is in the middle of a nonmilitary civil war, and one side will win.”

It’s natural for people in conflict to see it in “us versus them” terms — as two opposing armies facing off against each other on the battlefield. That’s what conflict does to us: It makes us see things through war-colored glasses.


President Donald Trump and Russian President Vladimir Putin shake hands at the 2019 G20 summit in Osaka, Japan.

Mikhail Svetlov/Getty Images

Trump is a past, present and future threat to national security

Corbin is professor emeritus of marketing at the University of Northern Iowa.

Psychological scientists who study human behavior concur that past actions are the best predictor of future actions. If past actions caused no problem, then all is well. If, however, a person demonstrated poor behavior in the past, well, buckle up. The odds are very high that the person will continue to perform poorly if given the chance.

Donald Trump’s past behavior regarding just one area of protecting American citizens — specifically national defense — tells us that if he becomes the 47th president, we’re in a heap of trouble. Trump’s past national security endeavors need to be seriously examined by Americans before they vote on Nov. 5.


It takes a team

Molineaux is the lead catalyst for American Future, a research project that discovers what Americans prefer for their personal future lives. The research informs community planners with grassroots community preferences. Previously, Molineaux was the president/CEO of The Bridge Alliance.

We love heroic leaders. We admire heroes and trust them to tackle our big problems. In a way, we like the heroes to take care of those problems for us, relieving us of our citizen responsibilities. But what happens when our leaders fail us? How do we replace a heroic leader who has become bloated with ego? Or incompetent?

Heroic leaders are good for certain times and specific challenges, like uniting people against a common enemy. We find their charisma and inspiration compelling. They help us find our courage to tackle things together. We become a team, supporting the hero’s vision.


Former President Donald Trump attends the first day of the 2024 Republican National Convention in Milwaukee on July 15.

Robert Gauthier/Los Angeles Times via Getty Images

A presidential assassination attempt offers a time to reflect

Nye is the president and CEO of the Center for the Study of the Presidency and Congress and a former member of Congress from Virginia.

In the wake of an assassination attempt on an American presidential candidate, we are right to take a moment to reflect on the current trajectory of our politics, as we reject violence as an acceptable path and look for ways to cool the kinds of political rhetoric that might radicalize Americans to the point of normalizing brute force in our politics.

Even though the motivations of the July 13 shooter are yet unclear, it’s worth taking a moment to try to reset ourselves and make an earnest effort to listen to our better angels. However, unless we change the way we reward politicians in our electoral system, it is very likely that the opportunity of this moment to calm our politics will be lost, like many others before it.


American flags fly near the Washington Monument.

Jakub Porzycki/NurPhoto via Getty Images

A personal note to America in troubled times

Harwood is president and founder of The Harwood Institute. This is the latest entry in his series based on the "Enough. Time to Build.” campaign, which calls on community leaders and active citizens to step forward and build together.

I wanted to address Americans after the attempted assassination of former President Donald Trump. Consider this a personal note directly to you (yes, you, the reader!). And know that I have intentionally held off on expressing my thoughts to allow things to settle a bit. There’s already too much noise enveloping our politics and lives.

Like most Americans, I am praying for the former president, his family and all those affected by last weekend’s events. There is no room for political violence in our nation.
