Avoiding disaster by mandating AI testing


Kevin Frazier will join the Crump College of Law at St. Thomas University as an Assistant Professor starting this Fall. He currently is a clerk on the Montana Supreme Court.

Bad weather rarely causes a plane to crash — but the low probability of such a crash isn’t because nature lacks the power to send a plane woefully off course. In fact, as recently as 2009, a thunderstorm caused a crash resulting in 228 deaths.

Instead, two main factors explain why bad weather no longer poses an imminent threat to your longevity: first, we’ve improved our ability to detect storms. Second, and most importantly, we’ve acknowledged that the risks of flying through such storms aren’t worth it. The upshot is that when you don’t know where you’re going, or whether your plane can get you there, you should either stop or, if possible, postpone the trip until the path is in sight and the plane is airworthy.

The leaders of AI look a lot like pilots flying through a thunderstorm — they can’t see where they’re headed and they’re unsure of the adequacy of their planes. Before a crash, we need to steer AI development out of the storm and onto a course where everyone, including the general public, can safely and clearly track its progress.

Despite everyone from Sam Altman, the CEO of OpenAI, to Rishi Sunak, the Prime Minister of the UK, acknowledging the existential risks posed by AI, some AI optimists are ignoring the warning lights and pushing for continued development. Take Reid Hoffman, for example. Hoffman, the co-founder of LinkedIn, has been “engaged in an aggressive thought-leadership regimen to extol the virtues of A.I.” in recent months in an attempt to push back against those raising red flags, according to The New York Times.

Hoffman and others are engaging in AI both-sides-ism, arguing that though AI development may cause some harm, it will also create societally beneficial outcomes. The problem is that such an approach doesn’t weigh the magnitude of those goods and evils. And, according to individuals as tech savvy as Prime Minister Sunak, those evils may be quite severe. In other words, the good and bad of AI is not an apples-to-apples comparison; it’s more akin to an apples-to-obliterated-oranges situation (the latter referring to the catastrophic outcomes AI may lead to).

No one doubts that AI development in “clear skies” could bring about tremendous good. For instance, it’s delightful to think of a world in which AI replaces dangerous jobs and generates sufficient wealth to fund a universal basic income. The reality is that storm clouds have already gathered. The path to any sort of AI utopia is not only unclear but, more likely, unavailable.

Rather than keep AI development in the air during such conditions, we need to issue a sort of ground stop and test how well different AI tools can navigate the chaotic political, cultural, and economic conditions that define the modern era. This isn’t a call for a moratorium on AI development; that’s already been called for (and ignored). Rather, it’s a call for test flights.

“Model evaluation” is the AI equivalent of such test flights. The good news is that researchers such as Toby Shevlane have outlined specific ways for AI developers to use such evaluations to identify dangerous capabilities and to measure the likelihood that AI tools will cause harm when deployed. Shevlane calls on AI developers to run these “test flights,” to share their results with external researchers, and to have those results reviewed by an independent, external auditor to assess the safety of deploying an AI tool.

Test flights allow a handful of risk-loving people to try potentially dangerous technology in a controlled setting. Consider that back in 2010 one of Boeing's test flights of its 787 Dreamliner resulted in an onboard fire. Only after detecting and fixing such glitches did the plane become available for commercial use.

There’s a reason we only get on planes that have been tested and that have a fixed destination. We need to mandate test flights for AI development. We also need to determine where we expect AI to take us as a society. AI leaders may claim that it's on Congress to require such testing and planning, but the reality is that those leaders could and should self-impose such requirements.

The Wright Brothers did not force members of the public to test their planes — nor should AI developers.

Read More

Mary Kenion on Homelessness: Policy, Principles, and Solutions

I had the opportunity to speak with Mary Kenion, the Chief Equity Officer at the National Alliance to End Homelessness. The NAEH, in her words, is a non-profit organization with a “deceptively simple mission: to end homelessness in America.” We discussed policy trends that could worsen the crisis, particularly around Medicaid; the recent Executive Order regarding vagrancy and the mentally ill; and, finally, why this matters both as practical policy and as a reflection of our national character and moral principles.

The NAEH cooperates with specialists to guide research efforts and serve in leadership roles; they also have a team of “lived experience advisors.”

Princeton Gerrymandering Project Gives California Prop 50 an ‘F’

The special election for California Prop 50 wraps up November 4 and recent polling shows the odds strongly favor its passage. The measure suspends the state’s independent congressional map for a legislative gerrymander that Princeton grades as one of the worst in the nation.

The Princeton Gerrymandering Project developed a “Redistricting Report Card” that takes partisan and racial performance metrics from all 50 states and converts them into grades for partisan fairness, competitiveness, and geographic features.

California’s teacher shortage highlights inequities in teacher education. Supporting and retaining teachers of color starts with racially just TEPs.

There’s a Shortage of Teachers of Color—Support Begins in Preservice Education

The LAist reported a shortage of teachers in Southern California, and especially a shortage of teachers of color. In California, almost 80% of public school students are students of color, while 64.4% of teachers are white. (Nationally, 80% of teachers are white, and over 50% of public school students are students of color.) The article suggests that supporting and retaining teachers requires an investment in teacher candidates (TCs), mostly through full funding, given that many teachers can’t afford costly, fast-paced teacher education programs (TEPs) that leave no time to work for extra income. Ensuring affordability for these programs in order to recruit and sustain teachers, especially teachers of color, is absolutely critical, but TEPs must consider additional supports, including culturally relevant curriculum, faculty of color whom candidates can trust, and space for candidates to build community among themselves.

Hundreds of thousands of aspiring teachers enroll in TEPs, yet preservice teachers of color are a clear minority. A study revealed that 48 U.S. states and Washington, D.C., have higher percentages of white TCs than they do white public-school students. Furthermore, in 35 of the programs with enrollment of 400 or more, 90% of enrollees were white. Scholar Christine Sleeter declared an “overwhelming presence of whiteness” in teacher education, and expert Cheryl Matias discussed how TEPs generate “emotionalities of whiteness,” meaning feelings such as guilt and defensiveness in white people that might lead people of color to protect white comfort instead of addressing the root issues and manifestations of racism.

As threats to democracy rise, Amherst College faculty show how collective action and courage within institutions can defend freedom and the rule of law.

A Small College Faculty Takes Unprecedented Action to Stand Up for Democracy

In the Trump era, most of the attention on higher education has focused on presidents and what they will or won't do to protect their institutions from threats to academic freedom and institutional independence. Leadership matters, but it's time for the rank-and-file in the academy — and in business and other institutions — to fulfill their own obligations to protect democracy.

With a few exceptions, neither the rank and file nor their leaders in the academy have stood up for democracy and the rule of law in the world beyond their organizations. They have had little to say about the administration’s mounting lawlessness, corruption, and abuse of power.
