AI could help remove bias from medical research and data

[Photo: A researcher examines a mammography scan. Artificial intelligence can help root out racial bias in health care, but only if programmers build the software so it doesn't repeat human mistakes, such as misreading mammogram results, writes Pearl. Credit: Anne-Christine Poujoulat/AFP via Getty Images]

Pearl is a clinical professor of plastic surgery at the Stanford University School of Medicine and is on the faculty of the Stanford Graduate School of Business. He is a former CEO of The Permanente Medical Group.

This is the second entry in a two-part op-ed series on institutional racism in American medicine.

A little over a year before the coronavirus pandemic reached our shores, the racism problem in U.S. health care was making big headlines.

But it wasn't doctors or nurses being accused of bias. Rather, a study published in Science concluded that a predictive health care algorithm had, itself, discriminated against Black patients.

The story originated with Optum, a subsidiary of insurance giant UnitedHealth Group, which had designed an application to identify high-risk patients with untreated chronic diseases. The company's ultimate goal was to help redistribute medical resources to those who'd benefit most from added care. And to figure out who was most in need, Optum's algorithm assessed the cost of each patient's past treatments.

Unaccounted for in the algorithm's design was this essential fact: The average Black patient receives $1,800 less per year in total medical care than a white person with the same set of health problems. And, sure enough, when the researchers went back and re-ranked patients by their illnesses (rather than the cost of their care), the percentage of Black patients who should have been enrolled in specialized care programs jumped from 18 percent to 47 percent.
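
To make the proxy problem concrete, here is a minimal, hypothetical sketch. The patients, field names and dollar figures are invented for illustration and are not Optum's actual model or data; the point is only that ranking by past spending rewards whoever already received the most care, while ranking by illness burden targets whoever is actually sickest.

```python
# A hypothetical sketch of the cost-as-proxy problem; data and fields are invented.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    chronic_conditions: int   # a direct measure of illness burden
    annual_cost: float        # dollars of care actually received

patients = [
    Patient("A", chronic_conditions=4, annual_cost=9_000),   # well-served patient
    Patient("B", chronic_conditions=4, annual_cost=7_200),   # same illness, $1,800 less care
    Patient("C", chronic_conditions=1, annual_cost=8_000),
]

def enroll_top(patients, key, n=2):
    """Enroll the n highest-ranked patients in a care-management program."""
    return sorted(patients, key=key, reverse=True)[:n]

# Ranking by past spending (the cost proxy) favors whoever already got more care.
by_cost = enroll_top(patients, key=lambda p: p.annual_cost)

# Ranking by illness burden targets whoever is actually sickest.
by_illness = enroll_top(patients, key=lambda p: p.chronic_conditions)

print([p.name for p in by_cost])     # ['A', 'C'] -- patient B is passed over
print([p.name for p in by_illness])  # ['A', 'B'] -- patient B is correctly enrolled
```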

Journalists and commentators pinned the blame for racial bias on Optum's algorithm. In reality, technology wasn't the problem. At issue were the doctors who had failed to provide sufficient medical care to Black patients in the first place. In other words, the data was flawed because humans had failed to deliver equitable care.

Artificial intelligence and algorithmic approaches can only be as accurate, reliable and helpful as the data they're given. If the human inputs are unreliable, the data will be, as well.

Let's use the identification of breast cancer as an example. As much as one-third of the time, two radiologists looking at the same mammogram will disagree on the diagnosis. Therefore, if AI software were programmed to act like humans, the technology would be wrong one-third of the time.

Instead, AI can store and compare tens of thousands of mammogram images — comparing examples of women with cancer and without — to detect hundreds of subtle differences that humans often overlook. It can remember all those tiny differences when reviewing new mammograms, which is why AI is already estimated to be 10 percent more accurate than the average radiologist.
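
As a loose illustration of "compare against stored examples," the toy sketch below labels a new case by its similarity to stored, labeled cases. Real mammography AI relies on deep neural networks trained on raw images, not hand-made feature vectors like these; every number here is invented.

```python
# A deliberately simplified nearest-neighbor sketch; feature values are invented.
import math

# Each stored case: (feature vector summarizing subtle image findings, label)
stored_cases = [
    ((0.9, 0.8, 0.7), "cancer"),
    ((0.8, 0.9, 0.6), "cancer"),
    ((0.1, 0.2, 0.1), "benign"),
    ((0.2, 0.1, 0.3), "benign"),
]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(new_case, k=3):
    """Label a new mammogram by majority vote of its k most similar stored cases."""
    nearest = sorted(stored_cases, key=lambda case: distance(case[0], new_case))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(classify((0.85, 0.75, 0.65)))  # 'cancer' -- closest to the stored cancer cases
```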

What AI can't recognize is whether it's being fed biased or incorrect information. Adjusting for bias in research and data aggregation requires that humans acknowledge their faulty assumptions and decisions, and then modify the inputs accordingly.

Correcting these types of errors should be standard practice by now. After all, any research project that seeks funding and publication is required to include an analysis of potential bias, based on the study's participants. As an example, investigators who want to compare people's health in two cities would be required to modify the study's design if they failed to account for major differences in age, education or other factors that might inappropriately tilt the results.
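
For a concrete, if simplified, picture of that kind of adjustment, the sketch below applies direct age standardization to two invented cities. The rates, age structures and standard population are all made up; what matters is that an apparent gap between the cities disappears once the confounder (age) is accounted for.

```python
# A toy sketch of confounder adjustment via direct age standardization.
# The cities, rates and populations are invented; the method is the point.

# Disease rate per 1,000 residents, broken out by age group.
rates = {
    "City X": {"under_65": 10, "over_65": 40},
    "City Y": {"under_65": 10, "over_65": 40},   # identical age-specific risk
}

# The cities differ only in how old their populations are (head counts).
population = {
    "City X": {"under_65": 90_000, "over_65": 10_000},
    "City Y": {"under_65": 60_000, "over_65": 40_000},
}

# A shared "standard" population puts both cities on equal footing.
standard = {"under_65": 80_000, "over_65": 20_000}

def weighted_rate(city, weights):
    total = sum(weights.values())
    return sum(rates[city][g] * weights[g] for g in weights) / total

for city in rates:
    crude = weighted_rate(city, population[city])   # uses the city's own age mix
    adjusted = weighted_rate(city, standard)        # uses the shared age mix
    print(f"{city}: crude {crude:.0f}, age-adjusted {adjusted:.0f} per 1,000")

# City X: crude 13, age-adjusted 16 per 1,000
# City Y: crude 22, age-adjusted 16 per 1,000  -- the gap vanishes after adjustment
```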

Given how often data is flawed, the possibility of racial bias should be explicitly factored into every AI project. With universities and funding agencies increasingly focused on racial issues in medicine, this expectation has the potential to become routine in the future. Once it is, AI will force researchers to confront bias in health care. As a result, the conclusions and recommendations they provide will be more accurate and equitable.
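
One routine way to factor that possibility in is a subgroup audit run before any model is deployed. The sketch below assumes a generic binary classifier with invented labels and predictions; it simply compares false-negative rates, the rate at which patients who truly need extra care are missed, across groups.

```python
# A minimal subgroup-audit sketch; the classifier outputs and groups are illustrative only.
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Report false-negative rates separately for each patient group."""
    misses, positives = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 1:                       # patient truly needs extra care
            positives[group] += 1
            if pred == 0:                    # model failed to flag them
                misses[group] += 1
    return {g: misses[g] / positives[g] for g in positives}

# Illustrative inputs: 1 = needs specialized care, 0 = does not.
y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0]
groups = ["white", "white", "white", "Black", "Black", "Black", "white", "Black"]

print(error_rates_by_group(y_true, y_pred, groups))
# {'white': 0.0, 'Black': 1.0} -- a disparity this stark should block deployment
```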

Thirteen months into the pandemic, Covid-19 continues to kill Black Americans at three times the rate of white Americans. For years, health plans and hospital leaders have talked about the need to address health disparities like these. And yet, despite good intentions, the solutions they put forth always look a lot like the failed efforts of the past.

Addressing systemic racism in medicine requires that we analyze far more data (all at once) than we do today. AI is the ideal tool for that task. What we need is a national commitment to use these types of technologies to answer medicine's most urgent questions.

There is no single antidote to the problem of racism in medicine. But combining AI with a national commitment to root out bias in health care would be a good start, putting our medical system on a path toward antiracism.
