AI could help remove bias from medical research and data

Opinion

Artificial intelligence can help root out racial bias in health care, but only if programmers design the software so it doesn't repeat human mistakes, like misreading mammogram results, writes Pearl.

Anne-Christine Poujoulat/AFP via Getty Images
Pearl is a clinical professor of plastic surgery at the Stanford University School of Medicine and is on the faculty of the Stanford Graduate School of Business. He is a former CEO of The Permanente Medical Group.

This is the second entry in a two-part op-ed series on institutional racism in American medicine.

A little over a year before the coronavirus pandemic reached our shores, the racism problem in U.S. health care was making big headlines.

But it wasn't doctors or nurses being accused of bias. Rather, a study published in Science concluded that a predictive health care algorithm had, itself, discriminated against Black patients.

The story originated with Optum, a subsidiary of insurance giant UnitedHealth Group, which had designed an application to identify high-risk patients with untreated chronic diseases. The company's ultimate goal was to help redistribute medical resources to those who'd benefit most from added care. And to figure out who was most in need, Optum's algorithm assessed the cost of each patient's past treatments.

Unaccounted for in the algorithm's design was this essential fact: The average Black patient receives $1,800 less per year in total medical care than a white patient with the same set of health problems. And, sure enough, when the researchers went back and re-ranked patients by their illnesses (rather than the cost of their care), the percentage of Black patients who should have been enrolled in specialized care programs jumped from 18 percent to 47 percent.
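The mechanics of this failure are easy to demonstrate. The toy sketch below uses entirely invented patients and dollar figures (it is not Optum's actual model): when past spending stands in for need, a patient whose care was historically underfunded scores low on the cost proxy despite an identical illness burden.

```python
# Toy sketch with hypothetical data: ranking patients for a care program
# by past spending (a proxy) versus by illness burden (the real target).

# Each patient: (id, chronic_conditions, past_annual_cost_usd)
patients = [
    ("A", 5, 9000),   # heavily treated
    ("B", 5, 3500),   # same illness burden, but historically underfunded care
    ("C", 2, 4000),
    ("D", 1, 1500),
]

def top_k(patients, key, k=2):
    """Select the k highest-priority patients under a given ranking key."""
    ranked = sorted(patients, key=key, reverse=True)
    return [pid for pid, _, _ in ranked[:k]]

by_cost = top_k(patients, key=lambda p: p[2])  # proxy: dollars spent
by_need = top_k(patients, key=lambda p: p[1])  # direct: condition count

print(by_cost)  # the cost proxy skips equally sick patient B
print(by_need)
```

The algorithm here is not "biased" in any exotic sense; it faithfully ranks the numbers it was given. The bias lives in the training signal, which is exactly the article's point.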

Journalists and commentators pinned the blame for racial bias on Optum's algorithm. In reality, technology wasn't the problem. At issue were the doctors who had failed to provide sufficient medical care to Black patients in the first place. In other words, the data was faulty because humans had failed to provide equitable care.

Artificial intelligence and algorithmic approaches can only be as accurate, reliable and helpful as the data they're given. If the human inputs are unreliable, the data will be, as well.

Let's use the identification of breast cancer as an example. As much as one-third of the time, two radiologists looking at the same mammogram will disagree on the diagnosis. Therefore, if AI software were programmed to act like humans, the technology would be wrong one-third of the time.

Instead, AI can store and compare tens of thousands of mammogram images — comparing examples of women with cancer and without — to detect hundreds of subtle differences that humans often overlook. It can remember all those tiny differences when reviewing new mammograms, which is why AI is already estimated to be 10 percent more accurate than the average radiologist.

What AI can't recognize is whether it's being fed biased or incorrect information. Adjusting for bias in research and data aggregation requires that humans acknowledge their faulty assumptions and decisions, and then modify the inputs accordingly.

Correcting these types of errors should be standard practice by now. After all, any research project that seeks funding and publication is required to include an analysis of potential bias, based on the study's participants. As an example, investigators who want to compare people's health in two cities would be required to modify the study's design if they failed to account for major differences in age, education or other factors that might inappropriately tilt the results.
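The two-city example can be made concrete with a standard epidemiological technique, direct age standardization. The sketch below uses invented numbers purely for illustration: the crude comparison makes one city look far sicker, but adjusting each city's rate to a shared standard age mix nearly erases the gap, because the "sicker" city simply has more older residents.

```python
# Illustrative sketch with hypothetical numbers: crude vs. age-adjusted
# comparison of an illness rate across two cities (direct standardization).

# Each row: (city, age_group, n_people, n_sick)
data = [
    ("CityA", "young", 800, 40),
    ("CityA", "old",   200, 60),
    ("CityB", "young", 200, 12),
    ("CityB", "old",   800, 200),
]

# Shared standard population: weight each age group equally.
STANDARD = {"young": 0.5, "old": 0.5}

def crude_rate(city):
    """Overall sick fraction, ignoring the city's age structure."""
    n = sum(r[2] for r in data if r[0] == city)
    sick = sum(r[3] for r in data if r[0] == city)
    return sick / n

def age_adjusted_rate(city):
    """Weight each age-specific rate by the shared standard population
    instead of the city's own age mix."""
    return sum(
        STANDARD[age] * (sick / n)
        for c, age, n, sick in data
        if c == city
    )

for city in ("CityA", "CityB"):
    print(city, round(crude_rate(city), 3), round(age_adjusted_rate(city), 3))
```

The same discipline, applied to race rather than age, is what the article argues should be a mandatory step in every AI project that touches health data.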

Given how often data is flawed, the possibility of racial bias should be explicitly factored into every AI project. With universities and funding agencies increasingly focused on racial issues in medicine, this expectation has the potential to become routine in the future. Once it is, AI will force researchers to confront bias in health care. As a result, the conclusions and recommendations they provide will be more accurate and equitable.

Thirteen months into the pandemic, Covid-19 continues to kill Black individuals at a rate three times higher than white individuals. For years, health plans and hospital leaders have talked about the need to address health disparities like these. And yet, despite good intentions, the solutions they put forth always look a lot like the failed efforts of the past.

Addressing systemic racism in medicine requires that we analyze far more data (all at once) than we do today. AI is the perfect application for this task. What we need is a national commitment to use these types of technologies to answer medicine's most urgent questions.

There is no single antidote to the problem of racism in medicine. But combining AI with a national commitment to root out bias in health care would be a good start, putting our medical system on a path toward antiracism.

