AI could help remove bias from medical research and data

Opinion


Artificial intelligence can help root out racial bias in health care, but only if programmers build the software so it doesn't repeat human mistakes, like misreading mammogram results, writes Pearl.

Anne-Christine Poujoulat/AFP via Getty Images
Pearl is a clinical professor of plastic surgery at the Stanford University School of Medicine and is on the faculty of the Stanford Graduate School of Business. He is a former CEO of The Permanente Medical Group.

This is the second entry in a two-part op-ed series on institutional racism in American medicine.

A little over a year before the coronavirus pandemic reached our shores, the racism problem in U.S. health care was making big headlines.

But it wasn't doctors or nurses being accused of bias. Rather, a study published in Science concluded that a predictive health care algorithm had, itself, discriminated against Black patients.

The story originated with Optum, a subsidiary of insurance giant UnitedHealth Group, which had designed an application to identify high-risk patients with untreated chronic diseases. The company's ultimate goal was to help redistribute medical resources to those who'd benefit most from added care. And to figure out who was most in need, Optum's algorithm assessed the cost of each patient's past treatments.

Unaccounted for in the algorithm's design was this essential fact: The average Black patient receives $1,800 less per year in total medical care than a white person with the same set of health problems. And, sure enough, when the researchers went back and re-ranked patients by their illnesses (rather than the cost of their care), the percentage of Black patients who should have been enrolled in specialized care programs jumped from 18 percent to 47 percent.
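The proxy problem described above can be sketched in a few lines of code. This is an illustration with invented numbers, not Optum's actual model: when one group systematically receives less spending for the same illness burden, ranking by past cost and ranking by actual need select different patients.

```python
# Hypothetical patients: (id, number of chronic conditions, past-year cost in $).
# Patient "A" is under-treated: high need, but low historical spending.
patients = [
    ("A", 4, 6200),
    ("B", 4, 8000),
    ("C", 2, 7500),
    ("D", 1, 9000),  # low need, high spending
]

# The flawed proxy: flag the patients with the highest past costs.
by_cost = sorted(patients, key=lambda p: p[2], reverse=True)

# The corrected approach: rank by illness burden instead.
by_need = sorted(patients, key=lambda p: p[1], reverse=True)

top2_cost = {p[0] for p in by_cost[:2]}
top2_need = {p[0] for p in by_need[:2]}
print(sorted(top2_cost))  # cost proxy selects B and D
print(sorted(top2_need))  # need-based ranking selects A and B
```

The under-treated patient "A" is invisible to the cost-based ranking precisely because too little was spent on their care — the bias is in the training signal, not the sorting logic.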

Journalists and commentators pinned the blame for racial bias on Optum's algorithm. In reality, technology wasn't the problem. At issue were the doctors who had failed to provide sufficient medical care to Black patients in the first place. In other words, the data was faulty because humans had failed to provide equitable care.

Artificial intelligence and algorithmic approaches can only be as accurate, reliable and helpful as the data they're given. If the human inputs are unreliable, the data will be, as well.

Let's use the identification of breast cancer as an example. As much as one-third of the time, two radiologists looking at the same mammogram will disagree on the diagnosis. Therefore, if AI software were programmed to act like humans, the technology would be wrong one-third of the time.

Instead, AI can store and compare tens of thousands of mammogram images — comparing examples of women with cancer and without — to detect hundreds of subtle differences that humans often overlook. It can remember all those tiny differences when reviewing new mammograms, which is why AI is already estimated to be 10 percent more accurate than the average radiologist.

What AI can't recognize is whether it's being fed biased or incorrect information. Adjusting for bias in research and data aggregation requires that humans acknowledge their faulty assumptions and decisions, and then modify the inputs accordingly.

Correcting these types of errors should be standard practice by now. After all, any research project that seeks funding and publication is required to include an analysis of potential bias, based on the study's participants. As an example, investigators who want to compare people's health in two cities would be required to modify the study's design if they failed to account for major differences in age, education or other factors that might inappropriately tilt the results.
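The two-city example above is the classic confounding scenario. Here is a minimal sketch, with invented counts, of why an unadjusted comparison can mislead: the raw rates in the two cities differ sharply, yet within each age group the cities are identical — the apparent gap comes entirely from the cities' different age mixes.

```python
# Hypothetical counts: (city, age_group, population, cases).
data = [
    ("CityA", "young", 800, 8),
    ("CityA", "old",   200, 20),
    ("CityB", "young", 200, 2),
    ("CityB", "old",   800, 80),
]

def crude_rate(city):
    """Overall case rate, ignoring age -- the naive comparison."""
    n = sum(r[2] for r in data if r[0] == city)
    cases = sum(r[3] for r in data if r[0] == city)
    return cases / n

def stratum_rate(city, age):
    """Case rate within a single age group -- the adjusted comparison."""
    r = next(x for x in data if x[0] == city and x[1] == age)
    return r[3] / r[2]

print(round(crude_rate("CityA"), 3))  # 0.028
print(round(crude_rate("CityB"), 3))  # 0.082 -- looks almost 3x worse...
print(stratum_rate("CityA", "young"), stratum_rate("CityB", "young"))  # 0.01 0.01
print(stratum_rate("CityA", "old"), stratum_rate("CityB", "old"))      # 0.1 0.1
```

The same arithmetic applies to race: if a dataset encodes unequal treatment, any model trained on the unadjusted numbers inherits the distortion.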

Given how often data is flawed, the possibility of racial bias should be explicitly factored into every AI project. With universities and funding agencies increasingly focused on racial issues in medicine, this expectation has the potential to become routine in the future. Once it is, AI will force researchers to confront bias in health care. As a result, the conclusions and recommendations they provide will be more accurate and equitable.

Thirteen months into the pandemic, Covid-19 continues to kill Black individuals at a rate three times higher than white individuals. For years, health plans and hospital leaders have talked about the need to address health disparities like these. And yet, despite good intentions, the solutions they put forth always look a lot like the failed efforts of the past.

Addressing systemic racism in medicine requires that we analyze far more data (all at once) than we do today. AI is the perfect application for this task. What we need is a national commitment to use these types of technologies to answer medicine's most urgent questions.

There is no antidote to the problem of racism in medicine. But combining AI with a national commitment to root out bias in health care would be a good start, putting our medical system on a path toward antiracism.
