AI could help remove bias from medical research and data

Opinion

Artificial intelligence can help root out racial bias in health care, but only if programmers create the software so it doesn't make the same mistakes people make, like misreading mammogram results, writes Pearl.

Anne-Christine Poujoulat/AFP via Getty Images
Pearl is a clinical professor of plastic surgery at the Stanford University School of Medicine and is on the faculty of the Stanford Graduate School of Business. He is a former CEO of The Permanente Medical Group.

This is the second entry in a two-part op-ed series on institutional racism in American medicine.

A little over a year before the coronavirus pandemic reached our shores, the racism problem in U.S. health care was making big headlines.

But it wasn't doctors or nurses being accused of bias. Rather, a study published in Science concluded that a predictive health care algorithm had, itself, discriminated against Black patients.

The story originated with Optum, a subsidiary of insurance giant UnitedHealth Group, which had designed an application to identify high-risk patients with untreated chronic diseases. The company's ultimate goal was to help redistribute medical resources to those who'd benefit most from added care. And to figure out who was most in need, Optum's algorithm assessed the cost of each patient's past treatments.

Unaccounted for in the algorithm's design was this essential fact: The average Black patient receives $1,800 less per year in total medical care than a white person with the same set of health problems. And, sure enough, when the researchers went back and re-ranked patients by their illnesses (rather than the cost of their care), the percentage of Black patients who should have been enrolled in specialized care programs jumped from 18 percent to 47 percent.
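
To make the proxy problem concrete, here is a minimal sketch in Python on invented synthetic data. The spending gap, field names, and enrollment cutoff are illustrative assumptions, not Optum's actual model or figures.

```python
# A toy model of the cost-as-proxy flaw described above. All numbers
# are synthetic and for illustration only.
import random

random.seed(0)

def make_patient(group):
    """Simulate a patient: a true illness burden, plus billed cost.

    Assumption for illustration: equally sick Black patients accrue
    lower costs because they historically received less care.
    """
    illness = random.uniform(0, 10)   # true chronic-disease burden
    cost = illness * 1000             # care cost tracks illness...
    if group == "black":
        cost -= 1800                  # ...minus the spending gap
    return {"group": group, "illness": illness, "cost": cost}

patients = [make_patient("black") for _ in range(500)] + \
           [make_patient("white") for _ in range(500)]

def top_decile(patients, key):
    """Enroll the top 10% of patients ranked by the given key."""
    ranked = sorted(patients, key=lambda p: p[key], reverse=True)
    return ranked[: len(ranked) // 10]

for key in ("cost", "illness"):
    enrolled = top_decile(patients, key)
    share = sum(p["group"] == "black" for p in enrolled) / len(enrolled)
    print(f"ranked by {key}: {share:.0%} of enrollees are Black")
```

Ranking by billed cost quietly imports the spending gap into the enrollment decision; ranking by illness burden does not.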

Journalists and commentators pinned the blame for racial bias on Optum's algorithm. In reality, technology wasn't the problem. At issue were the doctors who had failed to provide sufficient medical care to Black patients in the first place. In other words, the data was faulty because humans failed to provide equitable care.

Artificial intelligence and algorithmic approaches can only be as accurate, reliable and helpful as the data they're given. If the human inputs are unreliable, the data will be, as well.

Let's use the identification of breast cancer as an example. As much as one-third of the time, two radiologists looking at the same mammogram will disagree on the diagnosis. Therefore, if AI software were programmed merely to mimic human readers, it would inherit those errors, reaching the wrong diagnosis as often as one-third of the time.

Instead, AI can store and compare tens of thousands of mammogram images — comparing examples of women with cancer and without — to detect hundreds of subtle differences that humans often overlook. It can remember all those tiny differences when reviewing new mammograms, which is why AI is already estimated to be 10 percent more accurate than the average radiologist.
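
As a toy illustration of that pattern-matching idea (not a clinical system), the sketch below trains a standard classifier on thousands of synthetic "mammogram" feature vectors whose malignant cases differ only subtly from benign ones. The feature representation, model choice, and numbers are all invented for illustration.

```python
# Toy sketch: train a classifier on many labeled examples so it can
# pick up subtle, systematic differences and apply them consistently.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each mammogram is summarized by 50 numeric features, with
# malignant cases shifted subtly on a handful of them.
n, n_features = 20_000, 50
X = rng.normal(size=(n, n_features))
y = rng.integers(0, 2, size=n)    # 0 = benign, 1 = malignant
X[y == 1, :5] += 0.4              # subtle, systematic differences

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The model "remembers" those tiny differences across all 16,000
# training cases and applies them consistently to new ones.
print(f"held-out accuracy: {model.score(X_test, y_test):.1%}")
```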

What AI can't recognize is whether it's being fed biased or incorrect information. Adjusting for bias in research and data aggregation requires that humans acknowledge their faulty assumptions and decisions, and then modify the inputs accordingly.

Correcting these types of errors should be standard practice by now. After all, any research project that seeks funding and publication is required to include an analysis of potential bias, based on the study's participants. As an example, investigators who want to compare people's health in two cities would be required to modify the study's design if they failed to account for major differences in age, education or other factors that might inappropriately tilt the results.
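
A minimal sketch of that adjustment step, on invented data for the two-city example: a naive comparison blames the city for a health gap that age fully explains, while controlling for age removes the spurious effect. The cities, effect sizes, and single confounder are illustrative assumptions.

```python
# Naive vs. adjusted comparison of two cities' health, synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

city_b = rng.integers(0, 2, size=n)                  # 0 = City A, 1 = City B
age = rng.normal(40 + 10 * city_b, 8)                # City B skews older
health = 100 - 0.5 * age + rng.normal(0, 5, size=n)  # no true city effect

def ols_city_effect(covariates):
    """Fit ordinary least squares and return the City B coefficient."""
    X = np.column_stack([np.ones(n)] + covariates)
    beta, *_ = np.linalg.lstsq(X, health, rcond=None)
    return beta[1]                                   # city_b is column 1

print(f"naive city effect:    {ols_city_effect([city_b]):+.2f}")       # ~ -5, spurious
print(f"adjusted city effect: {ols_city_effect([city_b, age]):+.2f}")  # ~ 0
```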

Given how often data is flawed, the possibility of racial bias should be explicitly factored into every AI project. With universities and funding agencies increasingly focused on racial issues in medicine, this expectation has the potential to become routine in the future. Once it is, AI will force researchers to confront bias in health care. As a result, the conclusions and recommendations they provide will be more accurate and equitable.
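
One concrete form such a requirement could take is a routine pre-deployment audit of a model's error rates by racial group. The sketch below, on invented predictions that echo the Optum finding, shows how simple such a check can be; the group labels and numbers are hypothetical.

```python
# Per-group audit: how often does the model miss truly high-risk
# patients in each group? All records below are invented.
from collections import defaultdict

def audit_by_group(records):
    """Compute per-group false-negative rates for a model's predictions.

    Each record: (group, truly_high_risk: bool, flagged_by_model: bool).
    """
    missed = defaultdict(int)   # truly high-risk but not flagged
    total = defaultdict(int)    # all truly high-risk patients
    for group, high_risk, flagged in records:
        if high_risk:
            total[group] += 1
            if not flagged:
                missed[group] += 1
    return {g: missed[g] / total[g] for g in total}

# Hypothetical predictions: equally sick Black patients flagged less often.
records = (
    [("white", True, True)] * 80 + [("white", True, False)] * 20 +
    [("black", True, True)] * 40 + [("black", True, False)] * 60
)
for group, fnr in audit_by_group(records).items():
    print(f"{group}: {fnr:.0%} of high-risk patients missed")
```

A gap between the groups' false-negative rates is exactly the kind of signal that should send researchers back to their inputs before a model is deployed.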

Thirteen months into the pandemic, Covid-19 continues to kill Black Americans at roughly three times the rate of white Americans. For years, health plan and hospital leaders have talked about the need to address health disparities like these. And yet, despite good intentions, the solutions they put forth always look a lot like the failed efforts of the past.

Addressing systemic racism in medicine requires that we analyze far more data (all at once) than we do today. AI is the perfect tool for this task. What we need is a national commitment to use these types of technologies to answer medicine's most urgent questions.

There is no single antidote to the problem of racism in medicine. But combining AI with a national commitment to root out bias in health care would be a good start, putting our medical system on a path toward antiracism.

