AI could help remove bias from medical research and data

Opinion

A researcher examines a mammogram.

Artificial intelligence can help root out racial bias in health care, but only if programmers build the software so it doesn't repeat the mistakes people make, like misreading mammogram results, writes Pearl.

Anne-Christine Poujoulat/AFP via Getty Images
Pearl is a clinical professor of plastic surgery at the Stanford University School of Medicine and is on the faculty of the Stanford Graduate School of Business. He is a former CEO of The Permanente Medical Group.

This is the second entry in a two-part op-ed series on institutional racism in American medicine.

A little over a year before the coronavirus pandemic reached our shores, the racism problem in U.S. health care was making big headlines.

But it wasn't doctors or nurses being accused of bias. Rather, a study published in Science concluded that a predictive health care algorithm had, itself, discriminated against Black patients.

The story originated with Optum, a subsidiary of insurance giant UnitedHealth Group, which had designed an application to identify high-risk patients with untreated chronic diseases. The company's ultimate goal was to help redistribute medical resources to those who'd benefit most from added care. And to figure out who was most in need, Optum's algorithm assessed the cost of each patient's past treatments.

Unaccounted for in the algorithm's design was this essential fact: The average Black patient receives $1,800 less per year in total medical care than a white person with the same set of health problems. And, sure enough, when the researchers went back and re-ranked patients by their illnesses (rather than the cost of their care), the percentage of Black patients who should have been enrolled in specialized care programs jumped from 18 percent to 47 percent.
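To make the design flaw concrete, here is a minimal sketch in Python of the two ranking rules. Everything in it — the patients, their condition counts, their past costs — is invented for illustration; this is not Optum's actual algorithm, only a toy version of the proxy problem the study identified.

```python
# Hypothetical sketch: ranking patients for extra care by past spending
# (the flawed proxy) vs. by actual illness burden. All data is invented.

from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    chronic_conditions: int   # count of untreated chronic diseases
    past_annual_cost: float   # dollars spent on this patient's care last year

patients = [
    # A and B carry identical illness burdens, but B received less care —
    # mirroring the roughly $1,800 spending gap the article cites.
    Patient("A", chronic_conditions=4, past_annual_cost=6200.0),
    Patient("B", chronic_conditions=4, past_annual_cost=4400.0),
    Patient("C", chronic_conditions=1, past_annual_cost=5000.0),
]

def top_k_by_cost(pool, k):
    # Proxy ranking: assumes higher past spending means higher need.
    return sorted(pool, key=lambda p: p.past_annual_cost, reverse=True)[:k]

def top_k_by_illness(pool, k):
    # Direct ranking: uses the illness burden itself.
    return sorted(pool, key=lambda p: p.chronic_conditions, reverse=True)[:k]

k = 2
print([p.name for p in top_k_by_cost(patients, k)])     # ['A', 'C'] — B is missed
print([p.name for p in top_k_by_illness(patients, k)])  # ['A', 'B']
```

The code itself is working exactly as written; the proxy is what fails. Because patient B's past care was underfunded, the cost-based ranking passes over the sicker patient in favor of a healthier one.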

Journalists and commentators pinned the blame for racial bias on Optum's algorithm. In reality, technology wasn't the problem. At issue were the doctors who failed to provide sufficient medical care to the Black patients in the first place. In other words, the data was faulty because humans had failed to provide equitable care.

Artificial intelligence and algorithmic approaches can only be as accurate, reliable and helpful as the data they're given. If the human inputs are unreliable, the data will be, as well.

Let's use the identification of breast cancer as an example. As much as one-third of the time, two radiologists looking at the same mammogram will disagree on the diagnosis. If AI software were trained simply to mimic human readers, it would inherit that same error rate.

Instead, AI can store and compare tens of thousands of mammogram images — comparing examples of women with cancer and without — to detect hundreds of subtle differences that humans often overlook. It can remember all those tiny differences when reviewing new mammograms, which is why AI is already estimated to be 10 percent more accurate than the average radiologist.
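As a rough illustration of that approach — learning a pattern from many labeled examples rather than copying any single reader's judgments — here is a toy sketch. Random numbers stand in for mammogram image features, and the "cancer" labels are simulated; no part of this reflects a real diagnostic system.

```python
# Toy sketch: learn a classifier from thousands of labeled examples.
# Random vectors stand in for mammogram features; nothing here is real data.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 10_000, 64                       # number of examples, features per image

X = rng.normal(size=(n, d))             # stand-in for extracted image features
true_w = rng.normal(size=d)             # hidden pattern the model must recover
y = (X @ true_w + rng.normal(scale=2.0, size=n) > 0).astype(int)  # 1 = "cancer"

# Train on 8,000 examples, evaluate on the 2,000 the model has never seen.
model = LogisticRegression(max_iter=1000).fit(X[:8000], y[:8000])
print(f"held-out accuracy: {model.score(X[8000:], y[8000:]):.2f}")
```

The point of the sketch is the workflow, not the model: given enough labeled examples, the system picks up subtle, consistent signals that no single human reading could supply.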

What AI can't recognize is whether it's being fed biased or incorrect information. Adjusting for bias in research and data aggregation requires that humans acknowledge their faulty assumptions and decisions, and then modify the inputs accordingly.

Correcting these types of errors should be standard practice by now. After all, any research project that seeks funding and publication is required to include an analysis of potential bias, based on the study's participants. As an example, investigators who want to compare people's health in two cities would be required to modify the study's design if they failed to account for major differences in age, education or other factors that might inappropriately tilt the results.
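Here is a bare-bones sketch of that kind of adjustment, using simulated data for two hypothetical cities where one simply skews older. The naive comparison makes the older city look sicker; a regression that holds age constant shows no real difference between the cities. The numbers are fabricated for illustration only.

```python
# Minimal sketch of confounder adjustment: comparing a health outcome across
# two hypothetical cities while controlling for age. All data is simulated.

import numpy as np

rng = np.random.default_rng(1)
n = 5000
city = rng.integers(0, 2, size=n)                      # 0 = City A, 1 = City B
age = 45 + 10 * city + rng.normal(scale=8, size=n)     # City B skews older
health = 80 - 0.5 * age + rng.normal(scale=5, size=n)  # age drives the outcome

# Naive comparison: City B looks about 5 points sicker — but only because
# its residents are older on average.
print(health[city == 1].mean() - health[city == 0].mean())   # ~ -5

# Adjusted comparison: regress health on city AND age together.
X = np.column_stack([np.ones(n), city, age])
beta, *_ = np.linalg.lstsq(X, health, rcond=None)
print(beta[1])   # city coefficient ~ 0 once age is held constant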

Given how often data is flawed, the possibility of racial bias should be explicitly factored into every AI project. With universities and funding agencies increasingly focused on racial issues in medicine, this expectation has the potential to become routine in the future. Once it is, AI will force researchers to confront bias in health care. As a result, the conclusions and recommendations they provide will be more accurate and equitable.

Thirteen months into the pandemic, Covid-19 continues to kill Black individuals at three times the rate of white individuals. For years, health plans and hospital leaders have talked about the need to address health disparities like these. And yet, despite good intentions, the solutions they put forth always look a lot like the failed efforts of the past.

Addressing systemic racism in medicine requires that we analyze far more data (all at once) than we do today. AI is the perfect tool for this task. What we need is a national commitment to use these types of technologies to answer medicine's most urgent questions.

There is no single antidote to the problem of racism in medicine. But combining AI with a national commitment to root out bias in health care would be a good start, putting our medical system on a path toward antiracism.
