AI could help remove bias from medical research and data

Opinion

Artificial intelligence can help root out racial bias in health care, but only if programmers can create software that doesn't make the same mistakes people make, like misreading mammogram results, writes Pearl.

Pearl is a clinical professor of plastic surgery at the Stanford University School of Medicine and is on the faculty of the Stanford Graduate School of Business. He is a former CEO of The Permanente Medical Group.

This is the second entry in a two-part op-ed series on institutional racism in American medicine.

A little over a year before the coronavirus pandemic reached our shores, the racism problem in U.S. health care was making big headlines.

But it wasn't doctors or nurses being accused of bias. Rather, a study published in Science concluded that a predictive health care algorithm had, itself, discriminated against Black patients.

The story originated with Optum, a subsidiary of insurance giant UnitedHealth Group, which had designed an application to identify high-risk patients with untreated chronic diseases. The company's ultimate goal was to help redistribute medical resources to those who'd benefit most from added care. And to figure out who was most in need, Optum's algorithm assessed the cost of each patient's past treatments.

Unaccounted for in the algorithm's design was this essential fact: The average Black patient receives $1,800 less per year in total medical care than a white person with the same set of health problems. And, sure enough, when the researchers went back and re-ranked patients by their illnesses (rather than the cost of their care), the percentage of Black patients who should have been enrolled in specialized care programs jumped from 18 percent to 47 percent.
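The effect of that design choice can be illustrated with a toy sketch (hypothetical names and numbers, not Optum's actual model or data): ranking the same patients by past spending versus by illness itself selects different people for extra care, because an under-treated patient generates lower costs despite equal sickness.

```python
# Toy illustration (hypothetical data): a cost-based proxy for "need"
# passes over patients whose care was historically under-funded.

patients = [
    # (name, chronic_conditions, annual_cost_usd)
    ("A", 4, 9_000),   # well-treated: high cost tracks high need
    ("B", 4, 7_200),   # equally sick, but received ~$1,800 less care
    ("C", 2, 8_000),
    ("D", 1, 3_000),
]

def top_k(items, key, k=2):
    """Return the names of the k patients ranked highest by the given key."""
    return [name for name, *_ in sorted(items, key=key, reverse=True)[:k]]

by_cost = top_k(patients, key=lambda p: p[2])     # proxy: past spending
by_illness = top_k(patients, key=lambda p: p[1])  # direct: condition count

print(by_cost)     # → ['A', 'C']: patient B is passed over under the cost proxy
print(by_illness)  # → ['A', 'B']: B ranks at the top when need is measured directly
```

The ranking logic is identical in both cases; only the choice of signal changes who is flagged, which is the sense in which the bias lived in the data rather than in the algorithm.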

Journalists and commentators pinned the blame for the racial bias on Optum's algorithm. In reality, the technology wasn't the problem. At issue were the doctors who had failed to provide sufficient medical care to Black patients in the first place. In other words, the data were flawed because humans failed to deliver equitable care.

Artificial intelligence and algorithmic approaches can only be as accurate, reliable and helpful as the data they're given. If the human inputs are unreliable, the data will be, as well.

Let's use the identification of breast cancer as an example. As much as one-third of the time, two radiologists looking at the same mammogram will disagree on the diagnosis. Therefore, if AI software were programmed to act like humans, the technology would be wrong one-third of the time.

Instead, AI can store and compare tens of thousands of mammogram images — comparing examples of women with cancer and without — to detect hundreds of subtle differences that humans often overlook. It can remember all those tiny differences when reviewing new mammograms, which is why AI is already estimated to be 10 percent more accurate than the average radiologist.

What AI can't recognize is whether it's being fed biased or incorrect information. Adjusting for bias in research and data aggregation requires that humans acknowledge their faulty assumptions and decisions, and then modify the inputs accordingly.

Correcting these types of errors should be standard practice by now. After all, any research project that seeks funding and publication is required to include an analysis of potential bias, based on the study's participants. As an example, investigators who want to compare people's health in two cities would be required to modify the study's design if they failed to account for major differences in age, education or other factors that might inappropriately tilt the results.

Given how often data is flawed, the possibility of racial bias should be explicitly factored into every AI project. With universities and funding agencies increasingly focused on racial issues in medicine, this expectation has the potential to become routine in the future. Once it is, AI will force researchers to confront bias in health care. As a result, the conclusions and recommendations they provide will be more accurate and equitable.

Thirteen months into the pandemic, Covid-19 continues to kill Black individuals at three times the rate of white individuals. For years, health plans and hospital leaders have talked about the need to address health disparities like these. And yet, despite good intentions, the solutions they put forth always look a lot like the failed efforts of the past.

Addressing systemic racism in medicine requires that we analyze far more data (all at once) than we do today. AI is the perfect application for this task. What we need is a national commitment to use these types of technologies to answer medicine's most urgent questions.

There is no single antidote to the problem of racism in medicine. But combining AI with a national commitment to root out bias in health care would be a good start, putting our medical system on a path toward antiracism.


Read More

Zohran Mamdani’s call for warm ‘collectivism’ is dead on arrival

The day before the Trump administration captured and extradited Venezuelan dictator Nicolás Maduro, many on the right (including yours truly) had a field day mocking something the newly minted mayor of New York City, Zohran Mamdani, said during his inaugural address.

The proud member of the Democratic Socialists of America proclaimed: “We will replace the frigidity of rugged individualism with the warmth of collectivism.”

The Lie of “Safe” State Violence in America: Montgomery Then, Minneapolis Now

Once again, the nation watched in horror as a 37-year-old woman was shot and killed by an ICE agent in Minneapolis. The incident was caught on video. Neighbors saw it happen, their disbelief clear. The story has been widely reported, but hearing it again does not make it any less violent. Video suggests there was a confrontation. The woman tried to drive away. An agent stepped in front of her car. Multiple shots went through the windshield. Witnesses told reporters that a physician at the scene attempted to provide aid but was prevented from approaching the vehicle, a claim that federal authorities have not publicly addressed. That fact, if accurate, should trouble us most.

What happened on that street was more than just a tragic mistake. It was a moral challenge to our society, asking for more than just shock or sadness. This moment makes us ask: what kind of nation have we created, and what violence have we come to see as normal? We need to admit our shared responsibility, knowing that our daily choices and silence help create a culture where this violence is accepted. Including ourselves in this 'we' makes us care more deeply and pushes us to act, not just reflect.

Washington Loves Blaming Latin America for Drugs — While Ignoring the American Appetite That Fuels the Trade

For decades, the United States has perfected a familiar political ritual: condemn Latin American governments for the flow of narcotics northward, demand crackdowns, and frame the crisis as something done to America rather than something America helps create. It is a narrative that travels well in press conferences and campaign rallies. It is also a distortion — one that obscures the central truth of the hemispheric drug trade: the U.S. market exists because Americans keep buying.

Yet Washington continues to treat Latin America as the culprit rather than the supplier responding to a demand created on U.S. soil. The result is a policy posture that is both ineffective and deeply hypocritical.

The Failure of the International Community to Confront Trump

Donald Trump has just carried out one of the most audacious acts of his presidency: sending a military squad to Venezuela and kidnapping President Nicolas Maduro and his wife. Without question, this is a clear violation of international law regarding the sovereignty of nations.

The U.S. was not at war with Venezuela, nor had Trump or Congress declared war. There is absolutely no justification under international law for this action. Regardless of whether Maduro was involved in drug trafficking that impacted the United States, there is no justification for kidnapping him, the president of another country.
