AI could help remove bias from medical research and data

Artificial intelligence can help root out racial bias in health care, but only if programmers can design the software so it doesn't make the same mistakes people make, like misreading mammogram results, writes Pearl.

Anne-Christine Poujoulat/AFP via Getty Images

Pearl is a clinical professor of plastic surgery at the Stanford University School of Medicine and is on the faculty of the Stanford Graduate School of Business. He is a former CEO of The Permanente Medical Group.

This is the second entry in a two-part op-ed series on institutional racism in American medicine.

A little over a year before the coronavirus pandemic reached our shores, the racism problem in U.S. health care was making big headlines.

But it wasn't doctors or nurses being accused of bias. Rather, a study published in Science concluded that a predictive health care algorithm had, itself, discriminated against Black patients.

The story originated with Optum, a subsidiary of insurance giant UnitedHealth Group, which had designed an application to identify high-risk patients with untreated chronic diseases. The company's ultimate goal was to help redistribute medical resources to those who'd benefit most from added care. And to figure out who was most in need, Optum's algorithm assessed the cost of each patient's past treatments.

Unaccounted for in the algorithm's design was this essential fact: The average Black patient receives $1,800 less per year in total medical care than a white person with the same set of health problems. And, sure enough, when the researchers went back and re-ranked patients by their illnesses (rather than the cost of their care), the percentage of Black patients who should have been enrolled in specialized care programs jumped from 18 percent to 47 percent.

Journalists and commentators pinned the blame for racial bias on Optum's algorithm. In reality, technology wasn't the problem. At issue were the doctors who had failed to provide sufficient medical care to Black patients in the first place. In other words, the data was faulty because humans had failed to provide equitable care.
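
To make that mechanism concrete, here is a minimal, hypothetical sketch — not Optum's actual model — of how ranking patients by past spending rather than by illness burden can drop exactly the patients who were under-treated. The patient records, the spending gap and the program size are invented for illustration.

```python
# Hypothetical illustration: choosing patients for a care-management program
# by past spending vs. by illness burden. All records and numbers are invented.

from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    chronic_conditions: int   # number of untreated chronic diseases
    annual_spend: float       # dollars of care actually received last year

patients = [
    Patient("A", chronic_conditions=4, annual_spend=3200),  # equally sick, but received less care
    Patient("B", chronic_conditions=4, annual_spend=5000),
    Patient("C", chronic_conditions=2, annual_spend=4800),
    Patient("D", chronic_conditions=1, annual_spend=1500),
]

PROGRAM_SLOTS = 2  # only two patients can be enrolled

# Proxy ranking: past cost stands in for medical need
by_cost = sorted(patients, key=lambda p: p.annual_spend, reverse=True)
print("Enrolled by cost:   ", [p.name for p in by_cost[:PROGRAM_SLOTS]])     # ['B', 'C']

# Direct ranking: illness burden itself
by_illness = sorted(patients, key=lambda p: p.chronic_conditions, reverse=True)
print("Enrolled by illness:", [p.name for p in by_illness[:PROGRAM_SLOTS]])  # ['A', 'B']
```

Under the cost proxy, the sickest but under-treated patient never makes the list; ranking by illness itself restores her place. That is the same pattern the Science researchers found when they re-ranked real patients at scale.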

Artificial intelligence and algorithmic approaches can only be as accurate, reliable and helpful as the data they're given. If the human inputs are unreliable, the data will be, as well.

Let's use the identification of breast cancer as an example. As much as one-third of the time, two radiologists looking at the same mammogram will disagree on the diagnosis. Therefore, if AI software were simply programmed to imitate human readers, it would inherit those errors and be wrong roughly one-third of the time.

Instead, AI can store and compare tens of thousands of mammogram images — comparing examples of women with cancer and without — to detect hundreds of subtle differences that humans often overlook. It can remember all those tiny differences when reviewing new mammograms, which is why AI is already estimated to be 10 percent more accurate than the average radiologist.

What AI can't recognize is whether it's being fed biased or incorrect information. Adjusting for bias in research and data aggregation requires that humans acknowledge their faulty assumptions and decisions, and then modify the inputs accordingly.

Correcting these types of errors should be standard practice by now. After all, any research project that seeks funding and publication is required to include an analysis of potential bias, based on the study's participants. As an example, investigators who want to compare people's health in two cities would be required to modify the study's design if they failed to account for major differences in age, education or other factors that might inappropriately tilt the results.
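
To illustrate the kind of adjustment described above, the sketch below simulates a two-city comparison in which one city simply has an older population. Everything here — the city labels, the simulated health scores and the regression model — is hypothetical; the point is only that the apparent gap between cities shrinks once the confounder (age) enters the analysis.

```python
# Hypothetical sketch: comparing average health scores in two cities,
# unadjusted vs. adjusted for age. All data are simulated for illustration.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

# City B's population is older on average; health declines with age.
city = rng.choice(["A", "B"], size=n)
age = np.where(city == "A", rng.normal(40, 10, n), rng.normal(55, 10, n))
health = 80 - 0.5 * age + rng.normal(0, 5, n)   # no true city effect at all

df = pd.DataFrame({"city": city, "age": age, "health": health})

# Naive comparison: city B appears substantially less healthy.
naive = smf.ols("health ~ city", data=df).fit()
print("Unadjusted city effect: ", round(naive.params["city[T.B]"], 2))

# Adjusted comparison: once age is accounted for, the gap largely disappears.
adjusted = smf.ols("health ~ city + age", data=df).fit()
print("Age-adjusted city effect:", round(adjusted.params["city[T.B]"], 2))
```

The unadjusted model attributes the entire difference to the city; the adjusted model shows most of it was an artifact of who lives there.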

Given how often data is flawed, the possibility of racial bias should be explicitly factored into every AI project. With universities and funding agencies increasingly focused on racial issues in medicine, this expectation has the potential to become routine in the future. Once it is, AI will force researchers to confront bias in health care. As a result, the conclusions and recommendations they provide will be more accurate and equitable.

Thirteen months into the pandemic, Covid-19 continues to kill Black individuals at a rate three times that of white people. For years, health plan and hospital leaders have talked about the need to address health disparities like these. And yet, despite good intentions, the solutions they put forth always look a lot like the failed efforts of the past.

Addressing systemic racism in medicine requires that we analyze far more data (all at once) than we do today. AI is ideally suited to this task. What we need is a national commitment to use these types of technologies to answer medicine's most urgent questions.

There is no single antidote to the problem of racism in medicine. But combining AI with a national commitment to root out bias in health care would be a good start, putting our medical system on a path toward antiracism.
