AI could help remove bias from medical research and data

Robert Pearl
October 06, 2021
Researcher looks at a mammography test.

Artificial intelligence can help root out racial bias in health care, but only if programmers can design the software so it doesn't make the same mistakes people do, like misreading mammogram results, writes Pearl.

Anne-Christine Poujoulat/AFP via Getty Images
Pearl is a clinical professor of plastic surgery at the Stanford University School of Medicine and is on the faculty of the Stanford Graduate School of Business. He is a former CEO of The Permanente Medical Group.

This is the second entry in a two-part op-ed series on institutional racism in American medicine.

A little over a year before the coronavirus pandemic reached our shores, the racism problem in U.S. health care was making big headlines.

But it wasn't doctors or nurses being accused of bias. Rather, a study published in Science concluded that a predictive health care algorithm had, itself, discriminated against Black patients.

The story originated with Optum, a subsidiary of insurance giant UnitedHealth Group, which had designed an application to identify high-risk patients with untreated chronic diseases. The company's ultimate goal was to help redistribute medical resources to those who'd benefit most from added care. And to figure out who was most in need, Optum's algorithm assessed the cost of each patient's past treatments.

Unaccounted for in the algorithm's design was this essential fact: The average Black patient receives $1,800 less per year in total medical care than a white person with the same set of health problems. And, sure enough, when the researchers went back and re-ranked patients by their illnesses (rather than the cost of their care), the percentage of Black patients who should have been enrolled in specialized care programs jumped from 18 percent to 47 percent.
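The flaw described above is structural: when past spending stands in as a proxy for medical need, any group that systematically receives less care for the same illness gets ranked as less needy. A minimal sketch with hypothetical patients (the data, IDs, and thresholds here are invented for illustration, not drawn from the Optum study) shows how the two rankings diverge:

```python
# Hypothetical data: each patient has a true illness burden (chronic
# conditions) and annual care spending. Group B patients receive less
# spending for the same burden, mirroring the gap described above.
patients = [
    # (id, group, chronic_conditions, annual_cost_usd)
    ("p1", "A", 4, 9000),
    ("p2", "B", 4, 4800),   # same illness as p1, far less spent on care
    ("p3", "A", 2, 5000),
    ("p4", "B", 5, 8000),
    ("p5", "A", 1, 3000),
    ("p6", "B", 3, 4500),
]

def top_k(patients, key, k=3):
    """Return the ids of the k highest-ranked patients under `key`."""
    return [p[0] for p in sorted(patients, key=key, reverse=True)[:k]]

# Ranking by past cost (the flawed proxy) vs. by illness burden directly.
by_cost = top_k(patients, key=lambda p: p[3])     # -> ['p1', 'p4', 'p3']
by_illness = top_k(patients, key=lambda p: p[2])  # -> ['p4', 'p1', 'p2']

# Patient p2 is among the sickest but is dropped by the cost-based ranking,
# because less money was historically spent on her care.
```

Nothing in the cost-based ranking is "wrong" as code; the bias enters entirely through the choice of proxy, which is why it survived review until researchers re-ranked patients by illness.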

Journalists and commentators pinned the blame for racial bias on Optum's algorithm. In reality, the technology wasn't the problem. At issue were the doctors who had failed to provide sufficient medical care to the Black patients in the first place. The data was faulty because humans had failed to provide equitable care.

Artificial intelligence and algorithmic approaches can only be as accurate, reliable and helpful as the data they're given. If the human inputs are unreliable, the data will be, as well.

Let's use the identification of breast cancer as an example. As much as one-third of the time, two radiologists looking at the same mammogram will disagree on the diagnosis. Therefore, if AI software were programmed to act like humans, the technology would be wrong one-third of the time.

Instead, AI can store and compare tens of thousands of mammogram images — comparing examples of women with cancer and without — to detect hundreds of subtle differences that humans often overlook. It can remember all those tiny differences when reviewing new mammograms, which is why AI is already estimated to be 10 percent more accurate than the average radiologist.

What AI can't recognize is whether it's being fed biased or incorrect information. Adjusting for bias in research and data aggregation requires that humans acknowledge their faulty assumptions and decisions, and then modify the inputs accordingly.

Correcting these types of errors should be standard practice by now. After all, any research project that seeks funding and publication is required to include an analysis of potential bias, based on the study's participants. As an example, investigators who want to compare people's health in two cities would be required to modify the study's design if they failed to account for major differences in age, education or other factors that might inappropriately tilt the results.
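The two-city example above is the classic confounding problem, and the standard remedy is direct standardization: apply each city's age-specific rates to one shared reference population before comparing. A hedged sketch with invented numbers (the cities, rates, and populations here are hypothetical) shows how an apparent health gap can vanish once age mix is accounted for:

```python
# Disease rate per 1,000 people by age band. In this example the two
# cities have IDENTICAL age-specific rates; only their age mix differs.
rates = {
    "CityX": {"18-44": 5.0, "45-64": 15.0, "65+": 40.0},
    "CityY": {"18-44": 5.0, "45-64": 15.0, "65+": 40.0},
}
# Population counts by age band (CityY skews much older).
pops = {
    "CityX": {"18-44": 6000, "45-64": 3000, "65+": 1000},
    "CityY": {"18-44": 2000, "45-64": 3000, "65+": 5000},
}
# Shared standard population used for the adjusted comparison.
reference = {"18-44": 4000, "45-64": 3000, "65+": 3000}

def crude_rate(city):
    """Raw rate per 1,000: weights each age band by the city's own mix."""
    total = sum(pops[city].values())
    cases = sum(rates[city][a] * pops[city][a] for a in rates[city])
    return cases / total

def adjusted_rate(city):
    """Age-standardized rate: weights every city by the same reference mix."""
    total = sum(reference.values())
    cases = sum(rates[city][a] * reference[a] for a in rates[city])
    return cases / total

# Crude rates suggest CityY is far less healthy (25.5 vs. 11.5 per 1,000),
# but the adjusted rates are identical (18.5): the entire gap was age mix.
```

This is the same discipline the article asks of AI projects: the adjustment only happens because a human decided age was a confounder and built it into the analysis.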

Given how often data is flawed, the possibility of racial bias should be explicitly factored into every AI project. With universities and funding agencies increasingly focused on racial issues in medicine, this expectation has the potential to become routine in the future. Once it is, AI will force researchers to confront bias in health care. As a result, the conclusions and recommendations they provide will be more accurate and equitable.

Thirteen months into the pandemic, Covid-19 continues to kill Black individuals at three times the rate of white people. For years, health plans and hospital leaders have talked about the need to address health disparities like these. And yet, despite good intentions, the solutions they put forth always look a lot like the failed efforts of the past.

Addressing systemic racism in medicine requires that we analyze far more data (all at once) than we do today. AI is the perfect application for this task. What we need is a national commitment to use these types of technologies to answer medicine's most urgent questions.

There is no antidote to the problem of racism in medicine. But combining AI with a national commitment to root out bias in health care would be a good start, putting our medical system on a path toward antiracism.
