This is the second entry in a two-part op-ed series on institutional racism in American medicine.
A little over a year before the coronavirus pandemic reached our shores, the racism problem in U.S. health care was making big headlines.
But it wasn't doctors or nurses being accused of bias. Rather, a study published in Science concluded that a predictive health care algorithm had, itself, discriminated against Black patients.
The story originated with Optum, a subsidiary of insurance giant UnitedHealth Group, which had designed an application to identify high-risk patients with untreated chronic diseases. The company's ultimate goal was to help re-distribute medical resources to those who'd benefit most from added care. And to figure out who was most in need, Optum's algorithm assessed the cost of each patient's past treatments.
Unaccounted for in the algorithm's design was this essential fact: The average Black patient receives $1,800 less per year in total medical care than a white patient with the same set of health problems. And, sure enough, when the researchers went back and re-ranked patients by their illnesses (rather than the cost of their care), the percentage of Black patients who should have been enrolled in specialized care programs jumped from 18 percent to 47 percent.
Journalists and commentators pinned the blame for racial bias on Optum's algorithm. In reality, technology wasn't the problem. At issue were the doctors who had failed to provide sufficient medical care to Black patients in the first place. In other words, the data was faulty because humans had failed to provide equitable care.
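This kind of proxy bias is easy to see in miniature. The sketch below is a hypothetical illustration, not Optum's actual code, and the patient data is invented: when one group systematically receives less care, ranking patients by past spending (the proxy) and ranking them by how sick they are (the actual need) produce different lists.

```python
# Hypothetical illustration of proxy bias. Patient data is invented
# for demonstration only and does not come from the Science study.

patients = [
    # (patient, chronic_conditions, annual_cost_usd)
    ("A", 4, 6200),   # sickest patient, but historically under-treated
    ("B", 2, 8000),
    ("C", 3, 7500),
    ("D", 1, 3000),
]

def top_k_by_cost(pts, k):
    """Rank by past spending -- the proxy the algorithm reportedly used."""
    return [p[0] for p in sorted(pts, key=lambda p: p[2], reverse=True)[:k]]

def top_k_by_illness(pts, k):
    """Rank by chronic-condition count -- a direct measure of need."""
    return [p[0] for p in sorted(pts, key=lambda p: p[1], reverse=True)[:k]]

print(top_k_by_cost(patients, 2))     # patient A, the sickest, is missed
print(top_k_by_illness(patients, 2))  # patient A surfaces when need is measured
```

The algorithm in the sketch is not "wrong" about the spending data; the spending data is wrong about need, which is the article's point.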
Artificial intelligence and algorithmic approaches can only be as accurate, reliable and helpful as the data they're given. If the human inputs are unreliable, the data will be, as well.
Let's use the identification of breast cancer as an example. As much as one-third of the time, two radiologists looking at the same mammogram will disagree on the diagnosis. Therefore, if AI software were programmed to act like humans, the technology would be wrong one-third of the time.
Instead, AI can store and compare tens of thousands of mammogram images — comparing examples of women with cancer and without — to detect hundreds of subtle differences that humans often overlook. It can remember all those tiny differences when reviewing new mammograms, which is why AI is already estimated to be 10 percent more accurate than the average radiologist.
What AI can't recognize is whether it's being fed biased or incorrect information. Adjusting for bias in research and data aggregation requires that humans acknowledge their faulty assumptions and decisions, and then modify the inputs accordingly.
Correcting these types of errors should be standard practice by now. After all, any research project that seeks funding and publication is required to include an analysis of potential bias, based on the study's participants. As an example, investigators who want to compare people's health in two cities would be required to modify the study's design if they failed to account for major differences in age, education or other factors that might inappropriately tilt the results.
Given how often data is flawed, the possibility of racial bias should be explicitly factored into every AI project. With universities and funding agencies increasingly focused on racial issues in medicine, this expectation has the potential to become routine in the future. Once it is, AI will force researchers to confront bias in health care. As a result, the conclusions and recommendations they provide will be more accurate and equitable.
Thirteen months into the pandemic, Covid-19 continues to kill Black individuals at a rate three times that of white individuals. For years, health plans and hospital leaders have talked about the need to address health disparities like these. And yet, despite good intentions, the solutions they put forth always look a lot like the failed efforts of the past.
Addressing systemic racism in medicine requires that we analyze far more data (all at once) than we do today. AI is the perfect tool for this task. What we need is a national commitment to use these types of technologies to answer medicine's most urgent questions.
There is no antidote to the problem of racism in medicine. But combining AI with a national commitment to root out bias in health care would be a good start, putting our medical system on a path toward antiracism.
An Independent Voter's Perspective on Current Political Divides
In the column "Is Donald Trump Right?", Fulcrum Executive Editor Hugo Balta wrote:
For millions of Americans, President Trump’s second term isn’t a threat to democracy—it’s the fulfillment of a promise they believe was long overdue.
Is Donald Trump right?
Should the presidency serve as a force for disruption or a safeguard of preservation?
Balta invited readers to share their thoughts at newsroom@fulcrum.us.
David Levine from Portland, Oregon, shared these thoughts...
I am an independent voter who voted for Kamala Harris in the last election.
I pay very close attention to current events, and I try to avoid taking other people's opinions as fact, so please read what follows with that in mind:
Is Trump right? On some things, absolutely.
As to DEI, there is a strong feeling that you cannot fight racism with more racism or sexism with more sexism. Standards have to be the same across the board, and the idea that only white people can be racist is one that I think a lot of us find delusional on its face. The question is not whether we want equality in the workplace, but whether these programs, despite their claims to virtue, are the mechanism to achieve it; many of us feel they are not.
I think that if the Democrats want to take back immigration as an issue, then every illegal alien, however discovered, needs to be processed, and sanctuary cities need to end. Once every illegal alien has been found, Democrats could argue for an amnesty for those who have shown themselves to be good actors for a period of time. But the dynamic of simply ignoring those who break the law by coming here illegally is, I think, a losing issue for the Democrats; they need to bend the knee and make a deal.
I think you have to quit calling the man Hitler or a fascist because an actual fascist would simply shoot the protesters, the journalists, and anyone else who challenges him. And while he definitely has authoritarian tendencies, the Democrats are overplaying their hand using those words, and it makes them look foolish.
Most of us understand that the tariffs are a game of economic chicken, and whether they succeed or not depends on who blinks before the midterms. Still, the Democrats' continuous attacks on the man make them look disloyal to the country, not just to Trump.
Referring to any group of people as marginalized is to many of us the same as referring to them as lesser, and it seems racist and insulting.
We invite you to read the opinions of other Fulcrum readers:
Trump's Policies: A Threat to Farmers and American Values
The Trump Era: A Bitter Pill for American Renewal
Federal Hill's Warning: A Baltimorean's Reflection on Leadership
Also, check out "Is Donald Trump Right?" and consider accepting Hugo's invitation to share your thoughts at newsroom@fulcrum.us.
The Fulcrum will select a range of submissions to share with readers as part of our ongoing civic dialogue.
We offer this platform for discussion and debate.