This is the second entry in a two-part op-ed series on institutional racism in American medicine.
A little over a year before the coronavirus pandemic reached our shores, the racism problem in U.S. health care was making big headlines.
But it wasn't doctors or nurses being accused of bias. Rather, a study published in Science concluded that a predictive health care algorithm had, itself, discriminated against Black patients.
The story originated with Optum, a subsidiary of insurance giant UnitedHealth Group, which had designed an application to identify high-risk patients with untreated chronic diseases. The company's ultimate goal was to help redistribute medical resources to those who would benefit most from added care. To figure out who was most in need, Optum's algorithm assessed the cost of each patient's past treatments.
Unaccounted for in the algorithm's design was this essential fact: The average Black patient receives $1,800 less per year in total medical care than the average white patient with the same set of health problems. Sure enough, when the researchers went back and re-ranked patients by their illnesses (rather than by the cost of their care), the percentage of Black patients who should have been enrolled in specialized care programs jumped from 18 percent to 47 percent.
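The mechanism is easy to demonstrate with a toy simulation. The sketch below is hypothetical (it is not the study's data or Optum's actual model): patients in both groups have the same distribution of illness, but recorded spending understates need for Black patients by a fixed gap, so ranking by cost selects far fewer of them for the high-risk tier than ranking by illness would.

```python
import random

random.seed(0)

# Hypothetical simulation: each patient has a true illness burden, but
# recorded cost understates it for Black patients, mirroring the ~$1,800/year
# care gap the study described. All numbers are illustrative.
patients = []
for _ in range(10_000):
    group = "Black" if random.random() < 0.5 else "white"
    illness = random.gauss(50, 15)       # true severity (arbitrary units)
    cost = illness * 100                 # spending roughly tracks illness...
    if group == "Black":
        cost -= 1800                     # ...minus the observed care gap
    patients.append((group, illness, cost))

def top_share(key, frac=0.1):
    """Share of Black patients in the top `frac` of patients ranked by `key`."""
    ranked = sorted(patients, key=key, reverse=True)
    top = ranked[: int(len(ranked) * frac)]
    return sum(1 for g, _, _ in top if g == "Black") / len(top)

by_cost = top_share(lambda p: p[2])      # the proxy the algorithm used
by_illness = top_share(lambda p: p[1])   # what it should have measured
print(f"Black share of high-risk tier, ranked by cost:    {by_cost:.0%}")
print(f"Black share of high-risk tier, ranked by illness: {by_illness:.0%}")
```

Because both groups are equally sick by construction, ranking by illness puts Black patients in the high-risk tier at roughly their population share, while ranking by cost sharply undercounts them — the same qualitative pattern the researchers found.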
Journalists and commentators pinned the blame for racial bias on Optum's algorithm. In reality, technology wasn't the problem. At issue were the doctors who had failed to provide sufficient medical care to Black patients in the first place. In other words, the data was faulty because humans had failed to provide equitable care.
Artificial intelligence and algorithmic approaches can only be as accurate, reliable and helpful as the data they're given. If the human inputs are unreliable, the data will be, as well.
Let's use the identification of breast cancer as an example. As much as one-third of the time, two radiologists looking at the same mammogram will disagree on the diagnosis. If AI software were simply programmed to mimic the average radiologist, the technology would inherit that same rate of error.
Instead, AI can store and compare tens of thousands of mammogram images — comparing examples of women with cancer and without — to detect hundreds of subtle differences that humans often overlook. It can remember all those tiny differences when reviewing new mammograms, which is why AI is already estimated to be 10 percent more accurate than the average radiologist.
What AI can't recognize is whether it's being fed biased or incorrect information. Adjusting for bias in research and data aggregation requires that humans acknowledge their faulty assumptions and decisions, and then modify the inputs accordingly.
Correcting these types of errors should be standard practice by now. After all, any research project that seeks funding and publication is required to include an analysis of potential bias, based on the study's participants. As an example, investigators who want to compare people's health in two cities would be required to modify the study's design if they failed to account for major differences in age, education or other factors that might inappropriately tilt the results.
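The kind of adjustment described above — comparing health outcomes across two cities with different age mixes — can be sketched with a toy direct age standardization. All numbers below are illustrative, not drawn from any real study: both cities have identical age-specific rates, yet their crude rates diverge purely because one city is older, and standardizing to a common reference population removes the distortion.

```python
# Hypothetical two-city comparison. Each entry maps an age stratum to
# (population share, rate of the condition) for that city.
city_a = {"under_40": (0.60, 0.05), "40_plus": (0.40, 0.20)}
city_b = {"under_40": (0.30, 0.05), "40_plus": (0.70, 0.20)}

def crude_rate(city):
    # Overall rate: each stratum's rate weighted by that city's own age mix.
    return sum(share * rate for share, rate in city.values())

def standardized_rate(city, reference):
    # Direct standardization: weight each stratum's rate by the *reference*
    # population's age shares instead of the city's own.
    return sum(reference[s][0] * rate for s, (_, rate) in city.items())

print(f"crude rates:            A={crude_rate(city_a):.3f}  B={crude_rate(city_b):.3f}")
print(f"standardized (ref=A):   A={standardized_rate(city_a, city_a):.3f}  "
      f"B={standardized_rate(city_b, city_a):.3f}")
```

City B's crude rate looks roughly 40 percent higher than City A's, but the age-standardized rates are identical — the raw gap was entirely an artifact of City B's older population, exactly the sort of confounding a study design must account for.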
Given how often data is flawed, the possibility of racial bias should be explicitly factored into every AI project. With universities and funding agencies increasingly focused on racial issues in medicine, this expectation has the potential to become routine in the future. Once it is, AI will force researchers to confront bias in health care. As a result, the conclusions and recommendations they provide will be more accurate and equitable.
Thirteen months into the pandemic, Covid-19 continues to kill Black Americans at a rate three times higher than white Americans. For years, health plans and hospital leaders have talked about the need to address health disparities like these. And yet, despite good intentions, the solutions they put forth always look a lot like the failed efforts of the past.
Addressing systemic racism in medicine requires that we analyze far more data (all at once) than we do today. AI is the perfect tool for the task. What we need is a national commitment to use these types of technologies to answer medicine's most urgent questions.
There is no antidote to the problem of racism in medicine. But combining AI with a national commitment to root out bias in health care would be a good start, putting our medical system on a path toward antiracism.

Eric Trump, the newly appointed ALT5 board director of World Liberty Financial, walks outside of the NASDAQ in Times Square as they mark the $1.5 billion partnership between World Liberty Financial and ALT5 Sigma with the ringing of the NASDAQ opening bell, on Aug. 13, 2025, in New York City.
Why does the Trump family always get a pass?
Deputy Attorney General Todd Blanche joined ABC's "This Week" on Sunday to defend the Trump administration on a range of controversies: the Epstein files release, the events in Minneapolis and more. He was also asked about possible conflicts of interest between President Trump's family business and his job. Specifically, Blanche was asked about a very sketchy deal Trump's son Eric signed with the UAE's national security adviser, Sheikh Tahnoon.
Shortly before Trump was inaugurated in early 2025, Tahnoon invested $500 million in the Trump-owned World Liberty, a then newly launched cryptocurrency outfit. A few months later, the UAE was granted permission to purchase sensitive American AI chips. According to the Wall Street Journal, which broke the story, "the deal marks something unprecedented in American politics: a foreign government official taking a major ownership stake in an incoming U.S. president's company."
“How do you respond to those who say this is a serious conflict of interest?” ABC host George Stephanopoulos asked.
“I love it when these papers talk about something being unprecedented or never happening before,” Blanche replied, “as if the Biden family and the Biden administration didn’t do exactly the same thing, and they were just in office.”
Blanche went on to boast about how the president is utterly transparent regarding his questionable business practices: “I don’t have a comment on it beyond Trump has been completely transparent when his family travels for business reasons. They don’t do so in secret. We don’t learn about it when we find a laptop a few years later. We learn about it when it’s happening.”
Sadly, Stephanopoulos didn’t offer the obvious response, which may have gone something like this: “OK, but the president and countless leading Republicans insisted that President Biden was the head of what they dubbed ‘the Biden Crime family’ and insisted his business dealings were corrupt, and indeed that his corruption merited impeachment. So how is being ‘transparent’ about similar corruption a defense?”
Now, I should be clear that I do think the Biden family’s business dealings were corrupt, whether or not laws were broken. Others disagree. I also think Trump’s business dealings appear to be worse in many ways than even what Biden was alleged to have done. But none of that is relevant. The standard set by Trump and Republicans is the relevant political standard, and by the deputy attorney general’s own account, the Trump administration is doing “exactly the same thing,” just more openly.
Since when is being more transparent about wrongdoing a defense? Try telling a cop or judge, “Yes, I robbed that bank. I’ve been completely transparent about that. So, what’s the big deal?”
This is just a small example of the broader dysfunction in the way we talk about politics.
Americans have a special hatred for hypocrisy. I think it goes back to the founding era. As Alexis de Tocqueville observed in "Democracy in America," the old world had a different way of dealing with the moral shortcomings of leaders. Rank had its privileges. Nobles, never mind kings, were entitled to behave in ways that were forbidden to the little people.
In America, titles of nobility were banned both in the Constitution and in our democratic culture. In a society built on notions of equality (the obvious exceptions of Black people, women and Native Americans notwithstanding), no one has access to special carve-outs or exemptions as to what is right and wrong. Claiming them, particularly in secret, feels like a betrayal of the whole idea of equality.
The problem in the modern era is that elites — of all ideological stripes — have violated that bargain. The result isn’t that we’ve abandoned any notion of right and wrong. Instead, by elevating hypocrisy to the greatest of sins, we end up weaponizing the principles, using them as a cudgel against the other side but not against our own.
Pick an issue: violent rhetoric by politicians, sexual misconduct, corruption and so on. With every revelation, almost immediately the debate becomes a riot of whataboutism. Team A says that Team B has no right to criticize because they did the same thing. Team B points out that Team A has switched positions. Everyone has a point. And everyone is missing the point.
Sure, hypocrisy is a moral failing, and partisan inconsistency is an intellectual one. But neither changes the objective facts. This is something you're supposed to learn as a child: It doesn't matter what everyone else is doing or saying; wrong is wrong. It's also something lawyers like Blanche are supposed to know. Telling a judge that the hypocrisy of the prosecutor — or your client's transparency — means your client did nothing wrong would earn you nothing but a laugh.
Jonah Goldberg is editor-in-chief of The Dispatch and the host of The Remnant podcast. His Twitter handle is @JonahDispatch.