Generative AI Can Save Lives: Two Diverging Paths In Medicine

Doctor using AI technology
Akarapong Chairean/Getty Images

Generative AI is advancing at breakneck speed. Already, it’s outperforming doctors on national medical exams and in making difficult diagnoses. Microsoft recently reported that its latest AI system correctly diagnosed complex medical cases 85.5% of the time, compared to just 20% for physicians. OpenAI’s newly released GPT-5 model goes further still, delivering its most accurate and responsive performance yet on health-related queries.

As GenAI tools double in power annually, two distinct approaches are emerging for how they might help patients.


One path involves FDA-approved tools built by startups and established technology companies. The other empowers patients to safely use existing tools like ChatGPT, Gemini, and Claude.

Each path has advantages and tradeoffs. Both are likely to shape healthcare’s future.

To better understand what’s at stake, it’s first helpful to examine how generative AI differs from the FDA-approved technologies used in medicine today.

Narrow AI

Medicine has relied on “narrow AI” applications for more than two decades, using models trained to complete specific tasks with structured clinical data.

These tools are programmed to compare two data sets, identify subtle differences, and assign a precise probability to each case. In radiology, for example, narrow AI models have been trained on thousands of mammograms to distinguish those showing early-stage breast cancer from those with benign conditions like fibrocystic disease. These tools can detect differences too subtle for the human eye, resulting in up to 20% greater diagnostic accuracy than doctors working alone.
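To make the idea concrete, here is a minimal sketch of what narrow-AI inference looks like in code, assuming PyTorch is available. The tiny network and the random tensor standing in for a preprocessed scan are illustrative placeholders, not a real diagnostic model; the point is that a fixed, trained model maps each input to one reproducible probability.

```python
# Minimal narrow-AI inference sketch (illustrative only, not a real
# diagnostic model). Assumes PyTorch is installed.
import torch
import torch.nn as nn

class MammogramClassifier(nn.Module):
    """Binary classifier: one malignancy logit per image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = MammogramClassifier().eval()  # in practice, trained weights load here
scan = torch.randn(1, 1, 224, 224)    # stand-in for one preprocessed grayscale scan
with torch.no_grad():
    prob = torch.sigmoid(model(scan)).item()  # same input, same probability, every time
print(f"Estimated malignancy probability: {prob:.1%}")
```

That determinism, one input yielding one repeatable output, is what lets regulators verify a narrow AI tool much as they would a lab test.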

Because narrow AI systems produce consistent, repeatable results, they fit neatly within the FDA’s current regulatory framework. Approval requires measurable data quality, algorithmic transparency, and reproducibility of outcomes.

Generative AI: A new kind of medical expertise

Generative AI models are built differently. Rather than being trained on structured datasets for specific tasks, they learn from the near-totality of internet-accessible content, including thousands of medical textbooks, academic journals, and real-world clinical data.

This breadth allows GenAI tools to answer virtually any medical question. But a large language model's responses vary with how users frame questions, prompt the model, and follow up for clarification. That variability makes it impossible for the FDA to evaluate these tools' accuracy and quality under its current framework.

Two distinct pathways are emerging to bring generative AI into clinical practice. Maximizing their impact will require the government to change how it evaluates and supports technological innovation.

1. The traditional path: FDA-approved, venture-backed

As medical costs rise and patient outcomes stagnate, private technology companies are racing to develop FDA-approved generative AI tools that can help with diagnosis, treatment, and disease management.

This approach mirrors the narrow AI model: high-priced, highly regulated tools that most American families can afford only through insurance coverage.

With venture funding, companies can fine-tune open-source foundation models (like DeepSeek or Meta’s LLaMA) using a process called “distillation.” This involves extracting domain-specific knowledge and retraining the model with real-world clinical experiences, such as tens of thousands of X-rays (including radiologists’ readings) or anonymized transcripts of patient-provider conversations.
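As a rough illustration of that retraining step, the sketch below fine-tunes an open-weight model on domain text with the Hugging Face transformers library. The base model identifier, the transcript file, and its "text" field are hypothetical placeholders; a real effort would add de-identification, rigorous evaluation, and clinical safety review.

```python
# Hypothetical fine-tuning sketch using Hugging Face transformers/datasets.
# Model ID, data file, and record schema are placeholders, not a real product.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

base = "meta-llama/Llama-3.1-8B"  # open-weight base model (access-gated; illustrative)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical corpus of anonymized patient-provider conversations,
# one JSON record per line with a "text" field.
corpus = load_dataset("json", data_files="clinical_transcripts.jsonl")["train"]
corpus = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="diabetes-assistant",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=corpus,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # adapts the generalist model to clinical language and cases
```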

Consider how this approach might impact diabetes management. Today, fewer than half of patients achieve adequate disease control. The consequences include hundreds of thousands of preventable heart attacks, kidney failures, and limb amputations each year. A generative AI tool trained specifically for diabetes could replicate the approach of a skilled chronic disease nurse: asking the right questions, interpreting patient data, and offering personalized guidance to help users better manage their blood sugar levels.

This path already appears to have federal backing. The Trump administration recently launched its Medicare-funded Health Tech Ecosystem initiative, partnering with more than 60 tech and healthcare firms to pilot AI-enabled tools for chronic disease management, including diabetes and obesity.

Although distillation is faster and cheaper than building an AI model from scratch, the timeline to FDA approval could still span several years and cost tens of millions of dollars. And any adverse outcome could expose companies to legal liability.

2. The alternate path: Empowering patients with GenAI expertise

This second model flips the innovation equation. Instead of relying on expensive, FDA-approved tools developed by private tech companies, it empowers patients to use low-cost, publicly available generative AI to manage their own health better. This can be accomplished through digital walkthroughs, printed guides, YouTube videos, or brief in-person sessions.

For example, a patient might input their blood pressure, glucose readings, or new symptoms and receive reliable, evidence-based advice from ChatGPT or Claude: whether a medication change is needed, when to alert their doctor, or if emergency care is warranted. Similarly, patients working with their physicians could use these LLMs to detect early signs of post-operative infection, worsening heart failure, or neurological decline.
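As a sketch of what such a structured query looks like under the hood, here is one way to pose it through the OpenAI Python SDK. The model name, the readings, and the prompt wording are illustrative assumptions; in practice, most patients would type the same question directly into a chat window, and the reply is education, not a medical order.

```python
# Illustrative patient query via the OpenAI Python SDK (pip install openai).
# Model name, readings, and prompt wording are assumptions for this sketch.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

readings = {
    "fasting_glucose_mg_dl": [138, 152, 171],  # last three mornings
    "blood_pressure": "148/92",
    "new_symptom": "blurred vision since yesterday",
}

prompt = (
    "I have type 2 diabetes. My recent home readings are "
    f"{readings}. Based on standard clinical guidelines, should I "
    "adjust anything, call my doctor, or seek urgent care? "
    "Explain your reasoning and the thresholds you rely on."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```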

With 40% of doctors already engaged in “gig work,” an ample supply of clinicians from every specialty would be available to contribute their expertise to develop these training tools.

This model would bypass the need for costly product development or FDA approval. And because it offers education, not direct medical care, it would create minimal legal liability.

Government support for both models

These approaches are not mutually exclusive. Both have the potential to improve care, reduce costs, and extend access. And both will benefit from targeted government support.

The traditional path will require companies to demonstrate the reliability of their tools by testing the accuracy of their recommendations against those of clinicians. Once a tool proves at least equivalent, the FDA would grant approval.

The alternate path of educating patients to use existing large language models will benefit from educational grants and added expertise from agencies like the CDC and NIH, partnering with medical societies to develop, test, and distribute training materials. These public-private efforts would equip patients with the knowledge to use GenAI safely and effectively without waiting years for new products or approvals.

Together, these models offer a safer and more affordable future for American healthcare.

Robert Pearl, the author of “ChatGPT, MD,” teaches at both the Stanford University School of Medicine and the Stanford Graduate School of Business. He is a former CEO of The Permanente Medical Group.
