Opinion

Medical Schools Are Falling Behind in the Age of Generative AI

"To prepare tomorrow’s doctors, medical school deans, elected officials, and health care regulators must invest in training that matches the pace and promise of this technology," writes Dr. Robert Pearl.


While colleges across the nation are adapting their curricula to harness the power of generative AI, U.S. medical schools remain dangerously behind.

Most students entering medicine today will graduate without ever being trained to use GenAI tools effectively. That must change. To prepare tomorrow’s doctors – and protect tomorrow’s patients – medical school deans, elected officials, and health care regulators must invest in training that matches the pace and promise of this technology.


Universities embrace AI as medical schools fall behind

Across the country, colleges and universities are reimagining how they educate students in the age of generative AI.

  • At Duke University, every new student receives a custom AI assistant dubbed DukeGPT.
  • At California State University, more than 460,000 students across 23 campuses now have access to a 24/7 ChatGPT toolkit.

These aren’t niche experiments. They’re part of a sweeping, systems-level transformation aimed at preparing graduates for a rapidly evolving workforce.

Most medical schools, however, have not kept pace. Instead of training students to apply modern tools toward clinical care, they continue to emphasize memorization — testing students on biochemical pathways and obscure facts rarely used in practice.

Early fears about plagiarism and declining academic rigor led many university departments to proceed cautiously after ChatGPT’s release in 2022. But since then, an increasing number of these educational institutions have shifted from policing AI to requiring faculty to incorporate GenAI into their coursework. And the American Federation of Teachers announced earlier this month that it would start an AI training hub for educators with $23 million from tech giants Microsoft, OpenAI, and Anthropic.

Medical education remains an outlier. A recent Educause study found that just 14% of medical schools have developed a formal GenAI curriculum, compared to 60% of undergraduate programs. Most medical school leaders and doctors still regard large language models as administrative aids rather than essential clinical tools.

This view is short-sighted. Within a few years, physicians will rely on generative AI to synthesize vast amounts of medical research, identify diagnostic patterns, and recommend treatment options tailored to the latest evidence. Patients will arrive at appointments already equipped with GenAI-assisted insights.

Used responsibly, generative AI can help prevent the 400,000 deaths each year from diagnostic errors, 250,000 deaths from preventable medical mistakes, and 500,000 deaths from poorly controlled chronic diseases. Elected officials and regulators need to support this life-saving approach.

How medical schools can catch up

In the past, medical students were evaluated on their ability to recall information. In the future, they will be judged by their ability to help AI-empowered patients manage chronic illnesses, prevent life-threatening disease complications, and maximize their health.

With generative AI capabilities doubling every year, students matriculating now will enter clinical practice equipped with tools more than 30 times as powerful as today’s models (five annual doublings amounts to roughly a 32-fold increase). Yet few doctors will have received structured training in how to use them effectively.

Modernizing medical education starts with faculty training. Students entering medical school in 2025 will arrive already comfortable using generative AI tools like ChatGPT. Most instructors, however, will need to build that fluency.

To close this gap, academic leaders should provide faculty training programs before the start of the next academic year. These sessions would introduce educators to prompt engineering, output evaluation, and reliability assessment: foundational skills for teaching and applying GenAI in clinical scenarios.

Once faculty are prepared, schools would begin building case-based curricula that reflect modern clinical realities.

Sample Exercise: Managing chronic disease with GenAI support

In this scenario, students imagine seeing a 45-year-old man during a routine checkup. The patient has no prior medical problems, but on physical exam his blood pressure is 140/100 mm Hg.

First, students walk through the traditional diagnostic process:

  • What additional history would they obtain?
  • Which physical findings warrant follow-up?
  • What laboratory tests would they order?
  • What treatment and follow-up plan would they recommend?

Next, they enter the same case into a generative AI tool and compare its output to their own. Where do they align? Where do they differ (and, importantly, why)?
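To make that comparison step concrete, a minimal sketch is shown below. It assumes the OpenAI Python SDK and an illustrative model name; the prompt wording is an assumption for teaching purposes, and any chat-style GenAI interface would serve the same role.

```python
# Minimal sketch of the "compare with GenAI" step, assuming the OpenAI
# Python SDK (pip install openai) and an API key in the environment.
# The model name and prompt wording are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vignette = (
    "A 45-year-old man with no prior medical problems is seen for a routine "
    "checkup. On physical exam, his blood pressure is 140/100 mm Hg."
)

questions = (
    "1. What additional history would you obtain?\n"
    "2. Which physical findings warrant follow-up?\n"
    "3. What laboratory tests would you order?\n"
    "4. What treatment and follow-up plan would you recommend?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever model the school licenses
    messages=[
        {
            "role": "system",
            "content": "You are assisting a medical student working through "
                       "a clinical case. Answer step by step and explain the "
                       "reasoning behind each recommendation.",
        },
        {"role": "user", "content": f"{vignette}\n\n{questions}"},
    ],
)

# Students place their own answers alongside the model's output and discuss
# where the two agree, where they differ, and why.
print(response.choices[0].message.content)
```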

Finally, students design a care plan that incorporates GenAI’s growing capabilities, such as:

  • Analyzing data from at-home blood pressure monitors (a simple sketch of this step follows the list).
  • Customizing educational guidance.
  • Enabling patients to actively manage their chronic diseases between visits.
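As a concrete illustration of the first capability above, here is a minimal sketch of the kind of between-visit summary a GenAI-backed tool might automate. The readings, threshold, and statistics are illustrative assumptions only, not a clinical standard; a real tool would follow the care plan set by the treating clinician.

```python
# Minimal sketch of summarizing at-home blood pressure readings between
# visits. Data, threshold, and statistics here are illustrative only.
from statistics import mean

# (systolic, diastolic) readings logged by the patient over two weeks
readings = [
    (142, 96), (138, 92), (135, 90), (144, 98),
    (132, 88), (140, 94), (137, 91), (146, 99),
]

# Illustrative alert threshold; the actual cut-off belongs in the care plan.
ALERT_SYSTOLIC, ALERT_DIASTOLIC = 140, 90

avg_systolic = mean(s for s, _ in readings)
avg_diastolic = mean(d for _, d in readings)
elevated = [r for r in readings
            if r[0] >= ALERT_SYSTOLIC or r[1] >= ALERT_DIASTOLIC]

print(f"Average: {avg_systolic:.0f}/{avg_diastolic:.0f} mm Hg")
print(f"{len(elevated)} of {len(readings)} readings at or above "
      f"{ALERT_SYSTOLIC}/{ALERT_DIASTOLIC} mm Hg")

# A GenAI layer could turn this summary into plain-language guidance for the
# patient and a concise between-visit note for the physician.
```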

This type of training – integrated alongside the traditional curriculum – prepares future clinicians not just to master the technology but to understand how it can be used to transform medical care.

A call to government: Empower the next generation of physicians

Medical schools can’t do this alone. Because most physician training is funded through federal grants and Medicare-supported residency programs, meaningful reform will require coordinated leadership from academic institutions, government agencies, and lawmakers.

Preparing future doctors to use GenAI safely and effectively should be treated as a national imperative. Medicare will need to fund new educational initiatives, and agencies like the FDA must streamline the approval process for GenAI-assisted clinical applications.

This month, the Trump administration encouraged U.S. companies and nonprofits to develop AI training programs for schools, educators, and students. Leading tech companies — including Nvidia, Amazon, and Microsoft — quickly signed on.

If medical school deans demonstrate similar openness to innovation, we can expect policymakers and industry leaders to invest in medical education, too.

But if medical educators and government leaders hesitate, for-profit companies and private equity firms will fill the void. And they will use GenAI not to improve patient care but primarily to increase margins and drive revenue.

As deans prepare to welcome the class of 2029 (and as lawmakers face the growing costs of American health care), they must ask themselves:

Are we preparing students to practice yesterday’s medicine or to lead tomorrow’s?

Dr. Robert Pearl, the author of “ChatGPT, MD,” teaches at both the Stanford University School of Medicine and the Stanford Graduate School of Business. He is a former CEO of The Permanente Medical Group.
