Medical Schools Are Falling Behind in the Age of Generative AI


While colleges across the nation are adapting their curricula to harness the power of generative AI, U.S. medical schools remain dangerously behind.

Most students entering medicine today will graduate without ever being trained to use GenAI tools effectively. That must change. To prepare tomorrow’s doctors – and protect tomorrow’s patients – medical school deans, elected officials, and health care regulators must invest in training that matches the pace and promise of this technology.


Universities embrace AI as medical schools fall behind

Across the country, colleges and universities are reimagining how they educate students in the age of generative AI.

  • At Duke University, every new student receives a custom AI assistant dubbed DukeGPT.
  • At California State University, more than 460,000 students across 23 campuses now have access to a 24/7 ChatGPT toolkit.

These aren’t niche experiments. They’re part of a sweeping, systems-level transformation aimed at preparing graduates for a rapidly evolving workforce.

Most medical schools, however, have not kept pace. Instead of training students to apply modern tools in clinical care, they continue to emphasize memorization, testing students on biochemical pathways and obscure facts rarely used in practice.

Early fears about plagiarism and declining academic rigor led many university departments to proceed cautiously after ChatGPT’s release in 2022. But since then, an increasing number of these educational institutions have shifted from policing AI to requiring faculty to incorporate GenAI into their coursework. And the American Federation of Teachers announced earlier this month that it would start an AI training hub for educators with $23 million from tech giants Microsoft, OpenAI, and Anthropic.

Medical education remains an outlier. A recent Educause study found that just 14% of medical schools have developed a formal GenAI curriculum, compared to 60% of undergraduate programs. Most medical school leaders and doctors still regard large language models as administrative aids rather than essential clinical tools.

This view is short-sighted. Within a few years, physicians will rely on generative AI to synthesize vast amounts of medical research, identify diagnostic patterns, and recommend treatment options tailored to the latest evidence. Patients will arrive at appointments already equipped with GenAI-assisted insights.

Used responsibly, generative AI can help prevent the 400,000 deaths each year from diagnostic errors, 250,000 deaths from preventable medical mistakes, and 500,000 deaths from poorly controlled chronic diseases. Elected officials and regulators need to support this life-saving approach.

How medical schools can catch up

In the past, medical students were evaluated on their ability to recall information. In the future, they will be judged by their ability to help AI-empowered patients manage chronic illnesses, prevent life-threatening disease complications, and maximize their health.

With generative AI capabilities doubling roughly every year, students matriculating now will enter clinical practice, five or more years from today, equipped with tools over 30 times more powerful than current models (five annual doublings compound to a 32-fold increase). Yet few doctors will have received structured training on how to use them effectively.

Modernizing medical education starts with faculty training. Students entering medical school in 2025 will arrive already comfortable using generative AI tools like ChatGPT. Most instructors, however, will need to build that fluency.

To close this gap, academic leaders should provide faculty training programs before the start of the next academic year. These sessions would introduce educators to prompt engineering, output evaluation, and reliability assessment: foundational skills for teaching and applying GenAI in clinical scenarios.
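
As one illustration of what such a session might cover, here is a minimal Python sketch of an output-evaluation exercise: a structured clinical prompt paired with a simple checklist scorer. The prompt wording, checklist items, and scoring function are illustrative assumptions, not a validated rubric.

```python
# Illustrative faculty-workshop exercise: build a structured clinical
# prompt, then score a model's free-text response against a checklist.
# Prompt wording and checklist items are illustrative, not a validated rubric.

CASE_PROMPT = """You are assisting a primary care physician.
Patient: 45-year-old man at a routine checkup, no prior medical problems.
Exam: blood pressure 140/100 mm Hg; otherwise unremarkable.
List (1) additional history to obtain, (2) physical findings to follow up,
(3) laboratory tests to order, and (4) a treatment and follow-up plan,
with brief reasoning for each recommendation."""

# Items an instructor might expect a sound answer to mention.
CHECKLIST = [
    "repeat",            # confirm the elevated reading on repeat measurement
    "family history",
    "metabolic panel",   # baseline labs
    "lifestyle",         # diet, exercise, alcohol, smoking
    "follow-up",
]

def score_response(response_text: str) -> float:
    """Return the fraction of checklist items mentioned in the response."""
    text = response_text.lower()
    return sum(item in text for item in CHECKLIST) / len(CHECKLIST)
```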

Once faculty are prepared, schools can begin building case-based curricula that reflect modern clinical realities.

Sample Exercise: Managing chronic disease with GenAI support

In this scenario, students imagine seeing a 45-year-old man during a routine checkup. The patient has no prior medical problems, but on physical exam, his blood pressure reads 140/100 mm Hg.

First, students walk through the traditional diagnostic process:

  • What additional history would they obtain?
  • Which physical findings warrant follow-up?
  • What laboratory tests would they order?
  • What treatment and follow-up plan would they recommend?

Next, they enter the same case into a generative AI tool and compare its output to their own. Where do they align? Where do they differ (and, importantly, why)?
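
As a minimal sketch of that step, the snippet below sends the vignette to a model API, assuming the OpenAI Python SDK (`pip install openai`) and an API key in the environment; the model name and prompt wording are illustrative, and any comparable tool would work.

```python
# Send the same vignette to a generative AI tool so students can compare
# its suggested workup against their own. Assumes `pip install openai`
# and an OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

CASE = (
    "A 45-year-old man with no prior medical problems has a blood pressure "
    "of 140/100 mm Hg at a routine checkup. What additional history, exam "
    "findings, labs, and treatment/follow-up plan would you recommend?"
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": CASE}],
)
print(response.choices[0].message.content)  # compare with students' own workup
```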

Finally, students design a care plan that incorporates GenAI’s growing capabilities, such as:

  • Analyzing data from at-home blood pressure monitors (see the brief sketch after this list).
  • Customizing educational guidance.
  • Enabling patients to actively manage their chronic diseases between visits.
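
As a minimal sketch of the first capability, the snippet below averages a week of invented home readings and flags values above a target; the 135/85 mm Hg cutoff is one commonly cited home-monitoring threshold, and real targets should come from current clinical guidelines.

```python
# Illustrative sketch: summarize a week of at-home blood pressure readings
# and flag averages above a home-monitoring target. The readings are
# invented; the 135/85 mm Hg cutoff is one commonly cited home threshold.
from statistics import mean

readings = [  # (systolic, diastolic) pairs from a home monitor
    (142, 96), (138, 92), (145, 99), (136, 90), (141, 95), (139, 94), (137, 91),
]

avg_sys = mean(s for s, _ in readings)
avg_dia = mean(d for _, d in readings)
print(f"Average home BP: {avg_sys:.0f}/{avg_dia:.0f} mm Hg")

if avg_sys >= 135 or avg_dia >= 85:
    print("Above home target: flag for clinician review before the next visit.")
```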

This type of training – integrated alongside the traditional curriculum – prepares future clinicians not just to master the technology but to understand how it can be used to transform medical care.

A call to government: Empower the next generation of physicians

Medical schools can’t do this alone. Because most physician training is funded through federal grants and Medicare-supported residency programs, meaningful reform will require coordinated leadership from academic institutions, government agencies, and lawmakers.

Preparing future doctors to use GenAI safely and effectively should be treated as a national imperative. Medicare will need to fund new educational initiatives, and agencies like the FDA must streamline the approval process for GenAI-assisted clinical applications.

This month, the Trump administration encouraged U.S. companies and nonprofits to develop AI training programs for schools, educators, and students. Leading tech companies — including Nvidia, Amazon, and Microsoft — quickly signed on.

If medical school deans demonstrate similar openness to innovation, we can expect policymakers and industry leaders to invest in medical education, too.

But if medical educators and government leaders hesitate, for-profit companies and private equity firms will fill the void. And they will use GenAI primarily to increase margins and drive revenue, not to improve patient care.

As deans prepare to welcome the class of 2029 (and as lawmakers face the growing costs of American health care), they must ask themselves:

Are we preparing students to practice yesterday’s medicine or to lead tomorrow’s?

Dr. Robert Pearl, the author of “ChatGPT, MD,” teaches at both the Stanford University School of Medicine and the Stanford Graduate School of Business. He is a former CEO of The Permanente Medical Group.
