Opinion

Medical Schools Are Falling Behind in the Age of Generative AI

"To prepare tomorrow’s doctors, medical school deans, elected officials, and health care regulators must invest in training that matches the pace and promise of this technology," writes Dr. Robert Pearl.

Getty Images, ArtistGNDphotography

While colleges across the nation are adapting their curricula to harness the power of generative AI, U.S. medical schools remain dangerously behind.

Most students entering medicine today will graduate without ever being trained to use GenAI tools effectively. That must change. To prepare tomorrow’s doctors – and protect tomorrow’s patients – medical school deans, elected officials, and health care regulators must invest in training that matches the pace and promise of this technology.


Universities embrace AI as medical schools fall behind

Across the country, colleges and universities are reimagining how they educate students in the age of generative AI.

  • At Duke University, every new student receives a custom AI assistant dubbed DukeGPT.
  • At California State University, more than 460,000 students across 23 campuses now have access to a 24/7 ChatGPT toolkit.

These aren’t niche experiments. They’re part of a sweeping, systems-level transformation aimed at preparing graduates for a rapidly evolving workforce.

Most medical schools, however, have not kept pace. Instead of training students to apply modern tools toward clinical care, they continue to emphasize memorization — testing students on biochemical pathways and obscure facts rarely used in practice.

Early fears about plagiarism and declining academic rigor led many university departments to proceed cautiously after ChatGPT’s release in 2022. But since then, an increasing number of these educational institutions have shifted from policing AI to requiring faculty to incorporate GenAI into their coursework. And the American Federation of Teachers announced earlier this month that it would start an AI training hub for educators with $23 million from tech giants Microsoft, OpenAI, and Anthropic.

Medical education remains an outlier. A recent Educause study found that just 14% of medical schools have developed a formal GenAI curriculum, compared to 60% of undergraduate programs. Most medical school leaders and doctors still regard large language models as administrative aids rather than essential clinical tools.

This view is short-sighted. Within a few years, physicians will rely on generative AI to synthesize vast amounts of medical research, identify diagnostic patterns, and recommend treatment options tailored to the latest evidence. Patients will arrive at appointments already equipped with GenAI-assisted insights.

Used responsibly, generative AI can help prevent the 400,000 deaths each year from diagnostic errors, 250,000 deaths from preventable medical mistakes, and 500,000 deaths from poorly controlled chronic diseases. Elected officials and regulators need to support this life-saving approach.

How medical schools can catch up

In the past, medical students were evaluated on their ability to recall information. In the future, they will be judged by their ability to help AI-empowered patients manage chronic illnesses, prevent life-threatening disease complications, and maximize their health.

If generative AI capabilities continue to double roughly every year, students matriculating today will enter clinical practice (about five years from now) with tools more than 30 times as powerful as today's models, since five doublings compound to a 32-fold gain. Yet few doctors will have received structured training on how to use them effectively.

Modernizing medical education starts with faculty training. Students entering medical school in 2025 will arrive already comfortable using generative AI tools like ChatGPT. Most instructors, however, will need to build that fluency.

To close this gap, academic leaders should provide faculty training programs before the start of the next academic year. These sessions would introduce educators to prompt engineering, output evaluation, and reliability assessment: the foundational skills for teaching and applying GenAI in clinical scenarios.

Once faculty are prepared, schools can begin building case-based curricula that reflect modern clinical realities.

Sample Exercise: Managing chronic disease with GenAI support

In this scenario, students imagine seeing a 45-year-old man during a routine checkup. The patient has no prior medical problems, but on physical exam, his blood pressure reads 140/100 mm Hg.

First, students walk through the traditional diagnostic process:

  • What additional history would they obtain?
  • Which physical findings warrant follow-up?
  • What laboratory tests would they order?
  • What treatment and follow-up plan would they recommend?

Next, they enter the same case into a generative AI tool and compare its output to their own. Where do they align? Where do they differ (and, importantly, why)?
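A case like this could be framed as a structured prompt so that the model is asked to walk through the same four steps the students just completed by hand, making the side-by-side comparison straightforward. The sketch below is illustrative: the function name, field names, and wording are hypothetical, not part of any specific curriculum or vendor API, and sending the prompt to an actual GenAI tool would require a separate API client.

```python
# Hypothetical sketch: assemble the sample case as a prompt that mirrors the
# four questions students answer themselves, so model output can be compared
# point by point. All names and phrasing here are illustrative assumptions.

def build_case_prompt(age, sex, visit_type, findings):
    """Return a clinical-case prompt covering history, exam follow-up,
    labs, and treatment plan for a given patient presentation."""
    case = (
        f"A {age}-year-old {sex} presents for a {visit_type}. "
        f"Notable findings: {findings}."
    )
    questions = [
        "What additional history would you obtain?",
        "Which physical findings warrant follow-up?",
        "What laboratory tests would you order?",
        "What treatment and follow-up plan would you recommend?",
    ]
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
    return f"{case}\n\nAnswer the following:\n{numbered}"

prompt = build_case_prompt(
    age=45,
    sex="man",
    visit_type="routine checkup",
    findings="no prior medical problems; blood pressure 140/100 mm Hg",
)
print(prompt)
```

Because the model answers the identical four questions, students can line up each of its responses against their own and probe any disagreement.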

Finally, students design a care plan that incorporates GenAI’s growing capabilities, such as:

  • Analyzing data from at-home blood pressure monitors.
  • Customizing educational guidance.
  • Enabling patients to actively manage their chronic diseases between visits.
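The first capability above, analyzing at-home monitor data, can be sketched concretely. The snippet below is a minimal illustration of the kind of between-visit analysis such a care plan might automate: averaging a week of home readings and flagging when the mean stays above a clinician-chosen threshold. The function name, data layout, and threshold values are placeholder assumptions, not clinical guidance.

```python
# Illustrative sketch (not clinical guidance): summarize a patient's at-home
# blood pressure readings and flag sustained elevation against thresholds
# that a clinician would choose. Values here are placeholders.

def summarize_home_bp(readings, sys_limit=135, dia_limit=85):
    """readings: list of (systolic, diastolic) pairs in mm Hg.
    Returns the mean reading and whether it meets or exceeds the limits."""
    n = len(readings)
    mean_sys = sum(s for s, _ in readings) / n
    mean_dia = sum(d for _, d in readings) / n
    flagged = mean_sys >= sys_limit or mean_dia >= dia_limit
    return {"mean": (round(mean_sys, 1), round(mean_dia, 1)), "flag": flagged}

# One week of hypothetical home readings for the sample patient.
week = [(142, 96), (138, 92), (144, 98), (136, 90), (140, 94)]
report = summarize_home_bp(week)
print(report)  # {'mean': (140.0, 94.0), 'flag': True}
```

In practice a GenAI-assisted tool would layer natural-language explanation and patient education on top of this kind of numeric summary, but the underlying logic, aggregating home data and surfacing sustained elevation to the care team, is what lets patients manage chronic disease between visits.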

This type of training – integrated alongside the traditional curriculum – prepares future clinicians not just to master the technology but to understand how it can transform medical care.

A call to government: Empower the next generation of physicians

Medical schools can’t do this alone. Because most physician training is funded through federal grants and Medicare-supported residency programs, meaningful reform will require coordinated leadership from academic institutions, government agencies, and lawmakers.

Preparing future doctors to use GenAI safely and effectively should be treated as a national imperative. Medicare will need to fund new educational initiatives, and agencies like the FDA must streamline the approval process for GenAI-assisted clinical applications.

This month, the Trump administration encouraged U.S. companies and nonprofits to develop AI training programs for schools, educators, and students. Leading tech companies — including Nvidia, Amazon, and Microsoft — quickly signed on.

If medical school deans demonstrate similar openness to innovation, we can expect policymakers and industry leaders to invest in medical education, too.

But if medical educators and government leaders hesitate, for-profit companies and private equity firms will fill the void. And they will use GenAI not to improve patient care but primarily to increase margins and drive revenue.

As deans prepare to welcome the class of 2029 (and as lawmakers face the growing costs of American health care), they must ask themselves:

Are we preparing students to practice yesterday’s medicine or to lead tomorrow’s?

Dr. Robert Pearl, the author of “ChatGPT, MD,” teaches at both the Stanford University School of Medicine and the Stanford Graduate School of Business. He is a former CEO of The Permanente Medical Group.
