Why ChatGPT’s ‘memory’ will be a health care gamechanger

Woman looking at her smartwatch while exercising

Generative AI will have a profound impact on U.S. medicine, writes Pearl. For example, home wearable devices will be able to update patients about their health status and suggest medication adjustments or lifestyle changes.

Guido Mieth/Getty Images

Pearl is a clinical professor of plastic surgery at the Stanford University School of Medicine and is on the faculty of the Stanford Graduate School of Business. He is a former CEO of The Permanente Medical Group.

OpenAI generated massive media interest with the announcement that its signature product, ChatGPT, is gaining memory. The new feature enables the generative artificial intelligence system to “carry what it learns between chats, allowing it to provide more relevant responses,” according to the company.

As Congress holds hearings and regulators rumble with apprehension, the media coverage so far has generally overlooked the biggest part of this announcement, which has direct ties to American health care:

The development of memory-powered AI is a pivotal step toward transforming U.S. medicine.


Although there are many technological and regulatory hurdles to clear — and fears around privacy and security to mitigate — this development has the potential to make health care more personalized, patient-centric and affordable. These improvements — alongside the potential pitfalls of AI-empowered health care — are the subject of my upcoming book, “ChatGPT, MD: How AI-Empowered Patients & Doctors Can Take Back Control of American Medicine.”

Here are three ways generative AI’s improved memory will transform patient care:

More accurate diagnoses

For over a decade, clinicians have wanted to precisely tailor care to each patient’s unique health profile, including their genetic makeup and personal health preferences. But too much has stood in the way.

One major challenge is the sheer volume of knowledge required to customize medical care. The human genome consists of approximately 3 billion base pairs of DNA, which if typed out as letters would fill about 200 New York City phone books. What’s more, medical knowledge doubles every 73 days, making it almost impossible for any human to keep up with all the innovative medical findings and updated guidelines for helping patients.
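The phone-book comparison can be sanity-checked with simple arithmetic. The per-book character count below is an assumed figure for illustration, not a cited fact:

```python
# Rough scale check: 3 billion DNA base pairs written out as one letter each.
# The ~15 million characters per phone book is an assumption for illustration.
base_pairs = 3_000_000_000          # letters, one per base pair
chars_per_phone_book = 15_000_000   # assumed capacity of one thick phone book

books_needed = base_pairs / chars_per_phone_book
print(f"{books_needed:.0f} phone books")  # → 200 phone books
```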

A third hurdle is technological. With the average patient consulting 19 different doctors throughout their lifetime, an individual’s electronic medical records are often dispersed across numerous medical offices and health systems. The lack of interoperability among EMR systems compounds this issue, preventing clinicians — and, by extension, generative AI — from accessing a patient’s complete medical history.

Currently, ChatGPT’s “context window” (the amount of text it can draw on before older information falls out of memory) falls well short of the nearly 17,000 words found in the average patient’s medical record.
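The gap can be made concrete with a back-of-the-envelope comparison. The word count comes from the article; the tokens-per-word ratio and the window size below are illustrative assumptions, since actual limits vary by model and version:

```python
# Rough check of whether an average medical record fits in a model's context
# window. The tokens-per-word ratio and window size are assumptions for
# illustration; real limits differ by model and change over time.
record_words = 17_000
tokens_per_word = 1.33          # common rough rule of thumb for English text
context_window_tokens = 8_192   # illustrative window size, not a quoted spec

record_tokens = record_words * tokens_per_word
fits = record_tokens <= context_window_tokens
print(f"~{record_tokens:,.0f} tokens needed; fits in window: {fits}")
```

Under these assumptions the record needs roughly 22,600 tokens, well beyond the illustrative window, which is why the projected capacity growth matters.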

However, generative AI systems are predicted to become 30 times more powerful within the next five years, dramatically expanding their data retention capabilities and enhancing their reliability. This, combined with OpenAI’s specialized plug-ins (known as GPTs), offers promising opportunities. Initially, generative AI might access a limited set of patient data through platforms like My Chart, which can be used on personal computers or smartphones. Eventually, however, generative AI will enable patients to consolidate their digital medical records from various health care providers.

This will create a comprehensive, personalized health record, serving as a reliable resource for both patients and their health care teams.

With this information stored in an AI’s memory, patients will be able to input their symptoms and receive specific diagnostic and treatment suggestions.

For people who are uncertain about the significance or urgency of new symptoms, the AI will provide reliable advice. And for patients with rare or complex conditions, it will offer invaluable second opinions. Advanced diagnostic ability, alongside comprehensive health care information, will be instrumental in reducing the 400,000 annual deaths attributed to misdiagnoses.

Fewer complications from chronic disease

Chronic diseases like diabetes, hypertension, obesity and asthma affect six in 10 U.S. adults. Complications from these diseases account for 1.7 million deaths each year.

Unlike acute illnesses that appear suddenly and usually are resolved quickly, chronic conditions persist over time, impacting tens of millions of Americans every single day.

Doctors care for these conditions in an episodic fashion, which is far from optimal. Patients with chronic diseases typically see their physician every three to four months, providing doctors with only a snapshot of their health status. As a result, chronic diseases aren’t controlled as well as they should be, which leads to life-threatening, and preventable, complications.

At a national level, hypertension is adequately controlled just 60 percent of the time, and effective blood sugar management in type 2 diabetes is achieved less than half the time. Data from the Centers for Disease Control and Prevention indicate that proper disease prevention and management approaches would reduce the risk of kidney failure, heart attacks and strokes by 30 percent to 50 percent.

Applying these percentages to the U.S. death toll from chronic disease complications, these CDC estimates indicate that more than half a million lives could be saved annually.
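The arithmetic behind that estimate is straightforward, combining the 1.7 million annual deaths cited above with the CDC’s 30 to 50 percent risk-reduction range:

```python
# Applying the CDC's 30-50 percent risk-reduction range to the 1.7 million
# annual U.S. deaths from chronic disease complications cited in the article.
annual_deaths = 1_700_000
reduction_low, reduction_high = 0.30, 0.50

lives_saved_low = annual_deaths * reduction_low    # 510,000
lives_saved_high = annual_deaths * reduction_high  # 850,000
print(f"{lives_saved_low:,.0f} to {lives_saved_high:,.0f} lives per year")
```

Even the low end of the range exceeds half a million lives annually, matching the claim above.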

Generative AI, once connected to home wearable devices, can update patients about their health status and suggest medication adjustments or lifestyle changes. It can also remind them about necessary screenings and even facilitate testing appointments and transportation, thereby improving disease management, reducing complications and maximizing health outcomes.
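A minimal sketch of the kind of rule such a system might apply to a daily wearable reading is shown below. The thresholds and message wording are hypothetical examples, not clinical guidance, and a real system would combine many signals over time rather than a single reading:

```python
# A minimal, hypothetical sketch of a wearable-connected status check.
# Thresholds and messages are illustrative only, not medical advice.
def check_blood_pressure(systolic: int, diastolic: int) -> str:
    """Return an illustrative status message for one blood pressure reading."""
    if systolic >= 180 or diastolic >= 120:
        return "Severely elevated reading; seek care promptly."
    if systolic >= 130 or diastolic >= 80:
        return "Elevated reading; consider contacting your care team."
    return "Reading in the typical range."

print(check_blood_pressure(142, 88))
```

The design point is that the system remembers readings between encounters, so a patient is flagged continuously rather than only at a quarterly office visit.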

Safer hospitals

Generative AI with memory will radically improve inpatient care, as well. Once it’s integrated with bedside monitors and able to remember a patient’s clinical status over time, the AI system will be able to immediately alert professionals when a problem arises, so they can intervene.

Additionally, video monitoring systems powered by AI could oversee the delivery of medical care, pinpointing any departures from established best practices. This real-time oversight would provide immediate alerts to caregivers, preventing medication mishaps and reducing the risk of infection.

These two uses of AI technology would help reduce the staggering 250,000 deaths each year attributed to preventable medical errors.

While ChatGPT and similar technologies hold immense potential, today’s generative AI tools still require clinician supervision. But looking ahead, the exponential growth of generative AI’s capabilities (doubling every year) points to a transformative future for the practice of medicine.

Now is the time for both clinicians and patients to become comfortable using generative AI. And it is an opportunity for regulators and elected officials to advance, not stifle, its potential. With memory and GPTs, the doctor’s AI toolkit is quickly filling up.

