GenAI will save lives—if properly applied

Opinion


In medicine, rare moments arise when technological breakthroughs and shifts in leadership create an opportunity for sweeping change. The United States now stands at that crossroads.

A major advance in artificial intelligence, combined with a shake-up at the highest levels of federal healthcare leadership, has the potential to save hundreds of thousands of lives, make medical care affordable and ease the burnout crisis among doctors and nurses.


But there’s a risk this potential will go unrealized. The newly appointed Secretary of Health and Human Services (HHS), Robert F. Kennedy Jr., and Dr. Marty Makary, the incoming head of the Food and Drug Administration (FDA), must move swiftly to capitalize on what U.S. Vice President J.D. Vance recently called “one of the most promising technologies we have seen in generations.”

A breakthrough in AI development

For the first time, generative AI isn’t solely the domain of billion-dollar companies. Instead, entrepreneurial and midsize companies can build tools for patients without having to raise massive amounts of capital.

A new generative AI model, DeepSeek-V3, recently emerged from China, and unlike models built by OpenAI, Google or Anthropic, it wasn’t developed with billions of dollars in funding. Reportedly built for less than $6 million, DeepSeek relied on a technique called “knowledge distillation,” which lets a new GenAI model learn from existing models faster, more cheaply and more efficiently.
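To make “knowledge distillation” concrete: a smaller “student” model is trained to imitate the softened output probabilities of a larger “teacher” model, rather than learning everything from raw data. The following is a minimal, self-contained sketch of the core distillation loss, not DeepSeek’s actual training code; the names, the example logits and the temperature value are purely illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw model logits into probabilities, softened by a temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened output distribution and
    the student's: the training signal that lets a small model imitate a
    large one instead of learning from scratch."""
    p = softmax(teacher_logits, temperature)  # teacher "soft labels"
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that matches the teacher incurs zero loss; a mismatched
# student incurs a positive loss that training then drives down.
teacher = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, teacher))          # 0.0
print(distillation_loss(teacher, [0.0, 0.0, 0.0]))  # positive
```

The higher temperature spreads the teacher’s probability mass across more answers, which is what allows the student to learn the teacher’s relative preferences cheaply.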

While DeepSeek is free for Americans to use, there are serious concerns about its data retention and privacy practices, since user data is stored on servers in China. However, the biggest advance won’t come from the use of DeepSeek in America but from the rapid advancement of American open-source AI models. Any company, researcher or startup will soon be able to access and refine such models to build tools for patients. If RFK Jr. and Makary act quickly, they can unlock AI’s full potential before red tape strangles it.

But where will innovative companies find the biggest opportunity to save lives and make medical care affordable?

America is currently mired in an urgent and worsening crisis of chronic disease, which affects 60% of Americans and drives 70% of healthcare costs. Right now, most chronic diseases are poorly managed. Hypertension, diabetes, and heart failure remain uncontrolled in at least 40% of U.S. cases, leading to millions of avoidable strokes, heart attacks, kidney failures and cancers each year. According to CDC estimates, effective control of these conditions would prevent 30–50% of these life-threatening events.

The future is here and now

Rather than spending hundreds of millions to build large language models from scratch, healthcare startups will be able to create their own generative AI tools at a fraction of the cost. But unlike today’s broad AI applications, which answer general medical questions, this next generation of generative AI will be different. It will be hyper-specialized, trained on massive amounts of existing (but largely unused) patient data to monitor and manage these chronic diseases.

Right now, 97% of hospital bedside monitor data is discarded, never analyzed to improve patient care. Similarly, today’s GenAI models have never been trained on millions of hours of recorded medical call center conversations and chronic disease management check-ins that provide medical advice and offer personalized care recommendations.

Here’s how it could work: For a newly diagnosed patient with diabetes or hypertension, GenAI-enabled wearable monitors would continuously track blood sugar or blood pressure, analyzing fluctuations in real time. Instead of waiting four months for a routine follow-up visit, patients would be served by a system that identifies poor chronic disease control months earlier, provides timely medical advice and flags issues for clinicians when medication adjustments are needed, all at an estimated cost of less than $9 per hour.
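The monitoring loop described above reduces, at its simplest, to checking a stream of readings against clinical thresholds and escalating to a clinician only when control is slipping. Here is a rough sketch of that triage logic; the threshold, the window size and the function name are illustrative assumptions, not clinical guidance or any vendor’s actual product.

```python
# Illustrative values only; real cutoffs vary by patient and clinical guideline.
HIGH_GLUCOSE_MG_DL = 180   # reading above this suggests poor control
SUSTAINED_READINGS = 3     # require several high readings, not a single spike

def review_glucose(readings_mg_dl):
    """Flag a patient for clinician follow-up when recent continuous-monitor
    readings show sustained poor control, instead of waiting for the next
    scheduled office visit."""
    recent = readings_mg_dl[-SUSTAINED_READINGS:]
    if len(recent) == SUSTAINED_READINGS and all(
        r > HIGH_GLUCOSE_MG_DL for r in recent
    ):
        return "flag: sustained high glucose, review medication with clinician"
    return "ok: continue monitoring"

print(review_glucose([120, 135, 190, 205, 210]))  # flagged for follow-up
print(review_glucose([120, 135, 150]))            # no action needed
```

In a real system, a generative model would sit on top of logic like this, explaining the trend to the patient in plain language and drafting a summary for the care team, but the escalation decision itself stays simple and auditable.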

For heart failure patients, GenAI-driven monitoring tools would assess daily clinical status, detecting subtle signs of deterioration before a full-blown crisis occurs. Instead of being rushed to the hospital two days later when they can’t breathe, patients and their doctors would receive early alerts, allowing for immediate intervention and eliminating the need for hospitalization.

These disease-specific GenAI agents won’t replace doctors. They’ll fill the gaps between office visits, identify patients at risk, and provide continuous and data-driven care, lowering costs and decreasing daily demands on clinicians.

The FDA must modernize its approach to AI in medicine

Despite AI’s potential to save lives and lower healthcare costs, outdated FDA regulations threaten to stall these innovations before they can reach patients.

The agency has long treated AI like a traditional drug or medical device, demanding details on data sources and expecting years-long clinical trials. This isn’t how GenAI operates. Unlike pharmaceuticals, whose chemical structure stays fixed, GenAI systems continuously learn and improve as they are updated and retrained.

RFK Jr. and Makary have a rare opportunity to fix the burdensome regulatory process and lower the barriers to implementation. While Kennedy’s stance on vaccines has drawn criticism, his stated commitment to public health and tackling chronic disease aligns with what GenAI can achieve. Meanwhile, Makary has built a reputation for patient safety and challenging outdated medical policies. He is likely to recognize the value GenAI provides for patients.

A new AI approval framework

Rather than forcing GenAI-driven disease management programs to fit into an antiquated approval model, HHS and the FDA should encourage the development of these programs and fast-track implementation by:

  • Prioritizing GenAI applications that focus on diabetes, hypertension, heart failure and similarly high-impact chronic diseases.
  • Comparing GenAI-driven programs to existing clinician-led models rather than to a hypothetical perfect standard. When GenAI-powered disease management tools outperform clinicians by at least 10% in advice quality, successful disease control and patient satisfaction, they should be granted FDA approval.

As U.S. life expectancy remains stagnant and over half the population struggles to afford medical care, the window for action is narrowing. RFK Jr. and Makary must act now to modernize the approach HHS and the FDA take to GenAI. If not, bureaucratic inertia will lock American medicine in the past.


Dr. Robert Pearl is a Stanford University professor, Forbes contributor, bestselling author, and former CEO of The Permanente Medical Group.

