Medical disinformation is bad for our health and for democracy

Opinion

Social media CEOs (from right) Mark Zuckerberg (Meta), Linda Yaccarino (X), Shou Chew (TikTok), Evan Spiegel (Snap) and Jason Citron (Discord) are sworn in before the Senate Judiciary Committee at a hearing on online safety for children.

Tom Williams/CQ-Roll Call, Inc via Getty Images

Mendez is a PhD candidate in population health sciences at the Harvard T.H. Chan School of Public Health and a Public Voices Fellow of The OpEd Project and AcademyHealth.

In a heated Senate Judiciary Committee hearing Jan. 31, a bipartisan group of lawmakers berated the leaders of Meta, TikTok, Snap, X and Discord about the harms that children have suffered on their platforms — threatening to regulate them out of business and accusing them of killing people.

I want this moment to be a precursor to meaningful policy change. But I’m pessimistic; we’ve been here before. Past congressional hearings on social media have covered a lot of ground, including election interference, extremism and disinformation, national security, and privacy violations. Though the energy behind this latest hearing is encouraging, the track record of inaction from our elected officials is disheartening. We’re doomed to repeat the same harms — only now the harms are supercharged as we enter a new era of artificially generated media.


One bill gaining attention in the wake of this hearing is the Kids Online Safety Act, which would require social media platforms to provide minors the chance to opt out of personalized recommendation systems and a mechanism to completely delete their personal data. Congress must aim higher than simply shielding people from these practices until they’re 18. Our elected officials must be willing to follow through on the bold assertions they’ve raised on the national stage. Are they actually willing to regulate Meta and X out of business? Are they actually willing to act like people’s lives are on the line?

If that sounds extreme, I invite you to reflect on the past few years. In 2020, hydroxychloroquine was unscientifically promoted as a Covid-19 treatment on social media, contributing to hundreds of deaths in May and June 2020 alone. Between May 2021 and September 2022, an estimated 232,000 lives could have been saved in the United States through uptake of Covid-19 vaccines, but too many people succumbed instead to the spread of false information on social media. In August 2022, Boston Children’s Hospital faced a wave of harassment and bomb threats following a social media smear campaign. Surely protecting children from the harms of social media includes addressing the harms of medical disinformation that leads to death and violence.

As a public health researcher, I’m attuned to prominent medical disinformation. But the harms of its spread go beyond physical health, threatening the well-being of our democracy. Anti-science is now a viable political platform that distracts from the needs of politically marginalized groups. Debunked Covid-19 conspiracy theories took center stage in a House of Representatives hearing last summer that sought to cast doubt on leading virologists’ research practices. Rehashing these conspiracy theories does nothing to address the long-term impacts of the Covid-19 pandemic, including the economic costs of long Covid and the higher Covid-19 death rates in rural and BIPOC communities. The mainstreaming of anti-vaccine movements in U.S. politics threatens to exacerbate current disparities in other viral illnesses, such as increased flu hospitalizations in high-poverty census tracts.

While medical disinformation fuels political distractions, it also overlaps with voter suppression. This means that the communities experiencing the downstream negative impacts also have less of a voice in holding elected officials accountable. Many rural voters rely on early voting, mail-in ballots and same-day registration, which have all come under attack in recent years. Stricter voter ID laws disproportionately impact communities of color. This is on top of a baseline relationship between poor health and low voter turnout.

As such, maybe it shouldn’t come as a surprise that this latest social media hearing does not promise a shift away from the current prioritization of social media profit over care. Potential voters most impacted by these issues already have less of a voice in electoral politics. Thus, these interconnected issues seem likely to balloon over the coming years, as artificial intelligence tools promise to flood our social networks with an even more unfathomable scale of content hypercharged for algorithmic discoverability. We are entering an era of robots talking to robots, with us humans experiencing the collateral damage for the sake of ad sales.

The recent rise of a ChatGPT app ecosystem carries troubling echoes of the central problems of social media companies. One ChatGPT plugin offers local health risk updates for respiratory illnesses in the United States. Another helps users search for clinical trials, while yet another helps them understand eligibility criteria. Still others offer more general medical information or more personalized nutrition insights. Never mind that we don’t know the sources of data driving their responses, or why they might include some pieces of information over others. Or that we have no idea how the information they give us might be tailored based on our chat history and language choices. It’s not enough that the ChatGPT prompt window warns, “ChatGPT can make mistakes. Consider checking important information.”

But tech leaders want to have their cake and eat it too, and our elected officials seem fine with this status quo. Social media and artificial intelligence are framed as transformative tools that can improve our lives and bring people together through sharing information. And yet tech companies bear no responsibility for the information people encounter on their platforms, as if all the human decisions that go into platform design, data science and content moderation don’t matter. It’s not enough that social media companies occasionally put disclaimers on content.

Tech companies are changing the world, yet we’re supposed to believe that they are powerless to intervene in it. We are supposed to believe that we, as individuals, have the ultimate responsibility for the harms of billion-dollar companies.

It’s only a matter of time before we see a new flood of influencers, human and artificial, pushing out content at an even faster rate with the help of AI-generated scripts and visuals. A narrow focus on shielding children from these products won’t be enough to protect them from the harms of extreme content and disinformation. It won’t be enough to protect the adults in their lives from the intersecting issues of medical disinformation, political disinformation and voter suppression.

As multiple congressional hearings have reminded us, the underlying design and profit motives of social media companies are already costing lives and getting in the way of civil discourse. They are already leading to bullying, extremism and mass disinformation. They are already disrupting elections. We need and deserve sweeping policy change around social media and AI, with a breadth and urgency that match the emotional intensity of this latest hearing. We deserve more than the theater of soundbites and public scolding.
