Medical disinformation is bad for our health and for democracy

Social media CEOs (from right) Mark Zuckerberg (Meta), Linda Yaccarino (X), Shou Chew (TikTok), Evan Spiegel (Snap) and Jason Citron (Discord) are sworn in at the Senate Judiciary Committee hearing on online safety for children.

Tom Williams/CQ-Roll Call, Inc via Getty Images

Mendez is a PhD candidate in population health sciences at the Harvard T.H. Chan School of Public Health and a public voices fellow of The OpEd Project and AcademyHealth.

In a heated Senate Judiciary Committee hearing Jan. 31, a bipartisan group of lawmakers berated the leaders of Meta, TikTok, Snap, X and Discord about the harms that children have suffered on their platforms — threatening to regulate them out of business and accusing them of killing people.

I want this moment to be a precursor to meaningful policy change. But I’m pessimistic; we’ve been here before. Past congressional hearings on social media have covered a lot of ground, including election interference, extremism and disinformation, national security, and privacy violations. Though the energy behind this latest hearing is encouraging, the track record of inaction from our elected officials is disheartening. We’re doomed to repeat the same harms — only now the harms are supercharged as we enter a new era of artificially generated media.


One bill gaining attention in the wake of this hearing is the Kids Online Safety Act, which would require social media platforms to provide minors the chance to opt out of personalized recommendation systems and a mechanism to completely delete their personal data. Congress must aim higher than simply shielding people from these practices until they’re 18. Our elected officials must be willing to follow through on the bold assertions they’ve raised on the national stage. Are they actually willing to regulate Meta and X out of business? Are they actually willing to act like people’s lives are on the line?

If that sounds extreme, I invite you to reflect on the past few years. In 2020, hydroxychloroquine was unscientifically promoted as a Covid-19 treatment on social media, contributing to hundreds of deaths in May and June 2020 alone. Between May 2021 and September 2022, 232,000 lives could have been saved in the United States through uptake of Covid-19 vaccines, but too many people succumbed to the spread of false information on social media. In August 2022, Boston Children’s Hospital faced a wave of harassment and bomb threats following a social media smear campaign. Surely protecting children from the harms of social media includes addressing the harms of medical disinformation that leads to death and violence.

As a public health researcher, I’m attuned to prominent medical disinformation. But the harms of its spread go beyond physical health, threatening the wellbeing of our democracy. Anti-science is now a viable political platform that distracts from the needs of politically marginalized groups. Debunked Covid-19 conspiracy theories took center stage in a House of Representatives hearing last summer that sought to cast doubt on leading virologists’ research practices. Rehashing these conspiracy theories does nothing to address the long-term impacts of the Covid-19 pandemic, including the economic costs of long Covid and the higher Covid-19 death rates in rural and BIPOC communities. The mainstreaming of anti-vaccine movements in U.S. politics threatens to exacerbate current disparities in other viral illnesses, such as increased flu hospitalizations in high-poverty census tracts.

While medical disinformation fuels political distractions, it also overlaps with voter suppression. This means that the communities experiencing the downstream negative impacts also have less of a voice in holding elected officials accountable. Many rural voters rely on early voting, mail-in ballots and same-day registration, which have all come under attack in recent years. Stricter voter ID laws disproportionately impact communities of color. This is on top of a baseline relationship between poor health and low voter turnout.

As such, it perhaps shouldn’t come as a surprise that this latest social media hearing does not promise a shift in the current balance of social media profit over care. The potential voters most affected by these issues already have less of a voice in electoral politics. These interconnected issues thus seem likely to balloon over the coming years, as artificial intelligence tools promise to flood our social networks with an even more unfathomable scale of content hypercharged for algorithmic discoverability. We are entering an era of robots talking to robots, with us humans experiencing the collateral damage for the sake of ad sales.

The recent rise of a ChatGPT app ecosystem carries troubling echoes of the central problems of social media companies. One ChatGPT plugin offers local health risk updates for respiratory illnesses in the United States. Another helps users search for clinical trials, while yet another offers help understanding their eligibility criteria. Still others offer more general medical information or more personalized nutrition insights. Never mind that we don’t know the sources of data driving their responses, or why they might include some pieces of information over others. Or that we have no idea how the information they give us might be tailored based on our chat history and language choices. It’s not enough that the ChatGPT prompt window warns, “ChatGPT can make mistakes. Consider checking important information.”

But tech leaders want to have their cake and eat it too, and our elected officials seem fine with this status quo. Social media and artificial intelligence are framed as transformative tools that can improve our lives and bring people together through sharing information. And yet tech companies have no responsibility for the information people encounter on them, as if all the human decisions that go into platform design, data science and content moderation don’t matter. It’s not enough that social media companies occasionally put disclaimers on content.

Tech companies are changing the world, yet we’re supposed to believe that they are powerless to intervene in it. We are supposed to believe that we, as individuals, have the ultimate responsibility for the harms of billion-dollar companies.

It’s only a matter of time before we see a new flood of influencers, human and artificial, pushing out content at an even faster rate with the help of AI-generated scripts and visuals. A narrow focus on shielding children from these products won’t be enough to protect them from the harms of extreme content and disinformation. It won’t be enough to protect the adults in their lives from the intersecting issues of medical disinformation, political disinformation and voter suppression.

As multiple congressional hearings have reminded us, the underlying design and profit motives of social media companies are already costing lives and getting in the way of civil discourse. They are already leading to bullying, extremism and mass disinformation. They are already disrupting elections. We need and deserve a sweeping policy change around social media and AI, with an intensity and breadth that match the emotional intensity of this latest hearing. We deserve more than the theater of soundbites and public scolding.
