
Opinion

Amid Trump’s War on LGBTQ+ Teens, Social Media Platforms Must Step Up
Photo by Alex Jackman on Unsplash

With Trump’s war on inclusion, life has suddenly become even more dangerous for LGBTQ+ youth. The CDC has removed health information for LGBTQ+ people from its website—including information about creating safe, supportive spaces. Meanwhile, Trump’s executive order, couched in hateful and inaccurate language, has stopped gender-affirming care.

Sadly, Meta’s decision in January to end fact-checking threatens to make social media even less safe for vulnerable teens. To stop the spread of misinformation, Meta and other social media platforms must commit to protecting young users.


Just a few months ago, Meta appeared to be taking a step in the right direction, launching its Teen Accounts with promises of safer online spaces. But the company’s recent decision to end fact-checking on its platforms threatens to undo all that progress—especially for teens who are already vulnerable. Among the most at risk are LGBTQ+ young people, whose safety and well-being are further endangered when harmful misinformation goes unchecked.

Adolescence is a time of self-discovery, and for many young people, that means exploring questions about their sexual identity. Imagine a teen scrolling through their social media feed, curious to learn more about relationships and sexual identity, searching for answers in a space they perceive as safer than home or school. But that space is anything but safe when untrue statements like “LGBTQ+ is a mental illness” spread unchecked.

These scientifically debunked statements aren’t just factual errors that other users can easily correct; they are direct assaults on teens’ sense of self, mental health, and well-being. Studies show that victimization, including anti-LGBTQ+ harassment, strongly predicts self-harm and suicidal thoughts and behaviors among LGBTQ+ young people. Teens may internalize these harmful ideas, leading to confusion, shame, anxiety, depression, or suicidal ideation. This false narrative not only stigmatizes LGBTQ+ young people and harms their mental health but also creates an environment where they may feel compelled to hide their identities or seek treatments unsupported by evidence. Adults, including those who run tech companies, are responsible for creating safe and positive online experiences for young people.

We already have experts working on this issue, too. For example, the American Academy of Pediatrics—our country’s leading group of children’s doctors—studies healthy social media use through its Center of Excellence on Social Media and Youth Mental Health. Its co-directors, Dr. Megan Moreno and Dr. Jenny Radesky, specifically recommend platform policies that prevent the spread of untrustworthy and hateful content and more user control over settings, which are often buried.

At first, Meta seemed to be listening, instituting Teen Accounts with built-in features such as a sleep mode and limits on sensitive content. Even better, it planned to improve these features and include young people in the process. However, removing fact-checking from its platforms undermines these efforts, increasing teens’ exposure to inaccurate, misleading, or harmful information. This contradiction sends a troubling message: while Meta claims to prioritize the safety and well-being of young users, it simultaneously dismantles one of the key mechanisms ensuring information integrity.

To be sure, Mark Zuckerberg framed his decision as a defense of “free expression” and a move away from “too much censorship.” On the surface, this sounds like something teens would wholeheartedly embrace. In fact, the elimination of fact-checking and the dismantling of safeguards for young users directly contradict what teens themselves deserve and desire. Young people, among the most active users of social media, consistently express a desire for safer online spaces. According to the Pew Research Center, the majority of teens prioritize feeling safe over being able to speak their minds freely; they also want enhanced safety features and content moderation. Freedom of expression matters, but a safe and supportive online environment is essential to protecting teens’ well-being while fostering open dialogue.

When even teens call for more safeguards, adults—including those who run social media companies—have a moral obligation to respond. If Zuckerberg insists on scrapping fact-checking in favor of “Community Notes,” we must ensure that those notes are evidence-based, expert-informed, youth-centered, and community-driven. Research points to three approaches social media companies must prioritize to keep young people safe online:

Partnering with LGBTQ+ and other advocacy groups from marginalized communities to ensure that the information shared is truthful, accurate, and rooted in lived experience. For example, GLAAD recently released a report detailing harmful content on Meta’s platforms, including violent language and severe anti-trans slurs directed at LGBTQ+ individuals. The report prompted GLAAD to pen a letter with specific calls to action on addressing misinformation. The recommendations are there. Work with them.

Investing in youth-centered approaches. As an example, researchers at the MIT Media Lab launched Scratch, an online community that teaches children coding and computer science, in 2007. They implemented a governance strategy to moderate content both proactively and reactively. Through youth-centered Community Guidelines and adult moderators, they address hate speech and remove it immediately. Appropriately trained moderators serve as essential gatekeepers, ensuring that platforms remain spaces for healthy dialogue rather than havens for toxicity.

Linking young people to evidence-based, culturally informed mental health resources at every opportunity. Young people are eager for online support (e.g., online therapy, apps, and social media) to manage their mental health, and they deserve access to accurate, safe, and affirming information—free from misinformation, exploitation, and harmful bias. Ensuring LGBTQ+ young people have access to mental health resources, especially for early intervention, is critical.

Zuckerberg framed the end of fact-checking as protecting free speech. Instead, he’s protecting hate speech and misinformation at the cost of young people’s well-being—the very thing Teen Accounts were meant to safeguard. If Zuckerberg is sincere about improving Meta’s products for young people, then Teen Accounts must be accountable—to the truth.

Claudia-Santi F. Fernandes, Ed.D., is an assistant clinical professor at the Yale Child Study Center. She is a public voices fellow of The OpEd Project.


Read More

Congress Must Lead On AI While It Still Can
Photo by Igor Omilaev on Unsplash

Last month, Matthew and Maria Raine testified before Congress, describing how their 16-year-old son confided suicidal thoughts to AI chatbots, only to be met with validation, encouragement, and even help drafting a suicide note. The Raines are among multiple families who have recently filed lawsuits alleging that AI chatbots were responsible for their children’s suicides. These deaths underscore an argument now playing out in federal courts: artificial intelligence is no longer an abstraction of the future; it is already shaping life and death.

And these teens are not outliers. According to Common Sense Media, a nonprofit dedicated to improving the lives of kids and families, 72 percent of teenagers report using AI companions, often relying on them for emotional support. This dependence is developing far ahead of any emerging national safety standard.


With millions of child abuse images reported annually and AI creating new dangers, advocates are calling for accountability from Big Tech and stronger laws to keep kids safe online.

Getty Images, ljubaphoto

Parents: It’s Time To Get Mad About Online Child Sexual Abuse

Forty-five years ago this month, Mothers Against Drunk Driving had its first national press conference, and a global movement to stop impaired driving was born. MADD was founded by Candace Lightner after her 13-year-old daughter was struck and killed by a drunk driver while walking to a church carnival in 1980. Terms like “designated driver” and the slogan “Friends don’t let friends drive drunk” came out of MADD’s campaigning, and a variety of state and federal laws, like a lowered blood alcohol limit and a raised legal drinking age, were instituted thanks to its advocacy. Over time, social norms evolved, and driving drunk was no longer seen as a “folk crime” but as a serious, conscious choice with serious consequences.

Movements like this one, started by fed-up, grieving parents working with law enforcement and lawmakers, lowered road fatalities nationwide, inspired similar campaigns in other countries, and saved countless lives.

King, Pope, Jedi, Superman: Trump’s Social Media Images Exclusively Target His Base and Try To Blur Political Reality

Two Instagram images put out by the White House.

White House Instagram

A grim-faced President Donald J. Trump looks out at the reader, under the headline “LAW AND ORDER.” Graffiti pictured in the corner of the White House Facebook post reads “Death to ICE.” Beneath that, a photo of protesters, choking on tear gas. And underneath it all, a smaller headline: “President Trump Deploys 2,000 National Guard After ICE Agents Attacked, No Mercy for Lawless Riots and Looters.”

The official communication from the White House appeared on Facebook in June 2025, after Trump sent in troops to quell protests against Immigration and Customs Enforcement agents in Los Angeles. Visually, it is melodramatic, almost campy, resembling a TV promotion.

When the Lights Go Out — and When They Never Do

The massive outage that crippled Amazon Web Services this past October 20th sent shockwaves through the digital world. Overnight, the invisible backbone of our online lives buckled: Websites went dark, apps froze, transactions stalled, and billions of dollars in productivity and trust evaporated. For a few hours, the modern economy’s nervous system failed. And in that silence, something was revealed — how utterly dependent we have become on a single corporate infrastructure to keep our civilization’s pulse steady.

When Amazon sneezes, the world catches a fever. That is not a mark of efficiency or innovation. It is evidence of recklessness. For years, business leaders have mocked antitrust reformers like FTC Chair Lina Khan, dismissing warnings about the dangers of monopoly concentration as outdated paranoia. But the AWS outage was not a cyberattack or an act of God — it was simply the predictable outcome of a world that has traded resilience for convenience, diversity for cost-cutting, and independence for “efficiency.” Executives who proudly tout their “risk management frameworks” now find themselves helpless before a single vendor’s internal failure.
