Opinion

Amid Trump’s War on LGBTQ+ Teens, Social Media Platforms Must Step Up
Photo by Alex Jackman on Unsplash

With Trump’s war on inclusion, life has suddenly become even more dangerous for LGBTQ+ youth. The CDC has removed health information for LGBTQ+ people from its website, including information about creating safe, supportive spaces. Meanwhile, Trump’s executive order, couched in hateful and inaccurate language, has stopped gender-affirming care.

Sadly, Meta’s decision in January to end fact-checking threatens to make social media even less safe for vulnerable teens. To stop the spread of misinformation, Meta and other social media platforms must commit to protecting young users.

Just a few months ago, Meta appeared to be taking a step in the right direction, launching its Teen Accounts with promises of safer online spaces. But the company’s recent decision to end fact-checking on its platforms threatens to undo all that progress—especially for teens who are already vulnerable. Among the most at risk are LGBTQ+ young people, whose safety and well-being are further endangered when harmful misinformation goes unchecked.

Adolescence is a time of self-discovery, and for many young people that means exploring questions about their sexual identity. Imagine a teen scrolling through their social media feed, curious to learn more about relationships and sexual identity, searching for answers in a place they perceive as safer than their home or school. That space is anything but safe when untrue statements like “LGBTQ+ is a mental illness” spread unchecked.

These scientifically debunked statements aren’t just factual errors that other users can easily correct; they are direct assaults on teens’ sense of self, mental health, and well-being. Studies show that victimization, including anti-LGBTQ+ harassment, strongly predicts self-harm and suicidal thoughts and behaviors among LGBTQ+ young people. Young people may internalize these harmful ideas, leading to confusion, shame, anxiety, depression, or suicidal ideation. This false narrative not only stigmatizes LGBTQ+ young people and harms their mental health but also creates an environment where they may feel compelled to hide their identities or seek harmful treatments unsupported by evidence. Adults, including those who run tech companies, are responsible for creating safe and positive online experiences for young people.

We already have experts working on this issue. For example, the American Academy of Pediatrics, our country’s leading group of children’s doctors, studies healthy social media use through its Center of Excellence on Social Media and Youth Mental Health. Its co-directors, Dr. Megan Moreno and Dr. Jenny Radesky, specifically recommend platform policies that prevent the spread of untrustworthy and hateful content, along with greater user control over safety settings, which are often buried.

At first, Meta seemed to be listening, instituting Teen Accounts with built-in features such as a sleep mode and limits on sensitive content. Even better, the company planned to improve these features and include young people in the process. However, removing fact-checking from its platforms undermines these efforts, increasing teens’ exposure to inaccurate, misleading, and harmful information. This contradiction sends a troubling message: while Meta claims to prioritize the safety and well-being of young users, it simultaneously dismantles one of the key mechanisms ensuring information integrity.

To be sure, Mark Zuckerberg framed his decision as a defense of “free expression” and a move away from “too much censorship.” On the surface, this sounds like something teens would wholeheartedly embrace. But in fact, the elimination of fact-checking and the dismantling of safeguards for young users directly contradict what teens themselves deserve and desire. Young people, among the most active users of social media, consistently express a desire for safer online spaces. According to the Pew Research Center, the majority of teens prioritize feeling safe over being able to speak their minds freely; they also want enhanced safety features and content moderation. Both freedom of expression and enhanced safety features are crucial, but ensuring a safe and supportive online environment is essential to protecting teens’ well-being while fostering open dialogue.

When even teens call for more safeguards, adults, including those who run social media companies, have a moral obligation to respond. If Zuckerberg insists on scrapping fact-checking in favor of “Community Notes,” we must ensure that Community Notes strategies are evidence-based, expert-informed, youth-centered, and community-driven. According to research, social media companies must prioritize three approaches to ensure young people’s safety online:

Partnering with LGBTQ+ and other advocacy groups to ensure that shared information is truthful, accurate, and rooted in the lived experiences of marginalized communities. For example, GLAAD recently released a report detailing harmful content on Meta’s platforms, including violent language toward LGBTQ+ individuals and severe anti-trans slurs, among many others. The report prompted GLAAD to pen a letter with specific calls to action for addressing misinformation. The recommendations are there. Work with them.

Investing in youth-centered approaches. For example, researchers at the MIT Media Lab launched Scratch, an online community that teaches children coding and computer science, in 2007. They implemented a governance strategy to moderate content both proactively and reactively. Through youth-centered Community Guidelines and adult moderators, they address hate speech and remove it immediately. Appropriately trained moderators serve as essential gatekeepers, ensuring that platforms remain spaces for healthy dialogue rather than havens for toxicity.

Linking young people to evidence-based, culturally informed mental health resources at every opportunity. Young people are eager for online support, such as online therapy, apps, and social media, to manage their mental health, and they deserve access to accurate, safe, and affirming information, free from misinformation, exploitation, and harmful bias. Ensuring that LGBTQ+ young people have access to mental health resources, especially for early intervention, is critical.

Zuckerberg framed the end of fact-checking as protecting free speech. Instead, he’s protecting hate speech and misinformation at the cost of young people’s well-being, the very thing Teen Accounts were meant to safeguard. If Zuckerberg is sincere about improving Meta’s products for young people, then Teen Accounts must be accountable to the truth.

Claudia-Santi F. Fernandes, Ed.D., is an assistant clinical professor at the Yale Child Study Center. She is a public voices fellow of The OpEd Project.


