Elon Musk’s bid to protect free speech threatens another First Amendment right

Opinion


Kohli is the advocacy associate at Interfaith Alliance, a national organization dedicated to protecting the integrity of both religion and democracy in the United States.

For better or for worse, I grew up with social media. I remember jumping on Google+ after school to continue conversations with friends. I used Facebook groups to organize class gifts for favorite teachers. Snapchat was for sending funny pictures and maintaining streaks as a way to quantify friendships. Of course, social media platforms are much more than places to make silly posts for your friends. They are part of a complicated information ecosystem in which some corners thrive as healthy forums while others spread lies and misinformation, allowing hate and harassment to flourish online and off.

I also grew up with Indian American immigrant parents. My grandparents tell me the beauty of this country is that people of so many backgrounds and cultures can come here and coexist. But social media, for all its strengths and potential, gives an outsized voice to people who want to spread hate and sow division. When I scroll through the depths of Twitter, or accidentally click on a YouTube video that sends my recommendations down a rabbit hole of extremism, it is clear to me that each of us is constantly in danger of being pushed into echo chambers of hate.

Elon Musk’s plans for Twitter threaten the very coexistence my grandparents celebrated. His fumbling leadership of the influential social media platform puts the democratic promise of our country at risk. If hate is allowed to run rampant and millions of users feel unsafe, we are failing to live up to that promise.


Musk has positioned himself as a champion of the First Amendment right to freedom of speech. This framing could not be further from the truth. Instead, he has allowed extremists and people with hateful ideologies to expand their reach on Twitter. Musk fired staff in charge of dealing with hateful content on the platform, leaving the company too short-staffed to handle the increase in harmful posts. In the 12-hour period after Musk’s ownership of Twitter was finalized, the use of derogatory language toward Black people increased almost 500 percent.

While concerns about Musk’s damaging impact on free speech have been well documented, the risks to another fundamental right have been overlooked: freedom of religion, which protects the right of people of all faiths, and none, to practice what they believe. Social media is so intertwined with our lives offline that threats to religious freedom are no longer confined to the physical world. Every day that hate is allowed to run rampant and target communities online, the freedom to believe as we choose erodes.

In recent years, harmful content on social media has manifested in physical acts of violence targeting vulnerable communities. A 2021 report from the Anti-Defamation League exposed the harmful effects of online hate on different communities, from an increase in violence against Asian Americans, to antisemitic harassment directed at Jewish members of Congress, to the quadrupling of hateful Facebook posts against African Americans after the murder of George Floyd.

There are too many examples of real-world violence committed by young social media users who encountered increasingly extremist content online. The perpetrator of the devastating attack at a supermarket in a predominantly Black neighborhood in Buffalo, N.Y., streamed the massacre on Twitch. The shooter had written a manifesto, posted to Google Docs and filled with white supremacist ideology, stating that he was radicalized on 4chan in 2020. The Twitch livestream was taken down within two minutes, but the video remained on Facebook for over 10 hours, allowing 46,000 people to share it. His actions, and the failure of platforms to identify and take down content like this immediately, created further extremist material for other users to view.

For better or for worse, social media is the most accessible way for people to connect online. Our government has an obligation to protect people of all backgrounds and identities. As backlash against content moderation comes to a head on Twitter, there’s no telling how other platforms might adjust their policies in the future. The national conversation around what’s happening with Twitter is laser-focused on the whims of a CEO who doesn’t seem to understand what he wants. All the while, real people and communities are being hurt.

While people like Musk play games with the ever-growing universe we’ve created online, our government must devote real time and resources to taming the giant that is the tech industry. Without regulation, and as people of different faiths, backgrounds and identities are harassed on and off social media, this country fails to be a safe haven for the people who need it most. Big Tech and its social media platforms are only getting started. We must ensure that this industry’s progress does not come at the cost of our most sacred freedoms.
