
Elon Musk’s bid to protect free speech threatens another First Amendment right

Opinion

Elon Musk (STR/NurPhoto via Getty Images)

Kohli is the advocacy associate at Interfaith Alliance, a national organization dedicated to protecting the integrity of both religion and democracy in the United States.

For better or for worse, I grew up with social media. I remember jumping on Google+ after school to continue conversations with friends. I would use Facebook groups to organize class gifts for favorite teachers. Snapchat was for sending funny pictures and maintaining streaks as a way to quantify friendships. Of course, social media platforms are much more than places to make silly posts for your friends. They are part of a complicated information ecosystem in which some parts thrive as healthy forums while others spread lies and misinformation, allowing hate and harassment to flourish both online and off.

I also grew up with Indian American immigrant parents. My grandparents tell me the beauty of this country is that people of so many backgrounds and cultures can come here and coexist. But social media, for all its strengths and potential, gives an outsized voice to people who want to spread hate and sow division. When I scroll through the depths of Twitter, or accidentally click on a YouTube video that sends my recommendations down a rabbit hole of extremism, it’s clear to me that each of us is constantly in danger of being pushed into echo chambers of hate.

Elon Musk’s plans for Twitter threaten the very coexistence my grandparents celebrated. His fumbling leadership of the influential social media platform puts the democratic promise of our country at risk. If hate is allowed to run rampant and millions of users feel unsafe, we are failing to live up to that promise.


Musk has positioned himself as a champion of the First Amendment right to freedom of speech. That framing could not be further from the truth. In practice, he has allowed extremists and people with hateful ideologies to expand their reach on Twitter. Musk fired the staff responsible for moderating hateful content, leaving the company too short-staffed to handle the surge in harmful posts. In the 12 hours after Musk’s ownership of Twitter was finalized, the use of derogatory language toward Black people on the platform increased by almost 500 percent.

While concerns about Musk’s damaging impact on free speech have been well documented, the risk to another fundamental right has been overlooked: freedom of religion, the right of people of all faiths, and of none, to practice what they believe. Social media is so intertwined with our offline lives that threats to religious freedom are no longer confined to the physical world. Every day that hate is allowed to run rampant and target communities online, the freedom to believe as we choose erodes.

In recent years, harmful content on social media has manifested in physical acts of violence targeting vulnerable communities. A 2021 report from the Anti-Defamation League exposed the harmful effects of online hate on different communities, from an increase in violence against Asian Americans, to antisemitic harassment directed at Jewish members of Congress, to the quadrupling of hateful Facebook posts against African Americans after the murder of George Floyd.

There are too many examples of real-world violence committed by young social media users who encountered increasingly extremist content online. The perpetrator of the devastating attack at a supermarket in a predominantly Black neighborhood in Buffalo, N.Y., streamed the massacre on Twitch. In a manifesto posted to Google Docs and filled with white supremacist ideology, the shooter wrote that he had been radicalized on 4chan in 2020. Twitch took the livestream down within two minutes, but the video remained on Facebook for over 10 hours, where 46,000 people shared it. His actions, and the platforms’ failure to identify and remove such content immediately, generated yet more extremist material for other users to view.

For better or for worse, social media is the most accessible way for people to connect online. Our government has an obligation to protect people of all backgrounds and identities. As backlash against content moderation comes to a head on Twitter, there’s no telling how other platforms might adjust their policies in the future. The national conversation around what’s happening with Twitter is laser-focused on the whims of a CEO who doesn’t seem to understand what he wants. All the while, real people and communities are being hurt.

While people like Musk play games with the ever-growing universe we’ve created online, our government must devote real time and resources to taming the giant that is the tech industry. Without regulation, and while people of different faiths, backgrounds and identities are harassed on and off social media, this country fails to be a safe haven for the people who need it most. Big Tech and its social media platforms are only getting started. We must ensure that this industry’s progress does not come at the cost of our most sacred freedoms.
