The Selective Sanctity of Death: When Empathy Depends on Skin Color

Opinion

Rampant calls to avoid sharing the video of Charlie Kirk’s death have been swift and emphatic across social media. “We need to keep our souls clean,” journalists plead. “Where are social media’s content moderators?” “How did we get so desensitized?” The moral outrage is palpable; the demands for human dignity urgent and clear.

But as a Black woman who has been forced to witness the constant virality of Black death, I must ask: where was this widespread anger for George Floyd? For Philando Castile? For Daunte Wright? For Tyre Nichols?


The contrast reveals something deeply disturbing about how America processes death, trauma, and whose humanity deserves protection. When Charlie Kirk was killed, the immediate response centered on his humanity: he was a loving father, a devoted husband, a tragic loss. His past controversial statements—comparing Islam’s Prophet Muhammad to Jeffrey Epstein, claiming white Americans were under attack—are now treated as off-limits during this time of mourning. When Black people are murdered, the questions arrive before any centering of personhood: What were they doing? Did they have a criminal record? Were they resisting arrest? Why didn’t they just comply?

Black victims get investigated posthumously, their past mistakes weaponized to justify their deaths before their bodies are cold. Trayvon Martin’s family had to prove he was a “good kid.” George Floyd’s criminal history became national news. The same publications now demanding privacy and respect for Kirk’s family had no problem consuming, sharing, and endlessly analyzing footage of Black people’s final moments.

And why wouldn’t they? The economics of Black death are undeniable. A Pew Research Center study showed that “a large share of Americans (88%)—including about nine-in-ten each among White, Black, Republican and Democratic adults” have watched videos of Black death. Google Trends shows that the murder of Black people is “amongst the most popular searches in Google’s history.” Images of Black people being killed by police often “garner over 2.4 million clicks in 24 hours, and the average ‘cost per click’ often reaches $6 per click,” making the virality of Black death not only incentivized but nearly guaranteed.

Many of the same voices now demanding “decency” around Kirk’s death were notably silent when Black death videos were being monetized and shared endlessly. The commodification reveals a fundamental hierarchy: whose life has inherent value versus whose death serves a utilitarian purpose.

Watch how the media frames these deaths differently. Kirk was killed or assassinated—active language that centers the crime and demands justice. Black victims often “died during a police encounter” or “lost their lives in an incident”—passive language that obscures responsibility. Kirk’s killer is a shooter or assassin. Police who kill Black people are “officers involved in the incident.”

When white people die violently, their death is sacred, an untouchable tragedy demanding reverence and privacy. When Black people die violently, their deaths are educational content, evidence to be analyzed, footage to be replayed during panel discussions, and content to be dissected for lessons about American racism.

George Floyd’s death wasn’t treated as a private family tragedy deserving dignity. It became required viewing for America’s racial education. His final moments were played on loop, analyzed frame by frame, and transformed into memes and political statements. The same media outlets now calling for restraint around Kirk’s death video had no qualms about turning Floyd’s murder into content.

The family treatment differs drastically, too. Kirk’s family gets immediate protection from scrutiny. Their grief is respected, their privacy defended. Black families get thrust into the spotlight, forced to become activists and advocates while processing trauma. They must educate America about their loved one’s humanity while Kirk’s family gets to grieve privately.

Even the calls for justice follow different patterns. Kirk’s death demands swift action, comprehensive investigation, and retribution. Black deaths spark debates about “both sides,” calls for more information, and suggestions that we wait for all the facts. The urgency is different, the moral clarity conditional.

This hierarchy of death reflects America’s hierarchy of life. Some deaths are tragedies that unite us in grief. Others become teachable moments that divide us in debate. Some victims get privacy and dignity in death. Others get transformed into hashtags and causes.

The performative outrage around Kirk’s death video exposes this hypocrisy perfectly. Where was this moral clarity when Black death became America’s most consumed content?

Sanctity of death for Black victims would mean their final moments stay sacred, not viral. It would mean their families get privacy to grieve instead of becoming public educators about their loved one’s humanity. It would mean their deaths inspire justice, not clicks. But that’s not where we are. America has denied Black victims these basic dignities while demanding them instantly for others.

This racial hierarchy of death is one of America’s most revealing double standards. And so, the question isn’t whether we should share videos of death—we shouldn’t. The question is why our answer changes depending on whose death we’re watching.

Stephanie Toliver is a Public Voices Fellow and a member of the OpEd Alumni Project sponsored by the University of Illinois.
