Parents: It’s Time To Get Mad About Online Child Sexual Abuse

Opinion

A person using a smartphone.

With millions of child abuse images reported annually and AI creating new dangers, advocates are calling for accountability from Big Tech and stronger laws to keep kids safe online.

Getty Images, ljubaphoto

Forty-five years ago this month, Mothers Against Drunk Driving held its first national press conference, and a global movement to stop impaired driving was born. MADD was founded by Candace Lightner after her 13-year-old daughter was struck and killed by a drunk driver while walking to a church carnival in 1980. Terms like “designated driver” and the slogan “Friends don’t let friends drive drunk” came out of MADD’s campaigning, and a variety of state and federal laws, like a lowered blood alcohol limit and a raised minimum drinking age, were instituted thanks to its advocacy. Over time, social norms evolved, and driving drunk was no longer seen as a “folk crime,” but as a serious, conscious choice with serious consequences.

Movements like this one, started by fed-up, grieving parents working with law enforcement and lawmakers, lowered road fatalities nationwide, inspired similar campaigns in other countries, and saved countless lives.


But today, one of the biggest dangers to children comes with almost no safeguards: the internet. Parents know the risks, yet there is no large-scale “movement” when it comes to keeping our kids safe online.

This is a big missed opportunity. The internet is not going anywhere, but to make it safer for children and young people, parents are key, and they need to get mad on a much larger scale.

In 2024, there were 20.5 million reports of child sexual abuse material made to the National Center for Missing and Exploited Children’s CyberTipline, and underreporting is a serious problem. These images represent real children who have been abused, with photos and videos of that abuse shared, exponentially, on platforms that we use every day. Add to that the rising number of teens who have died by suicide after being groomed and extorted, and the number of kids who are exposed to pornographic material on sites that are supposedly “safe” for children.

AI is complicating matters further, suggesting extreme dieting to teens and offering advice on how to commit suicide. According to Common Sense Media, 3 out of 4 kids have used an AI chatbot, and many parents have no idea.

Despite widespread acknowledgement of child sexual abuse imagery and exploitation on all major platforms, tech companies are still not required to proactively search for, detect, or remove content unless it is reported to them. Online safeguards are, by and large, voluntary, and tech companies are still rarely held accountable for crimes committed on their sites, creating a virtual playground for predators to groom children without consequences.

Much like the lax culture around drunk driving before MADD, the dangers online are often seen as an unfortunate risk that parents are forced to accept in order to let their children and teens exist in the digital world. Instead of anger, there is a sense of overwhelm and apathy at the scale and the ubiquity of online risks. Parents are mostly forced to throw up their hands, put in place whatever precautions they can, and just go along with it. This is unacceptable.

Congress is making some progress towards passing legislation that will help hold tech companies accountable and let law enforcement better prosecute these crimes. Other countries around the world, like Australia, the U.K., and Brazil, are starting to pass online safety legislation, too. But these achievements are largely uncoordinated, and they exist on a national scale, not a global one.

Since most Big Tech companies are based in the U.S., Congress must take the lead in making companies accountable for the risks children face online. We also need a collective, organic effort, led by parents and the public, that will drive a global movement for sustainable, meaningful change.

It is not up to parents to solve this crisis. But parents can – and should – be angry. And we must use that anger to fuel change. We must educate ourselves about the risks and not be afraid to talk to others about what our kids are facing. The tech companies will not rein themselves in, so parents, teachers, and adults who care about children must continue putting pressure on Congress to act. We can end online child sexual abuse and make the internet a much safer place for everyone, but only if we come together first.


Erin Nicholson is the strategic communications adviser for ChildFund International, a global nonprofit dedicated to protecting children online and offline. ChildFund launched the #TakeItDown campaign in 2023 to combat online child sexual abuse material. She is currently a Public Voices Fellow on Prevention of Child Sexual Abuse with The OpEd Project.
