Parents: It’s Time To Get Mad About Online Child Sexual Abuse

Opinion

A person using a smartphone.

With millions of child abuse images reported annually and AI creating new dangers, advocates are calling for accountability from Big Tech and stronger laws to keep kids safe online.

Getty Images, ljubaphoto

Forty-five years ago this month, Mothers Against Drunk Driving had its first national press conference, and a global movement to stop impaired driving was born. MADD was founded by Candace Lightner after her 13-year-old daughter was struck and killed by a drunk driver while walking to a church carnival in 1980. Terms like “designated driver” and the slogan “Friends don’t let friends drive drunk” came out of MADD’s campaigning, and a variety of state and federal laws, like a lowered blood alcohol limit and legal drinking age, were instituted thanks to their advocacy. Over time, social norms evolved, and driving drunk was no longer seen as a “folk crime,” but a serious, conscious choice with serious consequences.

Movements like this one, started by fed-up, grieving parents working with law enforcement and lawmakers, lowered road fatalities nationwide, inspired similar campaigns in other countries, and saved countless lives.


But today, one of the biggest dangers to children comes with almost no safeguards: the internet. Parents know the risks, yet there is no large-scale “movement” when it comes to keeping our kids safe online.

This is a big missed opportunity. The internet is not going anywhere, but in order to make it safer for children and young people, parents are key, and they need to get mad on a much larger scale.

In 2024, there were 20.5 million reports of child sexual abuse material made to the National Center for Missing and Exploited Children’s CyberTipline, and underreporting is a serious problem. These images represent real children who have been abused, and photos and videos of that abuse are shared, exponentially, on platforms that we use every day. Add to that the rising number of teens who have died by suicide after being groomed and extorted, and the number of kids who are exposed to pornographic material on sites that are supposedly “safe” for children.

AI is complicating matters further, suggesting extreme dieting to teens and offering advice on how to commit suicide. According to Common Sense Media, 3 out of 4 kids have used an AI chatbot, and many parents have no idea.

Despite widespread acknowledgement of child sexual abuse imagery and exploitation on all major platforms, tech companies are still not required to proactively search for, detect, or remove content unless it is reported to them. Online safeguards are, by and large, voluntary, and tech companies are still rarely held accountable for crimes committed on their sites, creating a virtual playground for predators to groom children without consequences.

Much like the lax culture around drunk driving before MADD, the dangers online are often seen as an unfortunate risk that parents are forced to accept in order to let their children and teens exist in the digital world. Instead of anger, there is a sense of overwhelm and apathy at the scale and the ubiquity of online risks. Parents are mostly forced to throw up their hands, put in place whatever precautions they can, and just go along with it. This is unacceptable.

Congress is making some progress towards passing legislation that will help hold tech companies accountable and let law enforcement better prosecute these crimes. Other countries around the world, like Australia, the U.K., and Brazil, are starting to pass online safety legislation, too. But these achievements are largely uncoordinated, and they exist on a national scale, not a global one.

Since most Big Tech companies are based in the U.S., Congress must take the lead in holding companies accountable for the risks children face online. We also need a collective, organic effort led by parents and the public that will drive a global movement for sustainable, meaningful change.

It is not up to parents to solve this crisis. But parents can – and should – be angry. And we must use that anger to fuel change. We must educate ourselves about the risks and not be afraid to talk to others about the risks our kids are facing. The tech companies will not bring themselves down, so parents, teachers, and adults who care about children must continue putting pressure on Congress to act. We can end online child sexual abuse and make the internet a much safer place for everyone, but only if we come together first.


Erin Nicholson is the strategic communications adviser for ChildFund International, a global nonprofit dedicated to protecting children online and offline. ChildFund launched the #TakeItDown campaign in 2023 to combat online child sexual abuse material. She is currently a Public Voices Fellow on Prevention of Child Sexual Abuse with The OpEd Project.

Read More

New Cybersecurity Rules for Healthcare? Understanding HHS’s HIPAA Proposal
Getty Images, Kmatta

Background

The Health Insurance Portability and Accountability Act (HIPAA) was enacted in 1996 to protect sensitive health information from being disclosed without patients’ consent. Under this act, a patient’s privacy is safeguarded through the enforcement of strict standards on managing, transmitting, and storing health information.

Two people looking at screens.

A case for optimism, risk-taking, and policy experimentation in the age of AI—and why pessimism threatens technological progress.

Getty Images, Andriy Onufriyenko

In Defense of AI Optimism

Society needs people to take risks. Entrepreneurs who bet on themselves create new jobs. Institutions that gamble with new processes find out how best to integrate advances into modern life. Regulators who accept potential backlash by launching policy experiments give us a chance to devise laws that are based on evidence, not fear.

The need for risk-taking is all the more important when society is presented with new technologies. When new tech arrives on the scene, defense of the status quo is the easier path, individually, institutionally, and societally. We are all predisposed to think that the calamities, ailments, and flaws we experience today, as bad as they may be, are preferable to the unknowns tied to tomorrow.

Trump Signs Defense Bill Prohibiting China-Based Engineers in Pentagon IT Work

President Donald Trump with Secretary of State Marco Rubio, left, and Secretary of Defense Pete Hegseth

Tasos Katopodis/Getty Images

President Donald Trump signed into law this month a measure that prohibits anyone based in China and other adversarial countries from accessing the Pentagon’s cloud computing systems.

The ban, which is tucked inside the $900 billion defense policy law, was enacted in response to a ProPublica investigation this year that exposed how Microsoft used China-based engineers to service the Defense Department’s computer systems for nearly a decade — a practice that left some of the country’s most sensitive data vulnerable to hacking from its leading cyber adversary.

Someone using an AI chatbot on their phone.

AI-powered wellness tools promise care at work, but raise serious questions about consent, surveillance, and employee autonomy.

Getty Images, d3sign

Why Workplace Wellbeing AI Needs a New Ethics of Consent

Across the U.S. and globally, employers—including corporations, healthcare systems, universities, and nonprofits—are increasing investment in worker well-being. The global corporate wellness market reached $53.5 billion in sales in 2024, with North America leading adoption. Corporate wellness programs now use AI to monitor stress, track burnout risk, or recommend personalized interventions.

Vendors offering AI-enabled well-being platforms, chatbots, and stress-tracking tools are rapidly expanding. Chatbots such as Woebot and Wysa are increasingly integrated into workplace wellness programs.
