Parents: It’s Time To Get Mad About Online Child Sexual Abuse

Opinion

A person using a smartphone.

With millions of child abuse images reported annually and AI creating new dangers, advocates are calling for accountability from Big Tech and stronger laws to keep kids safe online.

Getty Images, ljubaphoto

Forty-five years ago this month, Mothers Against Drunk Driving had its first national press conference, and a global movement to stop impaired driving was born. MADD was founded by Candace Lightner after her 13-year-old daughter was struck and killed by a drunk driver while walking to a church carnival in 1980. Terms like “designated driver” and the slogan “Friends don’t let friends drive drunk” came out of MADD’s campaigning, and a variety of state and federal laws, like a lowered blood alcohol limit and legal drinking age, were instituted thanks to their advocacy. Over time, social norms evolved, and driving drunk was no longer seen as a “folk crime,” but a serious, conscious choice with serious consequences.

Movements like this one, started by fed-up, grieving parents working with law enforcement and lawmakers, lowered road fatalities nationwide, inspired similar campaigns in other countries, and saved countless lives.


But today, one of the biggest dangers to children comes with almost no safeguards: the internet. Parents know the risks, yet there is no large-scale “movement” when it comes to keeping our kids safe online.

This is a big missed opportunity. The internet is not going anywhere, but to make it safer for children and young people, parents are key, and they need to get mad on a much larger scale.

In 2024, there were 20.5 million reports of child sexual abuse material made to the National Center for Missing and Exploited Children’s CyberTipline, and underreporting is a serious problem. These images represent real children who have been abused, and photos and videos of that abuse spread exponentially on platforms we use every day. Add to that the rising number of teens who have died by suicide after being groomed and extorted, and the number of kids who are exposed to pornographic material on sites that are supposedly “safe” for children.

AI is complicating matters further, suggesting extreme dieting to teens and offering advice on how to commit suicide. According to Common Sense Media, 3 out of 4 kids have used an AI chatbot, and many parents have no idea.

Despite widespread acknowledgement of child sexual abuse imagery and exploitation on all major platforms, tech companies are still not required to proactively search for, detect, or remove content unless it is reported to them. Online safeguards are, by and large, voluntary, and tech companies are still rarely held accountable for crimes committed on their sites, creating a virtual playground where predators can groom children without consequences.

Much like the lax culture around drunk driving before MADD, the dangers online are often seen as an unfortunate risk that parents are forced to accept in order to let their children and teens exist in the digital world. Instead of anger, there is a sense of overwhelm and apathy at the scale and the ubiquity of online risks. Parents are mostly forced to throw up their hands, put in place whatever precautions they can, and just go along with it. This is unacceptable.

Congress is making some progress towards passing legislation that will help hold tech companies accountable and let law enforcement better prosecute these crimes. Other countries around the world, like Australia, the U.K., and Brazil, are starting to pass online safety legislation, too. But these achievements are largely uncoordinated, and they exist on a national scale, not a global one.

Since most Big Tech companies are based in the U.S., Congress must take the lead in holding companies accountable for the risks children face online. We also need a collective, organic effort, led by parents and the public, that will drive a global movement for sustainable, meaningful change.

It is not up to parents to solve this crisis. But parents can – and should – be angry. And we must use that anger to fuel change. We must educate ourselves about the risks and not be afraid to talk to others about what our kids are facing. Tech companies will not police themselves, so parents, teachers, and adults who care about children must keep pressuring Congress to act. We can end online child sexual abuse and make the internet a much safer place for everyone, but only if we come together first.


Erin Nicholson is the strategic communications adviser for ChildFund International, a global nonprofit dedicated to protecting children online and offline. ChildFund launched the #TakeItDown campaign in 2023 to combat online child sexual abuse material. She is currently a Public Voices Fellow on Prevention of Child Sexual Abuse with The OpEd Project.
