Parents: It’s Time To Get Mad About Online Child Sexual Abuse

Opinion

A person using a smartphone.

With millions of child abuse images reported annually and AI creating new dangers, advocates are calling for accountability from Big Tech and stronger laws to keep kids safe online.

Getty Images, ljubaphoto

Forty-five years ago this month, Mothers Against Drunk Driving held its first national press conference, and a global movement to stop impaired driving was born. MADD was founded by Candace Lightner after her 13-year-old daughter was struck and killed by a drunk driver while walking to a church carnival in 1980. Terms like “designated driver” and the slogan “Friends don’t let friends drive drunk” came out of MADD’s campaigning, and a variety of state and federal laws, such as a lower legal blood alcohol limit and a higher minimum drinking age, were instituted thanks to its advocacy. Over time, social norms evolved, and driving drunk was no longer seen as a “folk crime,” but as a serious, conscious choice with serious consequences.

Movements like this one, started by fed-up, grieving parents working with law enforcement and lawmakers, lowered road fatalities nationwide, inspired similar campaigns in other countries, and saved countless lives.


But today, one of the biggest dangers to children comes with almost no safeguards: the internet. Parents know the risks, yet there is no large-scale “movement” when it comes to keeping our kids safe online.

This is a big missed opportunity. The internet is not going anywhere, but making it safer for children and young people depends on parents, and they need to get mad on a much larger scale.

In 2024, there were 20.5 million reports of child sexual abuse material made to the National Center for Missing and Exploited Children’s CyberTipline, and underreporting remains a serious problem. These images represent real children who have been abused, and the photos and videos of that abuse are shared, exponentially, on platforms we use every day. Add to that the rising number of teens who have died by suicide after being groomed and extorted, and the number of kids exposed to pornographic material on sites that are supposedly “safe” for children.

AI is complicating matters further, suggesting extreme dieting to teens and offering advice on how to commit suicide. According to Common Sense Media, 3 out of 4 kids have used an AI chatbot, and many parents have no idea.

Despite widespread acknowledgement of child sexual abuse imagery and exploitation on all major platforms, tech companies are still not required to proactively search for, detect, or remove such content unless it is reported to them. Online safeguards are, by and large, voluntary, and tech companies are still rarely held accountable for crimes committed on their sites, creating a virtual playground where predators can groom children without consequences.

Much like the lax culture around drunk driving before MADD, the dangers online are often seen as an unfortunate risk that parents are forced to accept in order to let their children and teens exist in the digital world. Instead of anger, there is a sense of overwhelm and apathy at the scale and the ubiquity of online risks. Parents are mostly forced to throw up their hands, put in place whatever precautions they can, and just go along with it. This is unacceptable.

Congress is making some progress towards passing legislation that will help hold tech companies accountable and let law enforcement better prosecute these crimes. Other countries around the world, like Australia, the U.K., and Brazil, are starting to pass online safety legislation, too. But these achievements are largely uncoordinated, and they exist on a national scale, not a global one.

Since most Big Tech companies are based in the U.S., Congress must take the lead in holding companies accountable for the risks children face online. We also need a collective, organic effort, led by parents and the public, that can drive a global movement for sustainable, meaningful change.

It is not up to parents to solve this crisis. But parents can – and should – be angry. And we must use that anger to fuel change. We must educate ourselves and not be afraid to talk to others about the risks our kids are facing. The tech companies will not rein themselves in, so parents, teachers, and adults who care about children must keep pressuring Congress to act. We can end online child sexual abuse and make the internet a much safer place for everyone, but only if we come together first.


Erin Nicholson is the strategic communications adviser for ChildFund International, a global nonprofit dedicated to protecting children online and offline. ChildFund launched the #TakeItDown campaign in 2023 to combat online child sexual abuse material. She is currently a Public Voices Fellow on Prevention of Child Sexual Abuse with The OpEd Project.
