How AI Deepfakes in Classrooms Expose a Crisis of Accountability and Civic Trust

AI-generated “nudification” is no longer a distant threat—it’s harming students now. As deepfake pornography spreads in schools nationwide, educators are left to confront a growing crisis that outpaces laws, platforms, and parental awareness.

While public outrage flares when AI tools like Elon Musk’s Grok generate sexualized images of adults on X—often without consent—schools have been dealing with this harm for years. For school-aged children, AI-generated “nudification” is not a future threat or an abstract tech concern; it is already shaping their daily lives.

Last month, that reality became impossible to ignore in Lafourche Parish, Louisiana. A father sued the school district after several middle school boys circulated AI-generated pornographic images of eight female classmates, including his 13-year-old daughter. When the girl confronted one of the boys and punched him on a school bus, she was expelled. The boy who helped create and spread the images faced no formal consequences.


The case ignited debate over internet safety, deepfake pornography, and school discipline. But it also exposed a deeper truth we are reluctant to confront: decisions made by powerful tech leaders are reshaping childhood faster than schools, parents, or laws can respond—and schools are being left to manage the fallout without the tools they need.

Recent survey data confirms this is not an isolated incident. Researchers found that AI “nudification” is increasingly common in schools, used to harass, humiliate, and exert power over peers. What adults may still perceive as shocking misconduct has, for many students, become disturbingly normalized.

In nearly all 50 states and in Washington, D.C., creating and distributing child sexual abuse material is a crime. AI-generated deepfakes, however, present a unique challenge. These images are easy to create, can be shared widely in seconds, and often disappear from platforms just as quickly. Even when perpetrators are identified, the speed, volume, and anonymity of digital sharing make enforcement extraordinarily difficult.

Expecting the legal system to track and prosecute every child and teenager contributing to this epidemic is neither realistic nor effective. If we focus only on punishment after harm occurs, we will always be too late. The goal must be prevention.

Research shows that 31 percent of young people are familiar with deepfake nudes, and one in eight knows someone who has been victimized by them. Girls account for 99 percent of the victims. One in 17 youth and young adults has been directly targeted by AI-generated sexual images—roughly one student in every middle school classroom in the United States. This is not a fringe issue or a moral panic. It is a widespread form of sexual harassment enabled by technology that outpaces our safeguards.

Students need clear guidance to navigate a digital world where a single harmless photo can be transformed into a weapon—sometimes without malicious intent, but with devastating consequences. Yet only 28 states and the District of Columbia require sex education, and just 12 include instruction on consent. This gap has created ideal conditions for the deepfake crisis to flourish.

Without education on bodily autonomy, digital boundaries, and consent, and without meaningful safeguards from tech companies, young people are left unequipped to recognize the harm in creating and sharing explicit AI images. They are even less prepared to respond when they or their peers become targets.

As a mother, I resist the urge to say simply that parents need to talk to their kids. Parents are essential, but many lack the technical knowledge, consistent access, or awareness needed to explain how these images are created, how quickly they spread, and the profound psychological harm they cause. That is where schools must step in.

As a former middle school teacher, I have sat across from parents explaining the seriousness of emerging online trends long before they reached Facebook groups, GroupMe chats, or parent blogs. Schools are often the first places where this harm appears—and they are uniquely positioned to respond.

Schools can and should provide structured, age-appropriate education that reaches all students, ensures consistent messaging, and creates space for honest discussion. Lessons should include:

  • How popular apps and tools generate AI images
  • The legal ramifications and potential criminal liability
  • The deep psychological and emotional harm inflicted on victims
  • Clear school- or district-wide reporting protocols
  • The rights of victims and available supports

Educators already manage cyberbullying, hunger, school violence, and adolescent mental health. Some may ask whether this is one burden too many. But integrating education about AI-generated pornography is not an added responsibility—it is a necessary evolution of student safety in a digital age.

Unlike many victims, both the woman targeted on X and the 13-year-old girl in Lafourche Parish reported their abuse. But for every report, how many students suffer in silence—ashamed, afraid, or unsure whether adults will take them seriously?

While platforms like X attempt to normalize or minimize the harm of deepfake nudification, educators must push back against the idea that this behavior is acceptable or consequence-free. That message does not stay online. It reaches classrooms, school buses, and lunch tables. When perpetrators face little accountability and victims are punished for reacting, the lesson students learn is devastatingly clear.

If tech leaders will not fully account for the damage their products enable, schools must act—not through harsher punishment, but through education. Teaching AI literacy, consent, and respect is our strongest defense against a problem that is only growing. Prevention, not discipline, is how we protect children—and how we ensure no more students have to fight back just to be heard.


Julienne Louis-Anderson is a former educator, curriculum writer, and educational equity advocate. She is also a Public Voices Fellow of The OpEd Project in partnership with the National Black Child Development Institute.

