How AI Deepfakes in Classrooms Expose a Crisis of Accountability and Civic Trust

AI-generated “nudification” is no longer a distant threat—it’s harming students now. As deepfake pornography spreads in schools nationwide, educators are left to confront a growing crisis that outpaces laws, platforms, and parental awareness.

While public outrage flares when AI tools like Elon Musk’s Grok generate sexualized images of adults on X—often without consent—schools have been dealing with this harm for years. For school-aged children, AI-generated “nudification” is not a future threat or an abstract tech concern; it is already shaping their daily lives.

Last month, that reality became impossible to ignore in Lafourche Parish, Louisiana. A father sued the school district after several middle school boys circulated AI-generated pornographic images of eight female classmates, including his 13-year-old daughter. When the girl confronted one of the boys and punched him on a school bus, she was expelled. The boy who helped create and spread the images faced no formal consequences.


The case ignited debate over internet safety, deepfake pornography, and school discipline. But it also exposed a deeper truth we are reluctant to confront: decisions made by powerful tech leaders are reshaping childhood faster than schools, parents, or laws can respond—and schools are being left to manage the fallout without the tools they need.

Recent survey data confirms this is not an isolated incident. Researchers found that AI “nudification” is increasingly common in schools, used to harass, humiliate, and exert power over peers. What adults may still perceive as shocking misconduct has, for many students, become disturbingly normalized.

In nearly all 50 states and Washington, D.C., creating and distributing child sexual abuse material, including AI-generated imagery, is a crime. Deepfakes, however, present a unique enforcement challenge. These images are easy to create, can be shared widely in seconds, and often disappear from platforms just as quickly. Even when perpetrators are identified, the speed, volume, and anonymity of digital sharing make enforcement extraordinarily difficult.

Expecting the legal system to track and prosecute every child and teenager contributing to this epidemic is neither realistic nor effective. If we focus only on punishment after harm occurs, we will always be too late. The goal must be prevention.

Research shows that 31 percent of young people are familiar with deepfake nudes, and one in eight knows someone who has been victimized by them. Girls account for 99 percent of the victims. One in 17 youth and young adults has been directly targeted by AI-generated sexual images—roughly one student in every middle school classroom in the United States. This is not a fringe issue or a moral panic. It is a widespread form of sexual harassment enabled by technology that outpaces our safeguards.

Students need clear guidance to navigate a digital world where a single harmless photo can be transformed into a weapon—sometimes without malicious intent, but with devastating consequences. Yet only 28 states and the District of Columbia require sex education, and just 12 include instruction on consent. This gap has created ideal conditions for the deepfake crisis to flourish.

Without education on bodily autonomy, digital boundaries, consent, and meaningful safeguards from tech companies, young people are left unequipped to recognize the harm in creating and sharing explicit AI images. They are even less prepared to respond when they or their peers become targets.

As a mother, I resist the urge to say simply that parents need to talk to their kids. Parents are essential, but many lack the technical knowledge, consistent access, or awareness needed to explain how these images are created, how quickly they spread, and the profound psychological harm they cause. That is where schools must step in.

As a former middle school teacher, I have sat across from parents explaining the seriousness of emerging online trends long before they reached Facebook groups, GroupMe chats, or parent blogs. Schools are often the first places where this harm appears—and they are uniquely positioned to respond.

Schools can and should provide structured, age-appropriate education that reaches all students, ensures consistent messaging, and creates space for honest discussion. Lessons should include:

  • How popular apps and tools generate AI images
  • The legal ramifications and potential criminal liability
  • The deep psychological and emotional harm inflicted on victims
  • Clear school- or district-wide reporting protocols
  • The rights of victims and available supports

Educators already manage cyberbullying, hunger, school violence, and adolescent mental health. Some may ask whether this is one burden too many. But integrating education about AI-generated pornography is not an added responsibility—it is a necessary evolution of student safety in a digital age.

Unlike many victims, both the women targeted on X and the 13-year-old girl in Lafourche Parish reported their abuse. But for every report, how many students suffer in silence, ashamed, afraid, or unsure whether adults will take them seriously?

While platforms like X attempt to normalize or minimize the harm of deepfake nudification, educators must push back against the idea that this behavior is acceptable, excusable, or consequence-free. That message does not stay online. It reaches classrooms, school buses, and lunch tables. When perpetrators face little accountability and victims are punished for reacting, the lesson students learn is devastatingly clear.

If tech leaders will not fully account for the damage their products enable, schools must act—not through harsher punishment, but through education. Teaching AI literacy, consent, and respect is our strongest defense against a problem that is only growing. Prevention, not discipline, is how we protect children—and how we ensure no more students have to fight back just to be heard.


Julienne Louis-Anderson is a former educator, curriculum writer, and educational equity advocate. She is also a Public Voices Fellow of The OpEd Project in partnership with the National Black Child Development Institute.

