Deepfakes: The New Face of Cyberbullying and Why Parents, Schools, and Lawmakers Must Act

Opinion

Posters are displayed next to Sen. Ted Cruz (R-TX) as he speaks at a news conference to unveil the Take It Down Act to protect victims against non-consensual intimate image abuse, on Capitol Hill on June 18, 2024, in Washington, DC. (Getty Images, Andrew Harnik)

A lawsuit against xAI over AI-generated deepfakes targeting teenage girls exposes a growing crisis in schools. As laws struggle to keep up, this story explores AI accountability, teen safety, and what educators and parents must do now.

As a former teacher who worked in a high school when Snapchat was born, I witnessed the rise of sexting and its impact on teens. I recall asking a parent whether he was checking his daughter’s phone for inappropriate messages. His response: “Sometimes you just don’t want to know.” But the federal lawsuit filed last week against Elon Musk's xAI has put a national spotlight on AI-generated deepfakes and the teenage girls they target. Parents and teachers can no longer ignore the crisis inside our schools.

AI Companies Built the Tool. The Grok Lawsuit Says They Own the Damage.

Whether or not French prosecutors are right that Elon Musk deliberately allowed the sexualized-image controversy to grow so it would drive up activity on the platform and boost the company’s valuation, the underlying problem is the same: when a company builds a tool it knows can be weaponized but chooses to release it anyway, it is making a risk-based bet that it can act without consequence. The Grok lawsuit could make these types of business decisions much more costly.

Attorneys for the plaintiffs in the federal lawsuit filed against xAI this month argue the company "saw a business opportunity: an opportunity to profit off the sexual predation of real people, including children," according to Rolling Stone. The outrage is global. France reported Grok to prosecutors. Malaysia and Indonesia blocked the chatbot. Brazil demanded that X remove deepfake content, and the UK is taking legislative action to criminalize the creation of sexually explicit, nonconsensual content.

The Cat-and-Mouse Problem

"Nudification" apps, tools that strip clothing from photos to generate realistic nude images, have existed in corners of the internet for years with little consequence. But in 2024 and 2025, when major AI platforms including xAI updated their tools in ways that made the capability accessible to almost anyone, what had been a niche problem spread rapidly into schools and communities across the country. Victims, unfortunately, are left with little recourse for a couple reasons.

First, when regulation or enforcement cracks down on one platform, offenders simply hop to another. This pattern is familiar to security practitioners, and the takedown of the Hydra Market in 2022 is a classic example: a dark web marketplace is seized, and almost overnight a new one emerges in its place. The threat isn’t eliminated. It relocates.

Second, it’s difficult to trace the true origin of deepfake images and videos. Forensic tools exist, but they lack the sophistication needed to be truly helpful in deepfake video investigations, according to ethical hacker FC, aka FREAKYCLOWN. “Whilst there may be a digital trail to follow with some deepfakes, attribution is always going to be challenging.”

There’s also cost, which can be prohibitive for many teenage victims, and no guarantee that a forensic investigation will yield the results they need. The best place for victims to start is trying to find where the content originated. “Signatures, like which software generated it, which software distributed it, and any possible metadata embedded in the file as well as digital fingerprints for certain platforms will be apparent,” FC said. “But in many cases the person behind the video may have a level of anonymity that could be impossible to unpick.”
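
For readers curious what that first pass can look like in practice, here is a minimal sketch, assuming Python 3 with the Pillow library; the metadata keys it surfaces are illustrative, since many tools strip or never write them, and their absence proves nothing:

```python
# Minimal sketch: dump whatever generator metadata an image still carries.
# Assumes Python 3 with Pillow installed (pip install Pillow).
import sys

from PIL import Image
from PIL.ExifTags import TAGS

def inspect_image(path: str) -> None:
    with Image.open(path) as img:
        # PNG text chunks: some generator UIs record prompts and settings
        # under keys like "parameters". Keys vary by tool; treat as hints.
        for key, value in img.info.items():
            if isinstance(value, str):
                print(f"info[{key!r}]: {value[:200]}")

        # EXIF tags: the Software tag sometimes names the producing tool.
        exif = img.getexif()
        for tag_id, value in exif.items():
            tag = TAGS.get(tag_id, tag_id)
            print(f"exif[{tag}]: {value}")

if __name__ == "__main__":
    inspect_image(sys.argv[1])
```

Even when such traces survive, they point at a piece of software rather than a person, which is exactly the attribution gap FC describes.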

The Law Is Catching Up, But Not Fast Enough

Since deepfake technologies became publicly accessible almost a decade ago, states have been passing legislation to protect victims. Pennsylvania, for example, amended its criminal code to specifically classify AI-generated child sexual abuse material as a third-degree felony, and legislators in multiple other states are pursuing similar measures. But it wasn’t until 2025 and early 2026 that the federal Take It Down Act became law and took full effect. The Act is a step in the right direction, but legal experts caution that legislation takes time, and teenagers remain acutely vulnerable while lawmakers work to catch up. Still, states are looking to hold not only perpetrators but also AI platforms accountable.

What Schools Must Do Now

Schools need clear protocols so that when students report these incidents, administrators escalate them rather than bury them. "Parents, educators, workers, and policymakers are now asking sharper questions about accountability, fairness, and safety. We still have time to shape how these systems enter public life," said J.B. Branch, an attorney and policy counsel at Public Citizen.

As a first step, schools can follow the lead of Lynbrook High School in San Jose, CA, where the board of trustees unanimously approved updates to the district’s bullying policy, Board Policy 5131.2, to include protections against cyberbullying both on and off campus.

A Reason for Cautious Hope

Effective change requires strong leadership, though. Unfortunately, nearly seven months passed between the first known case of Grok being misused and restrictions being put in place. Even then, the restrictions applied only to non-paying subscribers, and they were paired with Musk denying any knowledge of Grok creating sexualized images of minors.

This lawsuit is an opportunity to establish something the cybersecurity industry has long understood about accountability: if you build the tool that makes the harm possible, you bear responsibility for what it does. AI companies need to implement structural safeguards before deploying tools capable of generating explicit content, not wait until a public outcry or class-action lawsuit forces their hand.

AI companies built the tools, and now a lawsuit is asking one of them to own the damage. The real question is not only whether they will be held accountable but whether this will be the moment that changes the calculus permanently.

Kacy Zurkus is a freelance writer whose work has been published in Next Avenue, Dark Reading, and Security Boulevard. She's also Director of Content for RSAC, a cybersecurity company.

