Docuseries Highlights Need for Legal Protections for Kid Influencers


A new Netflix docuseries explores the unseen complexities and dark possibilities of child influencing in the modern internet age. It raises urgent questions and highlights the critical need for legal protections for kid influencers once their internet presence turns into work: a full-time job that, at times, financially supports their families.

Released last week, “Bad Influence: The Dark Side of Kidfluencing” shares how YouTube star Piper Rockelle—who began posting videos at eight years old and garnered 12 million subscribers and about 1.87 billion views—and her “Squad” of fellow pre-teen social media influencers worked and lived in a toxic environment under Rockelle's “momager,” Tiffany Smith, and Smith's boyfriend, Hunter Hill.


The three-part exposé dives into the harsh, manipulative, and complex working conditions that “Squad” members experienced while working with Smith and Hill, who created a physically, mentally, and emotionally unsafe environment for the underage content creators.

In 2022, eleven former “Squad” members filed a complaint against Smith and Hill for “emotional, verbal, physical, and, at times, sexual abuse” when they were active members of the Squad. The child abuse lawsuit was settled in October 2024 for $1.85 million—far short of the $22 million originally sought—with all parties specifically disclaiming any liability.

All former “Squad” members who have spoken out are still intensely impacted by the trauma caused by Smith and Hill, whether because their online careers have been irreparably damaged or because they are experiencing long-term post-traumatic stress. Attorney Matt Sarelson shared in the documentary, “In many ways, a lawsuit is where justice goes to die.”

The viral series explains how managers of influencers have been able to circumvent child labor laws and protections put in place for children in the entertainment industry.

“These abuse allegations against Tiffany, which include battery and child labor violations, are not unique to the Piper Rockelle/Tiffany case,” journalist Taylor Lorenz said in the series. “These are common forms of abuse that are rampant in the ‘kidfluencer’ industry.”

Several culture experts have criticized the disconnect between many political figures and pop culture, emphasizing the importance of understanding pop culture and acknowledging its significant impact on individuals and groups.

“‘Kidfluencing’ right now is the wild, wild west. I mean, there’s no regulations that keep these influencers safe,” said Brandon Stewart, content strategist and CEO of Brandon Studios.

“It’s an unregulated frontier of the entertainment industry,” shared Attorney Jeremiah D. Graham. “When a child is treated like this, they shouldn’t have to go out and hire private attorneys in order to vindicate their rights.”

“The government has absolutely no appetite to implement any sort of meaningful regulations in this industry. They still treat this industry as a joke,” said Lorenz. “Lawmakers are often 70 to 80 years old. They don’t take this world seriously at all. They make fun of it. They mock it…And until we start taking this industry seriously, until we start viewing influencing as labor, these kids are screwed.”

Legal Protections for Child and Teenage Influencers

Quit Clicking Kids, founded by Chris McCarty, who was featured in the docuseries, advocates for legislation that protects the well-being of child influencers. The initiative looks to expand protections for child actors to child influencers.

In 2022, McCarty worked with Washington State Rep. Emily Wicks (D) to craft and introduce HB 2023. The bill would require guardians to set aside a percentage of social media earnings for children featured in the content and, once they reach the age of 18, allow former child influencers to request the removal of content in which they appear. In 2023, the bill was reintroduced as HB 1627 by Washington State Rep. Kristine Reeves (D) with no changes.

“I think one of the biggest misconceptions is not seeing it as work, especially for the kids,” commented McCarty. “It is very much not a hobby for many of these influencers. It is a job. And in some cases, it’s the primary or even the only source of income for these families. That has the potential to place an undue burden on these children to create content.”

In 2023, Governor J.B. Pritzker (D) signed SB 1782 into law, making Illinois the first state to implement financial protections for child influencers.

In 2024, California Gov. Gavin Newsom (D) signed two bills that protect child and teenage influencers from financial abuse:

AB 1880 expanded the Coogan Law—which requires employers of child performers and creators to save at least 15% of their gross earnings in a trust, accessible once the child reaches adulthood—to also financially protect underage content creators.

SB 764 requires that parents or guardians of minors featured in monetized online content set aside a percentage of their earnings in trust accounts.

Despite the growing call for legal protections, a pressing question remains: How can we increase safety regulations to protect the entire well-being, not just the financial well-being, of child and teenage influencers?

Currently, advocates call for educating audiences and reforming internet culture to be more skeptical about child-centered content and to be more concerned for the well-being of children featured in monetized content. Others point to social media platforms, stating that it is their responsibility to rethink their business models and prioritize the safety of children.

“We really just need to educate people. We need to change the culture. We need to change norms around parenting,” stated Lorenz. “The fundamental problem is the business model of these platforms and these capitalist incentives.”

Belén Dumont is a freelance reporter and associate editor at The Fulcrum.

