The New Talk: The Need To Discuss AI With Kids

Opinion

AI is changing childhood. Kevin Frazier explains why it's critical for parents and mentors to start having the “AI talk” and teach kids safe, responsible AI use.

“[I]t is a massively more powerful and scary thing than I knew about.” That’s how Adam Raine’s dad characterized ChatGPT when he reviewed his son’s conversations with the AI tool. Adam tragically died by suicide. His parents are now suing OpenAI and Sam Altman, the company’s CEO, based on allegations that the tool contributed to his death.

This tragic story has rightfully prompted a push for tech companies to institute changes and for lawmakers to enact sweeping regulations. While both strategies have some merit, computer code and AI-related laws will not address the underlying issue: our kids need guidance from their parents, educators, and mentors about how and when to use AI.


I don’t have kids. I’m fortunate to be an uncle to two kiddos and to be involved in the lives of my friends’ youngsters. However, I do have first-hand experience with childhood depression and anorexia. Although that was in the pre-social media days and well before the time of GPTs, I’m confident that what saved me then will go a long way toward helping kids today avoid or navigate the negative side effects that can result from excessive use of AI companions.

Kids increasingly have access to AI tools that mirror key human characteristics. The models seemingly listen, empathize, joke, and, at times, bully, coerce, and manipulate. It’s these latter attributes that have led to horrendous and unacceptable outcomes. As AI becomes more commonly available and ever more sophisticated, the ease with which users of all ages may come to rely on AI for sensitive matters will only increase.

Major AI labs are aware of these concerns. Following the tragic loss of Raine, OpenAI announced several changes to its products and processes to more quickly identify and assist users who appear to need additional support. Notably, these interventions come at a cost. Altman made clear that prioritizing teen safety would necessarily involve reduced privacy. The company plans to track user behavior to estimate each user’s age. Users flagged as minors will be subject to various checks on how they use the product, including limits on late-night use, notification of family or emergency services after messages suggestive of imminent self-harm, and restrictions on the responses the model will give to prompts about sexual or self-harm topics.

Legislators, too, are tracking this emerging risk to teen well-being. California is poised to pass AB 1064, a bill imposing manifold requirements on all operators of AI companions. Among other provisions, the bill would direct operators to prioritize factually accurate answers over users’ beliefs or preferences. It would also bar operators from deploying AI companions that carry a foreseeable risk of encouraging troubling behavior, such as disordered eating. These mandates, which sound somewhat feasible and defensible on paper, may have unintended consequences in practice.

Consider, for example, whether operators worried about encouraging disordered eating among teens will ask all users to regularly certify whether they have had concerns about their weight or diet in the last week. These and other invasive questions may shield operators from liability, but they carry a grave risk of worsening a user’s mental health. Speaking from experience, reminders of your condition can often make things much worse, sending you deeper into a cycle of self-doubt.

The upshot is that neither technical solutions nor legal interventions will ultimately be what helps our kids make full use of AI’s numerous benefits while steering clear of its worst traits. It’s time to normalize a new “talk.” Just as parents and trusted mentors have long played a critical role in steering kids through the sensitive topic of sex, they can serve as an important source of information on the responsible use of AI tools.

Kids need to have someone in their lives they can openly share their AI questions with. They need to be able to disclose troubling chats to someone without fear of being shamed or punished. They need a reliable and knowledgeable source of information on how and why AI works. Absent this sort of AI mentorship, we are effectively putting our kids in the driver’s seat of one of the most powerful technological tools ever created without their having taken even a written exam on the rules of the road.

My niece and nephew are still well short of the age at which they’ll need the “AI talk.” If asked to give it, though, I’d be happy to do so. I spend my waking hours researching AI, talking to AI experts, and studying related areas of the law. I’m ready and willing to serve as their AI go-to.

We—educators, legislators, and AI companies—need to help other parents and mentors prepare for a similar conversation. This doesn’t mean training parents to become AI savants, but it does mean helping parents find courses and resources that are accessible and accurate. From basic FAQs that walk parents through the “AI talk” to community events that invite parents to learn about AI, there are tried-and-true strategies to ready parents for this pivotal and ongoing conversation.

Parents surely don’t need another item added to their extensive and burdensome responsibilities, but this is a talk we cannot avoid. The AI labs are steered more by profit than by child well-being. Lawmakers are not well-known for crafting nuanced tech policy. We cannot count exclusively on technical fixes and new laws to tackle the social and cultural ramifications of AI use. This is one of those challenges that can and must involve family and community discourse.

Love, support, and, to be honest, distractions from my parents, my coaches, and friends were the biggest boost to my own recovery. And while we should surely hold AI labs accountable and spur our lawmakers to impose sensible regulations, we should also develop the AI literacy required to help our youngsters learn the pros and cons of AI tools.

Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and the author of the Appleseed AI Substack.

Read More

In Defense of AI Optimism

A case for optimism, risk-taking, and policy experimentation in the age of AI—and why pessimism threatens technological progress.

Society needs people to take risks. Entrepreneurs who bet on themselves create new jobs. Institutions that gamble on new processes find out how best to integrate advances into modern life. Regulators who accept potential backlash by launching policy experiments give us a chance to devise laws that are based on evidence, not fear.

The need for risk-taking is all the more important when society is presented with new technologies. When new tech arrives on the scene, defending the status quo is the easier path: individually, institutionally, and societally. We are all predisposed to think that the calamities, ailments, and flaws we experience today, as bad as they may be, are preferable to the unknowns tied to tomorrow.

Trump Signs Defense Bill Prohibiting China-Based Engineers in Pentagon IT Work

President Donald Trump signed into law this month a measure that prohibits anyone based in China or other adversarial countries from accessing the Pentagon’s cloud computing systems.

The ban, which is tucked inside the $900 billion defense policy law, was enacted in response to a ProPublica investigation this year that exposed how Microsoft used China-based engineers to service the Defense Department’s computer systems for nearly a decade — a practice that left some of the country’s most sensitive data vulnerable to hacking from its leading cyber adversary.

Why Workplace Wellbeing AI Needs a New Ethics of Consent

AI-powered wellness tools promise care at work, but raise serious questions about consent, surveillance, and employee autonomy.

Across the U.S. and globally, employers—including corporations, healthcare systems, universities, and nonprofits—are increasing investment in worker well-being. The global corporate wellness market reached $53.5 billion in sales in 2024, with North America leading adoption. Corporate wellness programs now use AI to monitor stress, track burnout risk, or recommend personalized interventions.

Vendors offering AI-enabled well-being platforms, chatbots, and stress-tracking tools are rapidly expanding. Chatbots such as Woebot and Wysa are increasingly integrated into workplace wellness programs.

Meta Undermining Trust but Verify through Paid Links

Facebook is testing limits on shared external links, which would become a paid feature through its Meta Verified program, priced at $14.99 per month.

This change confirms that verification badges are now meaningless signifiers. Yet it wasn’t always so; the verified internet was built to support participation and trust. Beginning with Twitter’s verification program, launched in 2009, a checkmark next to a username indicated that an account had been verified to represent a notable person or the official account of a business. We could believe that an elected official or a brand was who they said they were online. When Twitter Blue, and later X Premium, began offering paid blue checkmarks in November 2022, the visual marker of verification became deceptive. Think of the fake Eli Lilly accounts posting about free insulin and the impersonation accounts of Elon Musk himself.

This week’s move by Meta echoes those changes at Twitter/X, despite significant evidence that they left information quality and user experience worse than before. Whatever Facebook says, all a checkmark now tells anyone is that you paid.
