The New Talk: The Need To Discuss AI With Kids

Opinion


AI is changing childhood. Kevin Frazier explains why it's critical for parents and mentors to start having the “AI talk” and teach kids safe, responsible AI use.

Getty Images, Elva Etienne

“[I]t is a massively more powerful and scary thing than I knew about.” That’s how Adam Raine’s dad characterized ChatGPT when he reviewed his son’s conversations with the AI tool. Adam tragically died by suicide. His parents are now suing OpenAI and Sam Altman, the company’s CEO, based on allegations that the tool contributed to his death.

His story has rightly prompted a push for tech companies to make changes and for lawmakers to enact sweeping regulations. While both of those strategies have some merit, computer code and AI-related laws will not address the underlying issue: our kids need guidance from their parents, educators, and mentors about how and when to use AI.

I don’t have kids. I’m fortunate to be an uncle to two kiddos and to be involved in the lives of my friends’ youngsters. However, I do have first-hand experience with childhood depression and anorexia. Although that was in the pre-social media days and well before the time of GPTs, I’m confident that what saved me then will go a long way toward helping kids today avoid or navigate the negative side effects that can result from excessive use of AI companions.

Kids increasingly have access to AI tools that mirror key human characteristics. The models seemingly listen, empathize, joke, and, at times, bully, coerce, and manipulate. It’s these latter attributes that have led to horrendous and unacceptable outcomes. As AI becomes more commonly available and ever more sophisticated, the ease with which users of all ages may come to rely on AI for sensitive matters will only increase.

Major AI labs are aware of these concerns. Following the tragic loss of Raine, OpenAI announced several changes to its products and processes to more quickly identify and assist users who appear to need additional support. Notably, these interventions come with a cost. Altman made clear that prioritizing teen safety would necessarily involve reduced privacy. The company plans to track user behavior to estimate each user's age. Users flagged as minors will be subject to various checks on how they use the product, including restrictions on late-night use, notification of family or emergency services after messages suggesting imminent self-harm, and limits on the responses the model will give to prompts about sexual or self-harm topics.

Legislators, too, are tracking this emerging risk to teen well-being. California is poised to pass AB 1064, a bill imposing manifold requirements on operators of AI companions. Among other provisions, the bill would direct operators to prioritize factually accurate answers to prompts over users' beliefs or preferences. It would also bar operators from deploying AI companions that carry a foreseeable risk of encouraging troubling behavior, such as disordered eating. These mandates, which sound feasible and defensible on paper, may have unintended consequences in practice.

Consider, for example, whether operators worried about encouraging disordered eating among teens will ask all users to regularly certify whether they have had concerns about their weight or diet in the past week. These and other invasive questions may shield operators from liability, but they carry a grave risk of undermining a user's mental well-being. Speaking from experience, reminders of your condition can often make things much worse, sending you further down a cycle of self-doubt.

The upshot is that technical solutions and legal interventions will not ultimately be what help our kids make full use of AI's numerous benefits while steering clear of its worst traits. It's time to normalize a new "talk." Just as parents and trusted mentors have long played a critical role in steering kids through the sensitive topic of sex, they can serve as an important source of information on the responsible use of AI tools.

Kids need someone in their lives with whom they can openly share their AI questions. They need to be able to disclose troubling chats without fear of being shamed or punished. They need a reliable and knowledgeable source of information on how and why AI works. Absent this sort of AI mentorship, we are effectively putting our kids in the driver's seat of one of the most powerful technological tools ever built without so much as a written exam on the rules of the road.

My niece and nephew are still well short of needing the "AI talk." If asked to give it, though, I'd be happy to do so. I spend my waking hours researching AI, talking to AI experts, and studying related areas of the law. I'm ready and willing to serve as their AI go-to.

Educators, legislators, and AI companies all need to help parents and mentors prepare for a similar conversation. This doesn't mean training parents to become AI savants, but it does mean helping them find courses and resources that are accessible and accurate. From basic FAQs that walk parents through the "AI talk" to community events that invite them to come learn about AI, there are tried-and-true strategies to ready parents for this pivotal and ongoing conversation.

Parents surely don't need another thing added to their extensive and burdensome responsibilities, but this is a talk we cannot avoid. The AI labs are steered more by profit than by child well-being. Lawmakers are not known for crafting nuanced tech policy. We cannot count exclusively on tech fixes and new laws to tackle the social and cultural ramifications of AI use. This is one of those challenges that can and must involve family and community discourse.

Love, support, and, to be honest, distractions from my parents, my coaches, and friends were the biggest boost to my own recovery. And while we should surely hold AI labs accountable and spur our lawmakers to impose sensible regulations, we should also develop the AI literacy required to help our youngsters learn the pros and cons of AI tools.

Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and author of the Appleseed AI Substack.
