The New Talk: The Need To Discuss AI With Kids

Opinion

A child looking at a cellphone at night. (Getty Images, Elva Etienne)

AI is changing childhood. Kevin Frazier explains why it's critical for parents and mentors to start having the “AI talk” and teach kids safe, responsible AI use.

“[I]t is a massively more powerful and scary thing than I knew about.” That’s how Adam Raine’s dad characterized ChatGPT when he reviewed his son’s conversations with the AI tool. Adam tragically died by suicide. His parents are now suing OpenAI and Sam Altman, the company’s CEO, based on allegations that the tool contributed to his death.

This story has rightfully prompted calls for tech companies to change their products and for lawmakers to enact sweeping regulations. While both strategies have some merit, computer code and AI-related laws will not address the underlying issue: our kids need guidance from their parents, educators, and mentors about how and when to use AI.


I don’t have kids. I’m fortunate to be an uncle to two kiddos and to be involved in the lives of my friends’ youngsters. However, I do have first-hand experience with childhood depression and anorexia. Although that was in the pre-social media days and well before the time of GPTs, I’m confident that what saved me then will go a long way toward helping kids today avoid or navigate the negative side effects that can result from excessive use of AI companions.

Kids increasingly have access to AI tools that mirror key human characteristics. The models seemingly listen, empathize, joke, and, at times, bully, coerce, and manipulate. It’s these latter attributes that have led to horrendous and unacceptable outcomes. As AI becomes more commonly available and ever more sophisticated, the ease with which users of all ages may come to rely on AI for sensitive matters will only increase.

Major AI labs are aware of these concerns. Following Raine’s death, OpenAI announced several changes to its products and processes designed to more quickly identify and support users who appear to be in distress. Notably, these interventions come at a cost. Altman made clear that prioritizing teen safety would necessarily mean reduced privacy. The company plans to track user behavior to estimate each user’s age. Users flagged as minors will face various checks on how they use the product, including limits on late-night use, notification of family or emergency services after messages suggesting imminent self-harm, and restrictions on the responses the model will give to prompts about sexual or self-harm topics.

Legislators, too, are tracking this emerging risk to teen well-being. California is poised to pass AB 1064, a bill imposing manifold requirements on operators of AI companions. Among other provisions, the bill would direct operators to prioritize factually accurate answers over users’ beliefs or preferences. It would also bar operators from deploying AI companions that pose a foreseeable risk of encouraging troubling behavior, such as disordered eating. These mandates, which sound feasible and defensible on paper, may have unintended consequences in practice.

Consider, for example, whether operators worried about encouraging disordered eating among teens will ask all users to regularly certify whether they have had concerns about their weight or diet in the past week. These and other invasive questions may shield operators from liability, but they carry a grave risk of worsening a user’s mental health. Speaking from experience, reminders of your condition can often make things much worse, sending you further down a cycle of self-doubt.

The upshot is that neither technical solutions nor legal interventions will ultimately be what helps our kids reap the numerous benefits of AI while steering clear of its worst traits. It’s time to normalize a new “talk.” Just as parents and trusted mentors have long played a critical role in guiding kids through the sensitive topic of sex, they can serve as an important source of information on the responsible use of AI tools.

Kids need someone in their lives they can openly share their AI questions with. They need to be able to disclose troubling chats without fear of being shamed or punished. They need a reliable and knowledgeable source of information on how and why AI works. Absent this sort of AI mentorship, we are effectively putting our kids in the driver’s seat of the most powerful technological tool available before they have so much as taken a written exam on the rules of the road.

My niece and nephew are still well short of the age at which they’ll need the “AI talk.” When that time comes, I’d be happy to give it. I spend my waking hours researching AI, talking to AI experts, and studying related areas of the law. I’m ready and willing to serve as their AI go-to.

We—educators, legislators, and AI companies—need to help parents and mentors prepare for a similar conversation. This doesn’t mean training parents to become AI savants, but it does mean helping them find courses and resources that are accessible and accurate. From basic FAQs that walk parents through the “AI talk” to community events that invite parents to learn about AI, there are tried-and-true strategies for readying parents for this pivotal and ongoing conversation.

Parents surely don’t need another thing added to their extensive and burdensome responsibilities, but this is a talk we cannot avoid. The AI labs are steered more by profit than child well-being. Lawmakers are not well-known for crafting nuanced tech policy. We cannot count exclusively on tech fixes and new laws to tackle the social and cultural ramifications of AI use. This is one of those things that can and must involve family and community discourse.

Love, support, and, to be honest, distractions from my parents, my coaches, and friends were the biggest boost to my own recovery. And while we should surely hold AI labs accountable and spur our lawmakers to impose sensible regulations, we should also develop the AI literacy required to help our youngsters learn the pros and cons of AI tools.

Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and author of the Appleseed AI Substack.
