Opinion

Humanoid Educators Will Widen Inequality—And Only Tech Overlords Will Benefit
Photo by Nahrizul Kadri on Unsplash

In March, First Lady Melania Trump hosted the Fostering the Future Together Global Coalition Summit at the White House, where she introduced Plato, an AI-powered humanoid robot marketed as a replacement for teachers that could homeschool children. A humanoid educator that speaks multiple languages, is always available, and draws on a vast store of information could expand access in meaningful ways. But the evidence suggests that the risks outweigh the benefits, that adoption will be uneven, and that the families most likely to adopt Plato will bear those risks disproportionately.

Research on excessive technology use in childhood has produced consistent findings. Young children and teenagers who spend too much time with screens are more likely to experience reduced physical activity, shorter attention spans, depression, and social anxiety. On the same day that Melania Trump introduced Plato, a California jury ruled that Meta and YouTube contributed to anxiety and depression in a woman who began using social media at age 6, a reminder that the consequences of under-tested technology on children can be severe and long-lasting.

The concern with technology use goes beyond screen time. A 2024 study by researchers at MIT's Media Lab found that reliance on generative AI chatbots reduces learning retention and critical thinking, the very outcomes Melania Trump claimed Plato would foster. Child development research has shown that for children to develop resilience and independent reasoning, they need to experience difficulty, struggle, and sometimes fail. A robot that is, by design, always patient and always available short-circuits this learning process. Far from producing "deep critical thinking," an educator that never challenges a child to sit with confusion is likely to produce the opposite.

Traditional schooling provides something no information-rich robot can replicate. It offers structured opportunities to practice teamwork, communication, and conflict resolution alongside peers and imperfect adults—skills that are crucial to becoming a productive adult.

The broader impact of introducing humanoid educators is even more alarming. A shift to humanoid teachers will entrench existing educational inequalities in the U.S. The exploding costs of childcare and K-12 education already strain families across the country. For middle- and low-income families searching for solutions, a humanoid educator backed by the First Lady may seem like a viable path forward. In reality, these families and their children would unknowingly become unpaid beta testers for an unproven product, one that the developers themselves may not use on their own kids. Several tech executives, including Mark Zuckerberg, Bill Gates, Steve Jobs, and Evan Spiegel, have spoken openly about limiting their children's screen time. Even Elon Musk, who admitted he set no such limits, has since said he regrets it, noting that algorithms had reshaped how his children think.

The resulting landscape would look like this: a competent human teacher becomes a luxury service accessible only to the ultra-rich; middle-class families turn to humanoid robots for some semblance of pedagogy; and low-income families continue to face the compounding disadvantages of under-resourced schools. The gap widens, and it just wears a new face.

Supporters of humanoid educators may argue that they are tackling a shortage of qualified teachers. But this framing misses the actual problem. The United States does not have a shortage of qualified teachers; it has a shortage of funding for them. Redirecting resources toward humanoid robots does not solve that problem; it deepens it. Every dollar invested in Plato is a dollar not spent on teacher salaries, classroom resources, or the school infrastructure that students in under-resourced communities desperately need. Recasting a funding failure as a supply problem, and then selling a technology product as the fix, obscures who benefits and who pays the price.

There is also a safety dimension that cannot be glossed over. There have been documented instances of generative AI chatbots encouraging teenagers to self-harm. Deploying an untested humanoid robot, one that does not understand ethics, context, or the emotional vulnerability of a child, as a primary educator introduces serious and poorly understood risks. Before Plato or any similar product is rolled out at scale, those risks need to be studied rigorously, not discovered after the fact. Children deserve better than to be the experiment.

Congress should mandate multi-stakeholder research into humanoid educators, funded by the AI companies developing these products and conducted in genuine collaboration with K-12 educators, higher education researchers, child development specialists, and the families who would be most affected. The research should assess impacts on learning retention, critical thinking, social development, and child safety, with findings made public before any large-scale deployment.

Dr. Erezi Ogbo-Gebhardt is an assistant professor of information science at North Carolina Central University’s School of Library and Information Sciences, and a Public Voices Fellow on technology in the public interest with The OpEd Project.

