In March, First Lady Melania Trump hosted the Fostering the Future Together Global Coalition Summit at the White House, where she introduced Plato, an AI-powered humanoid robot marketed as an educator that could replace teachers and homeschool children. A humanoid educator that speaks multiple languages, is always available, and draws on a vast store of information could expand access to education in meaningful ways. But the evidence suggests that the risks outweigh the benefits, that adoption will be uneven, and that the families most likely to adopt Plato will bear those risks disproportionately.
Research on excessive technology use in childhood points in a consistent direction: young children and teenagers who spend too much time with screens are more likely to experience reduced physical activity, shorter attention spans, depression, and social anxiety. On the same day that Melania Trump introduced Plato, a California jury found that Meta and YouTube contributed to anxiety and depression in a woman who began using social media at age 6, a reminder that the consequences of deploying under-tested technology on children can be severe and long-lasting.
The concern with technology use goes beyond screen time. A 2024 study by researchers at MIT's Media Lab found that reliance on generative AI chatbots reduces learning retention and critical thinking, the very outcomes Melania Trump claimed Plato would foster. Child development research has shown that for children to develop resilience and independent reasoning, they need to experience difficulty, struggle, and sometimes fail. A robot that is, by design, always patient and always available interrupts this learning process. Far from producing "deep critical thinking," an educator who never challenges a child to sit with confusion is likely to produce the opposite.
Traditional schooling provides something no information-rich robot can replicate. It offers structured opportunities to practice teamwork, communication, and conflict resolution alongside peers and imperfect adults—skills that are crucial to becoming a productive adult.
The broader impact of introducing humanoid educators is even more alarming. A shift to humanoid teachers would entrench and deepen existing educational inequalities in the U.S. The exploding costs of childcare and K-12 education already strain families across the country. For middle- and low-income families searching for solutions, a humanoid educator backed by the First Lady may seem like a viable path forward. In reality, these families and their children would unknowingly become unpaid beta testers for an unproven product, one that the developers themselves may not use on their own kids. Several tech leaders, including Mark Zuckerberg, Bill Gates, Steve Jobs, and Evan Spiegel, have spoken openly about limiting their children's screen time. Even Elon Musk, who admitted he set no such limits, has since said he regrets it, noting that algorithms had reshaped how his children think.
The resulting landscape would look like this: a competent human teacher becomes a luxury service accessible only to the ultra-rich; middle-class families turn to humanoid robots for some semblance of pedagogy; and low-income families continue to face the compounding disadvantages of under-resourced schools. The gap widens, and it just wears a new face.
Supporters of humanoid educators may argue that such robots address a shortage of qualified teachers. But this framing misses the actual problem. The United States does not have a shortage of qualified teachers; it has a shortage of funding for them. Redirecting resources toward humanoid robots does not solve that problem; it deepens it. Every dollar invested in Plato is a dollar not spent on teacher salaries, classroom resources, or the school infrastructure that students in under-resourced communities desperately need. Framing a funding failure as a supply problem and then selling a technology product as the fix masks who benefits and who pays the price.
There is also a safety dimension that cannot be glossed over. There have been documented instances of generative AI chatbots encouraging teenagers to self-harm. Deploying an untested humanoid robot, one that does not understand ethics, context, or the emotional vulnerability of a child, as a primary educator introduces serious and poorly understood risks. Before Plato or any similar product is rolled out at scale, those risks need to be studied rigorously, not discovered after the fact. Children deserve better than to be the experiment.
Congress should mandate multi-stakeholder research into humanoid educators, funded by the AI companies developing these products and conducted in genuine collaboration with K-12 educators, higher education researchers, child development specialists, and the families who would be most affected. The research should assess impacts on learning retention, critical thinking, social development, and child safety, with findings made public before any large-scale deployment.
Dr. Erezi Ogbo-Gebhardt is an assistant professor of information science at North Carolina Central University’s School of Library and Information Sciences, and a Public Voices Fellow on technology in the public interest with The OpEd Project.