I put my 100% original paper into an AI checker, only to be told that 38% of it was AI-generated. My heart raced, but I knew I hadn’t used AI at all on this assignment. How could I prove that? I quickly began swapping words for simpler synonyms to make the writing sound less “academic.” After getting the score down to 19%, I stopped. I was stripping away the quality of my writing to avoid triggering AI detectors, when I hadn’t done anything wrong on the assignment.
At my school, Riverdale Country School, one of the most prestigious private schools in New York State, the dean has made the rule clear: “If we catch you using artificial intelligence to enhance your writing, you will immediately be sent to the judicial committee.” If the goal was to terrify the student body, it worked.
The culture of fear around AI was pushing me to lower the quality of my work, because at so many schools AI is seen only as a tool for cheating, never as one for learning. That’s a backwards way to approach a technology poised to change our lives.
Especially at Riverdale, we have the resources, technology, and qualified teachers to teach students about AI. Yet the school still largely avoids discussing AI, out of fear that students will use it to cheat.
The result is behavior like mine: most students run work that is the product of their own labor and intelligence through AI checkers, fearing the threatened consequences for anyone suspected of using AI, even when they did all the work themselves. It’s a perverse form of thought policing that is thwarting education.
And my situation is far from unique. Across the country, students report self-censoring their own vocabulary, avoiding sophisticated sentence structures, and deliberately writing below their ability level—not to cheat, but to avoid being accused of it. A tool that was supposed to catch dishonesty is instead punishing students for writing too well.
The chilling effect extends beyond individual assignments. Many students now avoid using AI for any purpose, including entirely legitimate ones, like asking it to explain a confusing math concept or brainstorm ideas before writing a first draft, because their schools have never clearly drawn the line between “cheating” and “learning.” Without guidance, students default to avoidance. They’re left to navigate a powerful technology entirely on their own, with no framework for using it responsibly.
Teachers, too, are caught in the crossfire. Some educators who want to thoughtfully incorporate AI into their classrooms have been quietly discouraged by administrators who are worried about setting a permissive precedent. A history teacher who asks students to critically evaluate an AI-generated essay, or a science teacher who has students use AI to model data, risks being seen as enabling cheating rather than modeling responsible use. The institutional fear doesn't just silence students—it silences the adults who might otherwise be their guides.
Some schools get it. For $65,000 in annual tuition, New York’s Alpha School offers an educational model built entirely around AI. Alpha opened last September, and its students use personalized AI tutors to complete all their academics in a two-hour block during the morning. The program reports that its students “grow academically more than twice as fast as the national average.”
Some more traditional schools are easing into AI. Phillips Exeter Academy, a private boarding school in New Hampshire, hosts alumni panels on the benefits and risks of AI. It also offers its faculty a course on AI literacy.
Meanwhile, kids at public schools spend six hours a day in traditional classrooms that block access to the same AI-powered tools that kids at private schools like Alpha and Exeter capitalize on to strengthen their learning. School districts in New York City and Los Angeles initially blocked AI on school networks out of concern for cheating and safety. Some districts have reversed these bans, but confusion still reigns.
A study by the EdWeek Research Center reports that, as of February 2024, 79% of public school educators said their districts lacked clear policies on AI, leaving their schools in a gray area. Amid the inconsistency, one rule is clear and consistent: one in five educators work in districts that prohibit students from using generative AI tools like ChatGPT.
The message to students is unmistakable: AI is something to avoid, not something to understand or master, even as mastering it has become a critical skill for collegiate and career success.
The growing divide between private schools embracing AI and public schools struggling to adapt threatens to widen the very inequities our education system should be working to fix.
This gap is a vestige of longstanding inequities in technology education. Nationwide, 60% of public high schools offered computer science classes during the 2024-25 school year, yet only 6.1% of students enrolled in them. The gap deepens along racial lines: as of 2021, just 34% of schools serving predominantly Black, Indigenous, Latino, and Pacific Islander students offered computer science courses, compared with 52% of schools serving mostly white and Asian students. If schools lack basic computer science education, they don’t stand a chance of fostering AI literacy.
Meanwhile, students at schools like Exeter and Alpha use AI daily. This divide is not just about AI; it’s about which students will be equipped for a workforce in which AI literacy is rapidly becoming as essential as critical thinking or leadership. Doctors use AI for diagnoses, lawyers for legal research, and teachers to create lesson plans.
Students who were taught to fear AI, or who never got the chance to learn about it, will enter college and the workforce already behind, lacking skills that employers increasingly expect as a baseline.
Some argue that AI evolves so quickly that it cannot be properly taught before it changes again. Perhaps, but lessons should go beyond prompt phrasing or the ethics of deepfake videos. The point of embracing AI is to teach students how to adapt as the technology advances. Educators agree that teaching history is essential, even though history keeps unfolding, because it builds critical thinking and contextual awareness. The same goes for AI.
AI offers more than an opportunity for deception and corner-cutting, but too many schools refuse to acknowledge that. It’s a shame I had to learn that lesson on my own, fretting about a possible false allegation of cheating on a paper and dumbing down my own work to avoid one.
Surya Das is a sophomore at Riverdale Country School in New York City with a passion for computer science.