My School’s So Worried I Will Cheat With AI It Isn’t Teaching

Opinion


A student’s firsthand account reveals how AI detection tools are creating fear, false accusations, and self-censorship in schools. This piece explores flawed AI checkers, unclear education policies, and the growing divide in AI literacy between private and public schools.

Getty Images, Maskot

I put my 100% original paper into an AI checker, only to see it flagged as 38% AI-generated. My heart raced, but my brain reasoned that I had not used AI at all on this assignment. How would I prove that? I quickly began swapping words for less intelligent synonyms to make the writing sound less “academic.” After getting the percentage down to 19, I stopped. I was stripping away the quality of my writing to avoid triggering AI detectors, when I hadn’t done anything wrong in the first place.

The rule at my school—Riverdale Country School, one of the most prestigious private schools in New York State—has been made clear by the dean. He said: “If we catch you using artificial intelligence to enhance your writing, you will immediately be sent to the judicial committee.” If he wanted to terrify the student body, it worked.


The culture of fear around AI was forcing me to lower my performance because, at so many schools, AI is seen only as a tool for cheating, never as one for learning. That’s a backwards way to approach a technology poised to change our lives.

Especially at Riverdale, we have the resources, technology, and qualified teachers to teach students about AI. Yet the school still largely avoids discussing AI, out of fear that students will use it to cheat.

The result is what I did: most students run work—products of their own labor and intelligence—through AI checkers, fearing the threatened consequences for anyone suspected of using it, even when they did all the work. It’s a perverse form of thought policing that is thwarting education.

And my situation is far from unique. Across the country, students report self-censoring their own vocabulary, avoiding sophisticated sentence structures, and deliberately writing below their ability level—not to cheat, but to avoid being accused of it. A tool that was supposed to catch dishonesty is instead punishing students for writing too well.

The chilling effect extends beyond individual assignments. Many students now avoid using AI for any purpose—including entirely legitimate ones, like asking it to explain a confusing math concept or brainstorm ideas before writing a first draft—because the line between "cheating" and "learning" has never been clearly drawn by their schools. Without guidance, students default to avoidance. They're left to navigate a powerful technology entirely on their own, with no framework for using it responsibly.

Teachers, too, are caught in the crossfire. Some educators who want to thoughtfully incorporate AI into their classrooms have been quietly discouraged by administrators who are worried about setting a permissive precedent. A history teacher who asks students to critically evaluate an AI-generated essay, or a science teacher who has students use AI to model data, risks being seen as enabling cheating rather than modeling responsible use. The institutional fear doesn't just silence students—it silences the adults who might otherwise be their guides.

Some schools get it. For $65,000 in annual tuition, New York’s Alpha School offers an educational model built entirely around AI. Alpha opened last September, and its students use personalized AI tutors to complete all their academics in a two-hour block each morning. The program reports that its students “grow academically more than twice as fast as the national average.”

Some more traditional schools are easing into AI. Phillips Exeter Academy, a private boarding school in New Hampshire, hosts alumni panels on the benefits and risks of AI and offers its faculty a course on AI literacy.

Meanwhile, kids at public schools spend six hours a day in traditional classrooms that block access to the same AI-powered tools that kids at private schools like Alpha and Exeter capitalize on to strengthen their learning. School districts in New York City and Los Angeles initially blocked AI on school networks out of concern for cheating and safety. Some districts have reversed these bans, but confusion still reigns.

A study by the EdWeek Research Center reports that, as of February 2024, 79% of public school educators said their districts lacked clear policies on AI, leaving their schools in a gray area. Amid these inconsistencies, one rule is clear: one in five educators work in districts that prohibit students from using generative AI tools like ChatGPT.

The message to students is clear: AI is something to avoid, not something to understand or master, even as mastering it becomes critical to collegiate and career success.

The growing divide between private schools embracing AI and public schools struggling to adapt threatens to widen the very inequities our education system should be working to fix.

This divide is a vestige of longstanding inequities in teaching students about technology. Nationwide, 60% of public high schools offered computer science classes during the 2024-25 school year, yet only 6.1% of students enrolled in them. The gap deepens along racial lines: as of 2021, just 34% of schools serving predominantly Black, Indigenous, Latino, and Pacific Islander students offered computer science courses, compared to 52% of schools serving mostly white and Asian students. If schools lack basic computer science education, they do not stand a chance of fostering AI literacy.

Meanwhile, students at schools like Exeter and Alpha use AI daily. This divide is not just about AI; it's about which students will be equipped for a workforce in which AI literacy is rapidly becoming as essential as critical thinking or leadership. Doctors use AI for diagnoses, lawyers for legal research, and teachers for lesson plans.

By contrast, students who were taught to fear AI, or who never got the chance to learn about it, will enter college and the workforce already behind, lacking skills that employers increasingly expect as a baseline.

Some argue that AI evolves so quickly that it cannot be properly taught before it changes again. Perhaps. But lessons should go beyond the phrasing of prompts or the ethics of deepfake videos; the point of embracing AI is to teach students how to adapt as the technology advances. Educators agree that teaching history is essential, even though it keeps developing, because it builds critical thinking and context awareness. The same goes for AI.

AI offers far more than an opportunity for deception and corner-cutting, but too many schools refuse to acknowledge that. It’s a shame I had to learn that lesson on my own, fretting over a possible false accusation of cheating on a paper and dumbing down my own work to avoid it.


Surya Das is a sophomore at Riverdale Country School in New York City with a passion for computer science.

