
The AI irony around Claudine Gay's resignation from Harvard

Opinion


Claudine Gay (left) testified before Congress on Dec. 5, 2023.

Burton is a history professor and director of the Humanities Research Institute at the University of Illinois Urbana-Champaign. She is a Public Voices fellow with The OpEd Project.

When the history of Claudine Gay’s six-month tenure as Harvard’s president is written, there will be a lot of copy devoted to the short time between her appearance before Congress and her resignation from the highest office at one of the most prestigious and powerful institutions of higher education.

Two narratives will likely dominate.

One will be the highly orchestrated campaign by the right – outlined in clinical, triumphant detail by conservative activist Chris Rufo – to mobilize its media and communications machine to stalk Gay and link her resignation to accusations of plagiarism.

The other will be the response of liberal pundits and academics who saw in Gay’s fall a familiar pattern of pitting diversity against both excellence and merit, especially in the case of Black women, whose successes seem to demand that they be put back in their place.


Historians will read those two narratives as emblematic of the polarization of the 2020s, and of the way the political culture wars played out on the battleground of higher education.

There must, of course, be a reckoning with the role that the Oct. 7, 2023, attack by Hamas on Israel and the killing of tens of thousands of Palestinians in the war on Gaza played in bringing Gay to book. And the congressional hearings will be called what they were: a show trial carried on with the kind of vengeance characteristic of mid-20th century totalitarian regimes.

Who knows, there may even be an epilogue that tracks the relationship of Gay’s downfall to the results of the 2024 presidential election.

But because the archive available to write this history is not limited to the war of words on the right and the left, the story historians tell will hang on the most stunning, and most underplayed, takeaway of all.

And no, it’s not that Melania Trump plagiarized from Michelle Obama’s speech.

It’s the fact that in the middle of a news cycle in which the media could not stop talking about the rise of ChatGPT, with its potential for deep fakery and misinformation and plagiarism of the highest order, what felled Harvard’s first Black woman president were allegations of failing to properly attribute quotes in the corpus of her published research.

Yes. In an age when the combination of muted panic and principled critique of ChatGPT across all levels of the U.S. education system meets with the kind of scorn — or patronizing reassurance — that only a multibillion-dollar industry hellbent on financializing artificial intelligence beyond anything seen in the history of capitalism could mobilize, what brought a university president to her knees were accusations that she relied too heavily on the words of others, such that the “truth” of her work was in question.

Falsehoods about everything from election results to disinfectant as a cure for Covid-19 are standard fare on the far right. The irony is that those on the right banked on the assumption that the biggest disgrace a Harvard professor could face is an accusation of plagiarism.

Chroniclers of this moment will not fail to note the irony that we were living in the surround-sound of ChatGPT, which will surely go down as the biggest cheating engine in history. Students are using it for everything from correcting their grammar to cutting and pasting AI-generated text outright and calling it their own. There is a genuine crisis in higher education over the ethics of these practices and over what plagiarism means now.

There’s no defense of plagiarism regardless of who practices it. And if, as the New York Post reports, Harvard tried to suppress its own failed investigation of Gay’s research, that’s a serious breach of ethics.

Meanwhile, historians, who look beyond the immediacy of an event in order to understand its wider significance, will call attention to the elephant in the room in 2024: the potentially dangerous impact of AI tools like ChatGPT on our democracy. While AI can assist in investigative reporting, it can also be abused to mislead voters, impersonate candidates and undermine trust in elections. This is the wider significance of the Gay investigation.

And worse: ChatGPT got praise for its ability to self-correct — to mask the inauthenticity of its words more and more successfully — in every story covering the wonder, and the inevitability, of AI.

There’s no collusion here. But it’s a mighty perverse coincidence hiding in plain sight.

So when the history of our time is written, be sure to look for the story of how AI’s capacity for monetizing plagiarism ramped up as Claudine Gay’s career imploded. It will be in a chapter called “Theater of the Absurd.”

Unless, of course, it’s written out of the history books by ChatGPT itself.
