The AI irony around Claudine Gay's resignation from Harvard

Claudine Gay (left) and other university presidents testified before Congress on Dec. 5, 2023.

Burton is a history professor and director of the Humanities Research Institute at the University of Illinois, Urbana-Champaign. She is a public voices fellow with The OpEd Project.

When the history of Claudine Gay’s six-month tenure as Harvard’s president is written, there will be a lot of copy devoted to the short time between her appearance before Congress and her resignation from the highest office at one of the most prestigious and powerful institutions of higher education.

Two narratives will likely dominate.

One will be the orchestrated campaign – outlined in clinical, triumphant detail by conservative activist Chris Rufo – by the right, which mobilized its coordinated media and communications machine to stalk Gay and link her resignation to accusations of plagiarism.

The other will be the response of liberal pundits and academics who saw in Gay’s fall a familiar pattern of pitting diversity against both excellence and merit, especially in the case of Black women whose successes must mean they have to be put back in their place.


Historians will read those two narratives as emblematic of the polarization of the 2020s, and of the way the political culture wars played out on the battleground of higher education.


There must, of course, be a reckoning with the role that the Oct. 7, 2023, attack by Hamas on Israel and the killing of tens of thousands of Palestinians in the war on Gaza played in bringing Gay to book. And the congressional hearings will be called what they were: a show trial carried on with the kind of vengeance characteristic of mid-20th century totalitarian regimes.

Who knows, there may even be an epilogue that tracks the relationship of Gay’s downfall to the results of the 2024 presidential election.

But because the archive available to write this history is not limited to the war of words on the right and the left, the story they tell will hang on the most stunning, and underplayed, takeaway of all.

And no, it’s not that Melania Trump plagiarized from Michelle Obama’s speech.

It’s the fact that in the middle of a news cycle in which the media could not stop talking about the rise of ChatGPT, with its potential for deep fakery and misinformation and plagiarism of the highest order, what felled Harvard’s first Black woman president were allegations of failing to properly attribute quotes in the corpus of her published research.

Yes. In an age when the combination of muted panic and principled critique of ChatGPT across all levels of the U.S. education system meets with the kind of scorn — or patronizing reassurance — that only a multibillion-dollar industry hellbent on financializing artificial intelligence beyond anything seen in the history of capitalism could mobilize, what brought a university president to her knees were accusations that she relied too heavily on the words of others, such that the “truth” of her work was in question.

Falsifying everything from election claims to the validity of disinfectant as a cure for Covid-19 is standard fare on the far right. The irony is that those on the right banked on the assumption that the biggest disgrace a Harvard professor could face is an accusation of plagiarism.

Chroniclers of this moment will not fail to note the irony that we were living in the surround-sound of ChatGPT, which will surely go down as the biggest cheating engine in history. Students are using it to do everything from correcting their grammar to outright cutting and pasting text generated by AI and calling it their own. There is a genuine crisis in higher education around the ethics of these practices and about what plagiarism means now.

There’s no defense of plagiarism regardless of who practices it. And if, as the New York Post reports, Harvard tried to suppress its own failed investigation of Gay’s research, that’s a serious breach of ethics.

Meanwhile historians, who look beyond the immediacy of an event in order to understand its wider significance, will call attention to the elephant in the room in 2024: the potentially dangerous impact of AI, ChatGPT and others like it on our democracy. While AI can assist in investigative reporting, it can also be abused to mislead voters, impersonate candidates and undermine trust in elections. This is the wider significance of the Gay investigation.

And worse: ChatGPT got praise for its ability to self-correct — to mask the inauthenticity of its words more and more successfully — in every story that covered the wonder, and the inevitability, of AI.

There’s no collusion here. But it’s a mighty perverse coincidence hiding in plain sight.

So when the history of our time is written, be sure to look for the story of how AI’s capacity for monetizing plagiarism ramped up as Claudine Gay’s career imploded. It will be in a chapter called “Theater of the Absurd.”

Unless, of course, it’s written out of the history books by ChatGPT itself.

Read More


Spectators look at Tesla's Core Technology Optimus humanoid robot at a conference in Shanghai, China, in September.

CFOTO/Future Publishing via Getty Images

Rainy day fund would help people who lose their jobs thanks to AI

Frazier is an assistant professor at the Crump College of Law at St. Thomas University and a Tarbell fellow.

Artificial intelligence will eliminate jobs.

Companies may not need as many workers as AI increases productivity. Other workers may simply be swapped out for automated systems. Call it what you want — displacement, replacement or elimination — but the outcome is the same: stagnant, struggling communities. The open question is whether we will learn from past mistakes. Will we proactively take steps to support the communities most likely to bear the cost of “innovation”?

Doctor using AI technology
Akarapong Chairean/Getty Images

What's next for the consumer revolution in health care?

Pearl, the author of “ChatGPT, MD,” teaches at both the Stanford University School of Medicine and the Stanford Graduate School of Business. He is a former CEO of The Permanente Medical Group.

For years, patients have wondered why health care can’t be as seamless as other services in their lives. They can book flights or shop for groceries with a few clicks, yet they still need to take time off work and drive to the doctor’s office for routine care.

Two advances are now changing this outdated model and ushering in a new era of health care consumerism. With at-home diagnostics and generative artificial intelligence, patients are beginning to take charge of their health in ways previously unimaginable.

Close-up of boy looking at his phone in the dark
Anastasiia Sienotova/Getty Images

Reality bytes: Kids confuse the real world with the screen world

Patel is an executive producer/director, the creator of “ConnectEffect” and a Builders movement partner.

Doesn’t it feel like summer break just began? Yet here we are again. Fall’s arrival means kids have settled into a new school year with new teachers, new clothes and a new “attitude,” as parents and kids alike try to start the year on the right foot.

Yet it’s hard for any of us to find footing in an increasingly polarized and isolated world. The entire nation is grappling with a rising tide of mental health concerns — including the continually increasing alienation and loneliness in children — and parents are struggling to foster real human connection for their kids in the real world. The battle to minimize screen time is certainly one approach. But in a world that is based on screens, apps and social media, is it a battle that realistically can be won?


Our stock market pivots on the performance of a handful of AI-focused companies like Nvidia.

hapabapa/Getty Images

We may face another 'too big to fail' scenario as AI labs go unchecked

Frazier is an assistant professor at the Crump College of Law at St. Thomas University and a Tarbell fellow.

In the span of two or so years, OpenAI, Nvidia and a handful of other companies essential to the development of artificial intelligence have become economic behemoths. Their valuations and stock prices have soared. Their products have become essential to Fortune 500 companies. Their business plans are the focus of the national security industry. Their collapse would be, well, unacceptable. They are too big to fail.

The good news is we’ve been in similar situations before. The bad news is we’ve yet to really learn our lesson.
