AI is Fabricating Misinformation: A Call for AI Literacy in the Classroom


Students using computers in a classroom. (Getty Images / Tom Werner)

Want to learn something new? My suggestion: Don’t ask ChatGPT. While tech leaders promote generative AI tools as your new go-to source for information, my experience as a university librarian suggests otherwise. Generative AI tools often produce “hallucinations”: fabricated misinformation that convincingly mimics fact.

The concept of AI “hallucinations” came to my attention not long after the launch of ChatGPT. Librarians at universities and colleges throughout the country began to share a puzzling trend: students were spending time fruitlessly searching for books and articles that simply didn’t exist. Only after questioning did students reveal their source: ChatGPT. In the tech world, these fabrications are called “hallucinations,” a term borrowed from psychiatry, where it describes the distorted perceptions produced by a temporarily disrupted sensory system. In this context, the term implies that generative AI has human cognition; it emphatically does not. The fabrications are the outputs of non-human algorithms that can misinform – and too often, do.


In April 2023, a Guardian headline read: “ChatGPT is making up fake Guardian articles.” The story began by describing a surprising incident. A reader had inquired about an article that couldn’t be found. The reporter couldn’t remember having written such an article, but it “certainly sounded like something they would have written.” Colleagues attempted to track it down, only to discover that no such article had been published. As librarians had learned just weeks prior, ChatGPT had fabricated an article citation, but this time the title was so believable that even the reporter couldn’t be sure they hadn’t written it.

Since the release of ChatGPT two years ago, OpenAI’s valuation has soared to $157 billion, which might suggest that hallucinations are no longer a problem. You’d be wrong. Hallucinations are not a “problem” to be fixed but an integral “feature” of how ChatGPT and other generative AI tools work. According to Kristian Hammond, Professor and Director of the Center for Advancing Safety of Machine Intelligence, “hallucinations are not bugs; they’re a fundamental part” of how generative AI works. In an essay describing the hallucination problem, he concludes, “Our focus shouldn’t be on eliminating hallucinations but on providing language models with the most accurate and up-to-date information possible…staying as close to the truth as the data allows.”

Companies like OpenAI have been slow to educate the public about this issue. For example, OpenAI released its first ChatGPT guide for students only in November 2024, almost two years after ChatGPT launched. Rather than explaining hallucinations, the guide states simply, “Since language models can generate inaccurate information, always double-check your facts.” Educating the public about fabricated misinformation and how to discern AI fact from fiction has not been a priority for OpenAI.

Even experts have difficulty deciphering AI’s fabrications. A Stanford University professor recently apologized for using citations generated by ChatGPT in a November 1 court filing supporting a Minnesota law banning political deepfakes. The citation links went to nonexistent journal articles and incorrect authors. The professor’s use of these citations has called his expertise into question and opened the door to excluding his declaration from the court’s consideration. Interestingly, he was paid $600 an hour to write the filing, and he researches “lying and technology.”

Jean-Christophe Bélisle-Pipon, a health sciences professor at Simon Fraser University in British Columbia, warns that AI hallucinations can have “life-threatening consequences” in medicine. He points out, “The standard disclaimers provided by models like ChatGPT, which warn that ‘ChatGPT can make mistakes. Check important info,’ are insufficient safeguards in clinical settings.” He suggests training medical professionals to understand that AI content is not always reliable, even though it may sound convincing.

To be sure, AI doesn’t always hallucinate, and humans also make mistakes. When I explain the issue of AI hallucinations and the need for public education to students and friends, a common response is, “But humans make mistakes, too.” That’s true, but we’re well aware of human fallibility. That same awareness doesn’t extend to content created by AI tools like ChatGPT. Instead, humans have a well-documented tendency to believe automated tools, a phenomenon known as automation bias. The misinformation coming from AI tools is especially dangerous because it is less likely to be questioned. As Emily Bender, a professor of computational linguistics, summarized, “a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time. People will be more likely to trust the output, and likely less able to fact check the 5%.”

Anyone using ChatGPT or other AI tools needs to understand that fabricated misinformation, or “hallucinations,” is a problem. Far from a simple technical glitch, hallucinations pose real dangers, from academic missteps to life-threatening medical errors. Fabricated misinformation is just one of the many challenges of living in an AI-infused world.

We have an ethical responsibility to teach students not only how to use AI but also how to critically evaluate AI inputs, processes, and outputs. Educational institutions have both the opportunity and the obligation to create courses and initiatives that prepare students to confront the ethical challenges posed by AI. That is why we are currently developing a Center for AI Literacy and Ethics at Oregon State University. It is imperative that educational institutions, not corporations, lead the charge in educating our students about the ethical dimensions and critical use of AI.

Laurie Bridges is an instruction librarian and professor at Oregon State University. She recently taught “Generative AI and Society,” an OSU Honors College colloquium focused on AI literacy and ethics. She is a Public Voices Fellow of the Op-Ed Project.
