AI is Fabricating Misinformation: A Call for AI Literacy in the Classroom


Want to learn something new? My suggestion: don’t ask ChatGPT. While tech leaders promote generative AI tools as your new go-to source for information, my experience as a university librarian suggests otherwise. Generative AI tools often produce “hallucinations” – fabricated misinformation that convincingly mimics fact.

The concept of AI “hallucinations” came to my attention not long after the launch of ChatGPT. Librarians at universities and colleges throughout the country began to share a puzzling trend: students were spending time fruitlessly searching for books and articles that simply didn’t exist. Only after questioning did students reveal their source: ChatGPT. In the tech world, these fabrications are called “hallucinations,” a term borrowed from psychiatry, where it describes perceiving things that aren’t actually there. Applied to AI, the term implies that generative tools have human cognition – they emphatically do not. The fabrications are the output of non-human algorithms that can misinform, and too often do.
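When a citation looks suspect, the quickest check is whether the work exists in an authoritative registry at all. As a rough illustration – a minimal sketch, not any library’s official workflow – the following Python snippet asks Crossref’s public REST API whether a DOI resolves to a registered record; the doi_exists helper and the example DOI are hypothetical.

```python
import json
import urllib.error
import urllib.request

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a registered record for this DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            message = json.load(resp)["message"]
            title = (message.get("title") or ["(untitled)"])[0]
            print("Found:", title)
            return True
    except urllib.error.HTTPError:
        # Crossref answers 404 for DOIs it has never registered.
        return False

# A fabricated-looking DOI should fail the check.
print(doi_exists("10.1000/example-doi-from-a-chatbot"))
```

A lookup like this catches only invented identifiers; a hallucinated citation that reuses a real DOI with the wrong title or authors still requires comparing the returned metadata by hand.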


In April 2023, a news headline read: “ChatGPT is making up fake Guardian articles.” The story described a surprising incident: a reader had inquired about an article that couldn’t be found. The reporter couldn’t remember writing such an article, but it “certainly sounded like something they would have written.” Colleagues attempted to track it down, only to discover that no such article had been published. As librarians had learned just weeks earlier, ChatGPT had fabricated a citation – but this time the title was so believable that even the reporter couldn’t be sure they hadn’t written it.

Since the release of ChatGPT two years ago, OpenAI’s valuation has soared to $157 billion. You might assume such success means hallucinations are no longer a problem; you would be wrong. Hallucinations are not a “problem” to be patched but an integral feature of how ChatGPT and other generative AI tools work. According to Kristian Hammond, professor and director of the Center for Advancing Safety of Machine Intelligence, “hallucinations are not bugs; they’re a fundamental part” of how generative AI works. In an essay describing the hallucination problem, he concludes, “Our focus shouldn’t be on eliminating hallucinations but on providing language models with the most accurate and up-to-date information possible…staying as close to the truth as the data allows.”
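Hammond’s prescription – keeping a model close to trustworthy data – is the idea behind retrieval-grounded systems. The sketch below is a minimal illustration of that idea, not any vendor’s implementation: it picks the most relevant passages from a small corpus and builds a prompt confining the model to those sources. The corpus and the retrieve and build_grounded_prompt helpers are all hypothetical.

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the query (toy retrieval)."""
    words = set(query.lower().split())
    return sorted(corpus, key=lambda p: -len(words & set(p.lower().split())))[:k]

def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Build a prompt that confines the model to the supplied sources."""
    sources = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the numbered sources below, citing them by number. "
        "If they do not contain the answer, reply 'I don't know.'\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

corpus = [
    "Interlibrary loan requests typically take 3 to 5 business days.",
    "Course reserves may be checked out for two hours at the circulation desk.",
]
print(build_grounded_prompt(
    "How long does an interlibrary loan take?",
    retrieve("interlibrary loan how long", corpus),
))
```

Grounding narrows the room for fabrication but does not eliminate it – which is precisely Hammond’s point about staying “as close to the truth as the data allows.”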

Companies like OpenAI have been slow to educate the public about this issue. OpenAI released its first ChatGPT guide for students only in November 2024, almost two years after the chatbot launched. Rather than explaining hallucinations, the guide states simply, “Since language models can generate inaccurate information, always double-check your facts.” Educating the public about fabricated misinformation – and how to discern AI fact from fiction – has not been a priority for OpenAI.

Even experts have difficulty spotting AI’s fabrications. A Stanford University professor recently apologized for using citations generated by ChatGPT in a November 1 court filing supporting a Minnesota law banning political deepfakes. The citations pointed to nonexistent journal articles and incorrect authors. His use of them has called his expertise into question and opened the door to excluding his declaration from the court’s consideration – notable, given that he was paid $600 an hour to write the filing and that his research focuses on “lying and technology.”

Jean-Christophe Bélisle-Pipon, a health sciences professor at Simon Fraser University in British Columbia, warns that AI hallucinations can have “life-threatening consequences” in medicine. He points out, “The standard disclaimers provided by models like ChatGPT, which warn that ‘ChatGPT can make mistakes. Check important info,’ are insufficient safeguards in clinical settings.” He suggests training medical professionals to understand that AI content is not always reliable, even though it may sound convincing.

To be sure, AI doesn’t always hallucinate, and humans also make mistakes. When I explain the issue of AI hallucinations and the need for public education to students and friends, a common response is, “But humans make mistakes, too.” That’s true – but we are well aware of human fallibility. That same awareness doesn’t extend to content created by AI tools like ChatGPT. Instead, humans have a well-documented tendency to believe automated tools, a phenomenon known as automation bias. Misinformation coming from AI tools is especially dangerous because it is less likely to be questioned. As Emily Bender, a professor of computational linguistics, summarized, “a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time. People will be more likely to trust the output, and likely less able to fact check the 5%.”

Anyone using ChatGPT or other AI tools needs to understand that fabricated misinformation – “hallucinations” – is a problem. Far from being a simple technical glitch, hallucinations pose real dangers, from academic missteps to life-threatening medical errors. Fabricated misinformation is just one of the many challenges of living in an AI-infused world.

We have an ethical responsibility to teach students not only how to use AI but also how to critically evaluate AI inputs, processes, and outputs. Educational institutions have both the opportunity and the obligation to create courses and initiatives that prepare students to confront the ethical challenges posed by AI; that is why we are developing a Center for AI Literacy and Ethics at Oregon State University. It is imperative that educational institutions, not corporations, lead the charge in educating our students about the ethical dimensions and critical use of AI.

Laurie Bridges is an instruction librarian and professor at Oregon State University. She recently taught “Generative AI and Society,” an OSU Honors College colloquium focused on AI literacy and ethics. She is a Public Voices Fellow of The OpEd Project.
