AI is Fabricating Misinformation: A Call for AI Literacy in the Classroom


Students using computers in a classroom.

Getty Images / Tom Werner

Want to learn something new? My suggestion: don’t ask ChatGPT. While tech leaders promote generative AI tools as your new go-to source for information, my experience as a university librarian suggests otherwise. Generative AI tools often produce “hallucinations”: fabricated misinformation that convincingly mimics fact.

The concept of AI “hallucinations” came to my attention not long after the launch of ChatGPT. Librarians at universities and colleges across the country began to share a puzzling trend: students were fruitlessly searching for books and articles that simply didn’t exist. Only when questioned did students reveal their source: ChatGPT. In the tech world, these fabrications are called “hallucinations,” a term borrowed from psychiatry, where it describes perceptions that arise without any external stimulus. In this context, the term implies that generative AI has human cognition, which it emphatically does not. The fabrications are outputs of non-human algorithms that can misinform, and too often do.


In April 2023, a news headline read: “ChatGPT is making up fake Guardian articles.” The story began with a surprising incident: a reader had inquired about an article that couldn’t be found. The reporter couldn’t remember writing such an article, but it “certainly sounded like something they would have written.” Colleagues attempted to track it down, only to discover that no such article had ever been published. As librarians had learned just weeks earlier, ChatGPT had fabricated an article citation, but this time the title was so believable that even the reporter couldn’t be sure they hadn’t written it.

Since the release of ChatGPT two years ago, OpenAI’s valuation has soared to $157 billion, which might suggest that hallucinations are no longer a problem. But that assumption would be wrong. Hallucinations are not a “problem” to be patched; they are an integral feature of how ChatGPT and other generative AI tools work. According to Kristian Hammond, professor and director of the Center for Advancing Safety of Machine Intelligence, “hallucinations are not bugs; they’re a fundamental part” of how generative AI works. In an essay on the hallucination problem, he concludes, “Our focus shouldn’t be on eliminating hallucinations but on providing language models with the most accurate and up-to-date information possible…staying as close to the truth as the data allows.”

Companies like OpenAI have been slow to educate the public about this issue. OpenAI did not release its first ChatGPT guide for students until November 2024, almost two years after ChatGPT launched. Rather than explaining hallucinations, the guide states simply, “Since language models can generate inaccurate information, always double-check your facts.” Educating the public about fabricated misinformation, and about how to tell AI fact from fiction, has not been a priority for OpenAI.

Even experts have difficulty detecting AI’s fabrications. A Stanford University professor recently apologized for using ChatGPT-generated citations in a November 1 court filing supporting a Minnesota law banning political deepfakes. The citation links led to nonexistent journal articles and incorrect authors. The professor’s use of these citations has called his expertise into question and opened the door to excluding his declaration from the court’s consideration. Notably, he was paid $600 an hour to write the filing, and his research focuses on “lying and technology.”

Jean-Christophe Bélisle-Pipon, a health sciences professor at Simon Fraser University in British Columbia, warns that AI hallucinations can have “life-threatening consequences” in medicine. He points out, “The standard disclaimers provided by models like ChatGPT, which warn that ‘ChatGPT can make mistakes. Check important info,’ are insufficient safeguards in clinical settings.” He suggests training medical professionals to understand that AI content is not always reliable, even though it may sound convincing.

To be sure, AI doesn’t always hallucinate, and humans make mistakes too. When I explain the issue of AI hallucinations and the need for public education to students and friends, a common response is, “But humans make mistakes, too.” That’s true, but we are well aware of human fallibility. That same awareness doesn’t extend to content created by AI tools like ChatGPT. Instead, humans have a well-documented tendency to trust automated tools, a phenomenon known as automation bias. Misinformation from AI tools is especially dangerous because it is less likely to be questioned. As Emily Bender, a professor of computational linguistics, put it, “a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time. People will be more likely to trust the output, and likely less able to fact check the 5%.”
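Bender’s point can be made concrete with a toy calculation. The accuracy and fact-checking rates below are illustrative assumptions, not figures from this article: if users rarely double-check a mostly accurate system, more errors can slip through unchallenged than with a less accurate system that users distrust and verify.

```python
def unchecked_errors(accuracy: float, fact_check_rate: float, queries: int = 1000) -> float:
    """Expected number of wrong answers that users accept without checking.

    accuracy: fraction of answers that are correct.
    fact_check_rate: fraction of answers the user independently verifies
    (verified errors are assumed to be caught).
    """
    error_rate = 1.0 - accuracy
    return error_rate * (1.0 - fact_check_rate) * queries

# Hypothetical scenario: a 50%-accurate system that users distrust
# (checking 95% of its answers) versus a 95%-accurate system that
# users trust and almost never check (2% of answers).
distrusted = unchecked_errors(accuracy=0.50, fact_check_rate=0.95)  # about 25 per 1000
trusted = unchecked_errors(accuracy=0.95, fact_check_rate=0.02)     # about 49 per 1000
```

Under these assumed rates, the more accurate system produces roughly twice as many unchecked errors, which is the mechanism of automation bias the quote describes.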

Anyone using ChatGPT or other AI tools needs to understand that fabricated misinformation, or “hallucinations,” is a problem. Far from a simple technical glitch, hallucinations pose real dangers, from academic missteps to life-threatening medical errors. Fabricated misinformation is just one of the many challenges of living in an AI-infused world.

We have an ethical responsibility to teach students not only how to use AI but also how to critically evaluate AI inputs, processes, and outputs. Educational institutions have both the opportunity and the obligation to create courses and initiatives that prepare students to confront the ethical challenges posed by AI; that is why we are currently developing a Center for AI Literacy and Ethics at Oregon State University. It is imperative that educational institutions, not corporations, lead the charge in educating our students about the ethical dimensions and critical use of AI.

Laurie Bridges is an instruction librarian and professor at Oregon State University. She recently taught “Generative AI and Society,” an OSU Honors College colloquium focused on AI literacy and ethics. She is a Public Voices Fellow of the Op-Ed Project.

