AI is Fabricating Misinformation: A Call for AI Literacy in the Classroom

Students using computers in a classroom.

Getty Images / Tom Werner

Want to learn something new? My suggestion: Don’t ask ChatGPT. While tech leaders promote generative AI tools as your new go-to source for information, my experience as a university librarian suggests otherwise. Generative AI tools often produce “hallucinations”: fabricated misinformation that convincingly mimics fact.

The concept of AI “hallucinations” came to my attention not long after the launch of ChatGPT. Librarians at universities and colleges across the country began to share a puzzling trend: students were spending time fruitlessly searching for books and articles that simply didn’t exist. Only after questioning did students reveal their source was ChatGPT. In the tech world, these fabrications are called “hallucinations,” a term borrowed from psychiatry, where it describes perceptions that have become temporarily distorted. In this context, the term implies that generative AI has human cognition, which it emphatically does not. The fabrications are outputs of non-human algorithms that can misinform, and too often do.


In April 2023, a news headline read: “ChatGPT is making up fake Guardian articles.” The story began with a surprising incident: a reader had inquired about an article that couldn’t be found. The reporter couldn’t remember having written such an article, but it “certainly sounded like something they would have written.” Colleagues attempted to track it down, only to discover that no such article had been published. As librarians had learned just weeks earlier, ChatGPT had fabricated an article citation, but this time the title was so believable that even the reporter couldn’t be sure they hadn’t written it.

Since the release of ChatGPT two years ago, OpenAI’s valuation has soared to $157 billion, which might suggest that hallucinations are no longer a problem. You’d be wrong. Hallucinations are not a “problem” but an integral “feature” of how ChatGPT and other generative AI tools work. According to Kristian Hammond, professor and director of the Center for Advancing Safety of Machine Intelligence at Northwestern University, “hallucinations are not bugs; they’re a fundamental part” of how generative AI works. In an essay describing the hallucination problem, he concludes, “Our focus shouldn’t be on eliminating hallucinations but on providing language models with the most accurate and up-to-date information possible…staying as close to the truth as the data allows.”

Companies like OpenAI have been slow to educate the public about this issue. For example, OpenAI released its first ChatGPT guide for students only in November 2024, almost 24 months after ChatGPT launched. Rather than explaining hallucinations, the guide states simply, “Since language models can generate inaccurate information, always double-check your facts.” Educating the public about fabricated misinformation and how to discern AI fact from fiction has not been a priority for OpenAI.

Even experts have difficulty deciphering AI’s fabrications. A Stanford University professor recently apologized for using citations generated by ChatGPT in a November 1 court filing supporting a Minnesota law banning political deepfakes. The citation links went to nonexistent journal articles and incorrect authors. The professor’s use of these citations has called his expertise into question and opened the door to excluding his declaration from the court’s consideration. Interestingly, he was paid $600 an hour to write the filing, and he researches “lying and technology.”

Jean-Christophe Bélisle-Pipon, a health sciences professor at Simon Fraser University in British Columbia, warns that AI hallucinations can have “life-threatening consequences” in medicine. He points out, “The standard disclaimers provided by models like ChatGPT, which warn that ‘ChatGPT can make mistakes. Check important info,’ are insufficient safeguards in clinical settings.” He suggests training medical professionals to understand that AI content is not always reliable, even though it may sound convincing.

To be sure, AI doesn’t always hallucinate, and humans also make mistakes. When I explain the issue of AI hallucinations and the need for public education to students and friends, a common response is, “But humans make mistakes, too.” That’s true, but we are well aware of human fallibility. That same awareness doesn’t extend to content created by AI tools like ChatGPT. Instead, humans have a well-documented tendency to believe automated tools, a phenomenon known as automation bias. The misinformation coming from AI tools is especially dangerous because it is less likely to be questioned. As Emily Bender, a professor of computational linguistics, summarized, “a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time. People will be more likely to trust the output, and likely less able to fact check the 5%.”

Anyone using ChatGPT or other AI tools needs to understand that fabricated misinformation, or “hallucinations,” is a problem. Far from a simple technical glitch, hallucinations pose real dangers, from academic missteps to life-threatening medical errors. Fabricated misinformation is just one of the many challenges of living in an AI-infused world.

We have an ethical responsibility to teach students not only how to use AI but also how to critically evaluate AI inputs, processes, and outputs. Educational institutions have the opportunity, and the obligation, to create courses and initiatives that prepare students to confront the ethical challenges posed by AI; that is why we are currently developing a Center for AI Literacy and Ethics at Oregon State University. It is imperative that educational institutions, not corporations, lead the charge in educating our students about the ethical dimensions and critical use of AI.

Laurie Bridges is an instruction librarian and professor at Oregon State University. She recently taught “Generative AI and Society,” an OSU Honors College colloquium focused on AI literacy and ethics. Laurie Bridges is a Public Voices Fellow of the Op-Ed Project.
