Want to learn something new? My suggestion: Don’t ask ChatGPT. While tech leaders promote generative AI tools as your new go-to source for information, my experience as a university librarian suggests otherwise. Generative AI tools often produce “hallucinations”: fabricated misinformation that convincingly mimics fact.
The concept of AI “hallucinations” came to my attention not long after the launch of ChatGPT. Librarians at universities and colleges throughout the country began to share a puzzling trend: students were spending time fruitlessly searching for books and articles that simply didn’t exist. Only when questioned did the students reveal their source: ChatGPT. In the tech world, these fabrications are called “hallucinations,” a term borrowed from psychiatry, where it describes perceiving things that aren’t actually there. In this context, the term implies that generative AI has human cognition, which it emphatically does not. The fabrications are the outputs of non-human algorithms that can misinform, and too often do.
In April 2023, a Guardian headline read: “ChatGPT is making up fake Guardian articles.” The story began with a surprising incident: a reader had inquired about an article that couldn’t be found. The reporter couldn’t remember having written such an article, but it “certainly sounded like something they would have written.” Colleagues attempted to track it down, only to discover that no such article had been published. As librarians had learned just weeks earlier, ChatGPT had fabricated the citation, but this time the title was so believable that even the reporter couldn’t be sure they hadn’t written it.
Since the release of ChatGPT two years ago, OpenAI’s valuation has soared to $157 billion, which might suggest that hallucinations are no longer a problem. They are. Hallucinations are not a “problem” to be patched but an integral feature of how ChatGPT and other generative AI tools work. According to Kristian Hammond, a professor at Northwestern University and Director of the Center for Advancing Safety of Machine Intelligence, “hallucinations are not bugs; they’re a fundamental part” of how generative AI works. In an essay describing the hallucination problem, he concludes, “Our focus shouldn’t be on eliminating hallucinations but on providing language models with the most accurate and up-to-date information possible…staying as close to the truth as the data allows.”
Companies like OpenAI have been slow to educate the public about this issue. OpenAI released its first ChatGPT guide for students only in November 2024, almost two years after launch. Rather than explaining hallucinations, the guide states simply, “Since language models can generate inaccurate information, always double-check your facts.” Educating the public about fabricated misinformation, and how to tell AI fact from AI fiction, has not been a priority for the company.
Even experts have difficulty spotting AI’s fabrications. A Stanford University professor recently apologized for using citations generated by ChatGPT in a November 1 court filing supporting a Minnesota law banning political deepfakes. The citation links pointed to nonexistent journal articles and incorrect authors. The professor’s use of these citations has called his expertise into question and opened the door to excluding his declaration from the court’s consideration. Notably, he was paid $600 an hour to write the filing, and his research specialty is “lying and technology.”
Jean-Christophe Bélisle-Pipon, a health sciences professor at Simon Fraser University in British Columbia, warns that AI hallucinations can have “life-threatening consequences” in medicine. He points out, “The standard disclaimers provided by models like ChatGPT, which warn that ‘ChatGPT can make mistakes. Check important info,’ are insufficient safeguards in clinical settings.” He suggests training medical professionals to understand that AI content is not always reliable, even though it may sound convincing.
To be sure, AI doesn’t always hallucinate, and humans make mistakes too. When I explain AI hallucinations and the need for public education to students and friends, a common response is, “But humans make mistakes, too.” That’s true, but we are well aware of human fallibility. That same awareness doesn’t extend to content created by AI tools like ChatGPT. Instead, humans have a well-documented tendency to believe automated tools, a phenomenon known as automation bias. The misinformation coming from AI tools is especially dangerous because it is less likely to be questioned. As Emily Bender, a professor of computational linguistics, put it, “a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time. People will be more likely to trust the output, and likely less able to fact check the 5%.”
Anyone using ChatGPT or other AI tools needs to understand that fabricated misinformation, “hallucination,” is a problem. Far more than a technical glitch, hallucinations pose real dangers, from academic missteps to life-threatening medical errors. Fabricated misinformation is just one of the many challenges of living in an AI-infused world.
We have an ethical responsibility to teach students not only how to use AI but also how to critically evaluate AI inputs, processes, and outputs. Educational institutions have both the opportunity and the obligation to create courses and initiatives that prepare students to confront the ethical challenges posed by AI; that is why we are currently developing a Center for AI Literacy and Ethics at Oregon State University. It is imperative that educational institutions, not corporations, lead the charge in educating our students about the ethical dimensions and critical use of AI.
Laurie Bridges is an instruction librarian and professor at Oregon State University. She recently taught “Generative AI and Society,” an OSU Honors College colloquium focused on AI literacy and ethics, and is a Public Voices Fellow of The OpEd Project.

A deep look at how "All in the Family" remains a striking mirror of American politics, class tensions, and cultural manipulation—proving its relevance decades later.
All in This American Family
Few shows have aged as eerily well as All in the Family.
It’s not just that it’s still funny, with the feel not of a sitcom but of unpretentious, working-class theatre. It’s that, decades later, it remains one of the clearest windows into the American psyche. Archie Bunker’s living room has been, as it were, a small stage on which the country has been working through the same contradictions, anxieties, and unresolved traumas that still shape our politics today. The manipulation of the working class, the pitting of neighbor against neighbor, the scapegoating of the vulnerable, the quiet cruelties baked into everyday life: all of it is still with us. We like to reassure ourselves that we’ve progressed since the early 1970s, but watching the show now forces an unsettling recognition: the structural forces that shaped Archie’s world have barely budged. The same tactics of distraction and division that elites deployed then are still deployed now, only more efficiently and more sleekly.
Archie himself is the perfect vessel for this continuity. He is bigoted, blustery, and reactive, but he is also wounded, anxious, and constantly misled by forces above and beyond him. Norman Lear created Archie not as a monster to be hated (Lear’s genius was to make Archie lovable despite his loathsome views), but as a man trapped by the political economy of his era: a union worker who feels his country slipping away, yet cannot see the hands that are actually moving it. His anger leaks sideways, onto immigrants, women, “hippies,” and anyone with less power than he has. The real villains remain safely and comfortably offscreen: the wealthy, the connected, the manufacturers of grievance. That is part of the show’s key insight: it reveals how elites thrive by making sure working people turn their frustrations against each other rather than upward.
Edith, often dismissed as naive or scatterbrained, functions as the show’s quiet moral center. Her compassion exposes the emotional void in Archie’s worldview and, in doing so, highlights the costs of the divisions that powerful interests cultivate. Meanwhile, Mike the “Meathead” represents a generation trying to break free from those divisions but often trapped in its own loud self-righteousness. Their clashes are not just family arguments but collisions between competing visions of America’s future. And those visions, tellingly, have yet to resolve themselves.
The political context of the show only sharpens its relevance. Premiering in 1971, All in the Family emerged during the Nixon years, when the “Silent Majority” strategy weaponized racial resentment, cultural panic, and working-class anxiety to cement power. Archie was a fictional embodiment of the very demographic Nixon sought to mobilize and manipulate. The show exposed, often bluntly, how economic insecurity was being rerouted into cultural hostility. Watch the show today and it’s impossible to miss how closely that logic mirrors the present, from right-wing media ecosystems to politicians who openly rely on stoking grievances rather than addressing root causes.
What makes the show unsettling today is that its satire feels less like a relic and more like a mirror. The demagogic impulses it spotlighted have simply found new platforms. The working-class anger it dramatized has been harvested by political operatives who, like their 1970s predecessors, depend on division to maintain power. The very cultural debates that fueled Archie’s tirades, about immigration, gender roles, race, and national identity, are still being used as tools to distract from wealth concentration and political manipulation.
If anything, the divisions are sharper now because the mechanisms of manipulation are more sophisticated, for much has been learned by The Machine. The same emotional raw material Lear mined for comedy is now algorithmically optimized for outrage. The same social fractures that played out around Archie’s kitchen table now play out on a scale he couldn’t have imagined. But the underlying dynamics haven’t changed at all.
That is why All in the Family feels so contemporary. The country Lear dissected never healed or meaningfully evolved: It simply changed wardrobe. The tensions, prejudices, and insecurities remain, not because individuals failed to grow but because the economic and political forces that thrive on division have only become more entrenched. Until we confront the political economy that kept Archie and Mike locked in an endless loop of bickering, the show will remain painfully relevant for another fifty years.
Ahmed Bouzid is the co-founder of The True Representation Movement.