Verifying facts in the age of AI – librarians offer 5 strategies


The internet is awash in fake news articles and misinformation.

franz12/Getty Images

Bicknell-Holmes is a library professor at Boise State University. Watson is a librarian and associate professor at Boise State University. Cordova is a library associate professor at Boise State University.

The phenomenal growth in artificial intelligence tools has made it easy to create a story quickly, complicating a reader’s ability to determine if a news source or article is truthful or reliable. For instance, earlier this year, people were sharing an article about the supposed suicide of Israeli Prime Minister Benjamin Netanyahu’s psychiatrist as if it were real. It ended up being an AI-generated rewrite of a satirical piece from 2010.

The problem is widespread. According to a 2021 Pearson Institute/AP-NORC poll, “Ninety-five percent of Americans believe the spread of misinformation is a problem.” The Pearson Institute researches methods to reduce global conflicts.


As library scientists, we combat the increase in misinformation by teaching a number of ways to validate the accuracy of an article. These methods include the SIFT Method (Stop, Investigate, Find, Trace), the P.R.O.V.E.N. Source Evaluation method (Purpose, Relevance, Objectivity, Verifiability, Expertise and Newness), and lateral reading.

Lateral reading is a strategy for investigating a source by opening a new browser tab to search for and consult other sources. Rather than scrolling down the page, the reader cross-checks the information by researching the source itself.

Here are five techniques based on these methods to help readers separate fact from fiction in the news:

1. Research the author or organization

Search for information beyond the entity’s own website. What are others saying about it? Are there any red flags that lead you to question its credibility? Search the entity’s name in quotation marks in a search engine and look for sources that critically review the organization or group. An organization’s “About” page might tell you who is on its board, its mission and its nonprofit status, but this information is typically written to present the organization in a positive light.

The P.R.O.V.E.N. Source Evaluation method includes a section called “Expertise,” which recommends that readers check the author’s credentials and affiliations. Do the authors have advanced degrees or expertise related to the topic? What else have they written? Who funds the organization and what are their affiliations? Do any of these affiliations reveal a potential conflict of interest? Might their writings be biased in favor of one particular viewpoint?

If any of this information is missing or questionable, you may want to stay away from this author or organization.

2. Use good search techniques

Become familiar with the search techniques available in your favorite search engine, such as searching keywords rather than full sentences and limiting searches by domain name, such as .org, .gov or .edu.

Another good technique is putting two or more words in quotation marks so the search engine finds the words next to each other in that order, such as “Pizzagate conspiracy.” This leads to more relevant results.

In an article published in Nature, a team of researchers wrote that “77% of search queries that used the headline or URL of a false/misleading article as a search query return at least one unreliable news link among the top ten results.”

A more effective search would be to identify the key concepts in the headline in question and search those individual words as keywords. For example, if the headline is “Video Showing Alien at Miami Mall Sparks Claims of Invasion,” readers could search: “Alien invasion” Miami mall.

3. Verify the source

Verify the original sources of the information. Was the information cited, paraphrased or quoted accurately? Can you find the same facts or statements in the original source? Purdue Global, Purdue University’s online university for working adults, recommends verifying citations and references that can also apply to news stories by checking that the sources are “easy to find, easy to access, and not outdated.” It also recommends checking the original studies or data cited for accuracy.

The SIFT Method echoes this in its recommendation to “trace claims, quotes, and media to the original context.” You cannot assume that re-reporting is always accurate.

4. Use fact-checking websites

Search fact-checking websites such as InfluenceWatch.org, Poynter.org, Politifact.com or Snopes.com to verify claims. What conclusions did the fact-checkers reach about the accuracy of the claims?

A Harvard Kennedy School Misinformation Review article found that the “high level of agreement” between fact-checking sites “enhances the credibility of fact checkers in the eyes of the public.”

5. Pause and reflect

Pause and reflect to see if what you have read has triggered a strong emotional response. An article in the journal Cognitive Research indicates that news items that cause strong emotions increase our tendency “to believe fake news stories.”

One online study found that the simple act of “pausing to think” and reflect on whether a headline is true or false may prevent a person from sharing false information. While the study indicated that pausing only decreases intentions to share by a small amount – 0.32 points on a 6-point scale – the authors argue that this could nonetheless cut down on the spread of fake news on social media.

Knowing how to identify and check for misinformation is an important part of being a responsible digital citizen. This skill is all the more important as AI becomes more prevalent.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

