Verifying facts in the age of AI – librarians offer 5 strategies

The internet is awash in fake news articles and misinformation. (franz12/Getty Images)

Bicknell-Holmes is a library professor at Boise State University. Watson is a librarian and associate professor at Boise State University. Cordova is a library associate professor at Boise State University.

The phenomenal growth of artificial intelligence tools has made it easy to create a story quickly, complicating a reader’s ability to determine whether a news source or article is truthful or reliable. For instance, earlier this year people shared an article about the supposed suicide of Israeli Prime Minister Benjamin Netanyahu’s psychiatrist as if it were real. It turned out to be an AI-generated rewrite of a satirical piece from 2010.

The problem is widespread. According to a 2021 Pearson Institute/AP-NORC poll, “Ninety-five percent of Americans believe the spread of misinformation is a problem.” The Pearson Institute researches methods to reduce global conflicts.


As library scientists, we combat the increase in misinformation by teaching a number of ways to validate the accuracy of an article. These methods include the SIFT Method (Stop, Investigate, Find, Trace), the P.R.O.V.E.N. Source Evaluation method (Purpose, Relevance, Objectivity, Verifiability, Expertise and Newness), and lateral reading.

Lateral reading is a strategy for investigating a source by opening a new browser tab to search for and consult other sources. Rather than scrolling down the page, the reader cross-checks the information by researching the source itself.


Here are five techniques based on these methods to help readers separate fact from fiction in the news:

1. Research the author or organization

Search for information beyond the entity’s own website. What are others saying about it? Are there any red flags that lead you to question its credibility? Search the entity’s name in quotation marks in your browser and look for sources that critically review the organization or group. An organization’s “About” page might tell you who is on its board, its mission and its nonprofit status, but this information is typically written to present the organization in a positive light.

The P.R.O.V.E.N. Source Evaluation method includes a section called “Expertise,” which recommends that readers check the author’s credentials and affiliations. Do the authors have advanced degrees or expertise related to the topic? What else have they written? Who funds the organization and what are their affiliations? Do any of these affiliations reveal a potential conflict of interest? Might their writings be biased in favor of one particular viewpoint?

If any of this information is missing or questionable, you may want to stay away from this author or organization.

2. Use good search techniques

Become familiar with the search techniques available in your favorite search engine, such as searching keywords rather than full sentences and limiting searches by domain names, such as .org, .gov, or .edu.

Another good technique is putting two or more words in quotation marks so the search engine finds the words next to each other in that order, such as “Pizzagate conspiracy.” This leads to more relevant results.

In an article published in Nature, a team of researchers wrote that “77% of search queries that used the headline or URL of a false/misleading article as a search query return at least one unreliable news link among the top ten results.”

A more effective search would be to identify the key concepts in the headline in question and search those individual words as keywords. For example, if the headline is “Video Showing Alien at Miami Mall Sparks Claims of Invasion,” readers could search: “Alien invasion” Miami mall.
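The quoting and keyword techniques above are intended for hand-typed searches, but the same logic can be expressed programmatically. The short Python sketch below is purely illustrative and is not part of the librarians’ methods; the build_query helper is a hypothetical name, and it simply combines quoted phrases, loose keywords and an optional site: domain filter into a single query string.

```python
# Illustrative sketch only: assembling a focused search query using the
# techniques described above (quoted phrases, loose keywords, domain limits).
# The build_query helper is hypothetical, not from the article or any library.

def build_query(phrases, keywords=(), domain=None):
    """Combine exact phrases, keywords and an optional domain filter."""
    parts = [f'"{p}"' for p in phrases]   # quotes keep the words together, in order
    parts.extend(keywords)                # loose keywords for the other key concepts
    if domain:
        parts.append(f"site:{domain}")    # e.g. limit results to .gov pages
    return " ".join(parts)

# Rather than pasting a full headline, search its key concepts:
print(build_query(["Alien invasion"], ["Miami", "mall"]))
# "Alien invasion" Miami mall

print(build_query(["Pizzagate conspiracy"], domain=".gov"))
# "Pizzagate conspiracy" site:.gov
```

Pasting either output into a search engine follows the advice above: the exact phrase stays intact, the remaining keywords stay loose, and the optional site: filter narrows results to a particular domain.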

3. Verify the source

Verify the original sources of the information. Was the information cited, paraphrased or quoted accurately? Can you find the same facts or statements in the original source? Purdue Global, Purdue University’s online university for working adults, recommends verifying citations and references by checking that the sources are “easy to find, easy to access, and not outdated,” advice that applies to news stories as well. It also recommends checking the original studies or data cited for accuracy.

The SIFT Method echoes this in its recommendation to “trace claims, quotes, and media to the original context.” You cannot assume that re-reporting is always accurate.

4. Use fact-checking websites

Search fact-checking websites such as InfluenceWatch.org, Poynter.org, Politifact.com or Snopes.com to verify claims. What conclusions did the fact-checkers reach about the accuracy of the claims?

A Harvard Kennedy School Misinformation Review article found that the “high level of agreement” between fact-checking sites “enhances the credibility of fact checkers in the eyes of the public.”

5. Pause and reflect

Pause and reflect to see if what you have read has triggered a strong emotional response. An article in the journal Cognitive Research indicates that news items that cause strong emotions increase our tendency “to believe fake news stories.”

One online study found that the simple act of “pausing to think” and reflect on whether a headline is true or false may prevent a person from sharing false information. While the study indicated that pausing only decreases intentions to share by a small amount – 0.32 points on a 6-point scale – the authors argue that this could nonetheless cut down on the spread of fake news on social media.

Knowing how to identify and check for misinformation is an essential part of being a responsible digital citizen, and this skill becomes all the more important as AI grows more prevalent.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
