Verifying facts in the age of AI – librarians offer 5 strategies

The internet is awash in fake news articles and misinformation. franz12/Getty Images

Bicknell-Holmes is a library professor at Boise State University. Watson is a librarian and associate professor at Boise State University. Cordova is a library associate professor at Boise State University.

The phenomenal growth in artificial intelligence tools has made it easy to create a story quickly, complicating a reader’s ability to determine if a news source or article is truthful or reliable. For instance, earlier this year, people were sharing an article about the supposed suicide of Israeli Prime Minister Benjamin Netanyahu’s psychiatrist as if it were real. It ended up being an AI-generated rewrite of a satirical piece from 2010.

The problem is widespread. According to a 2021 Pearson Institute/AP-NORC poll, “Ninety-five percent of Americans believe the spread of misinformation is a problem.” The Pearson Institute researches methods to reduce global conflicts.


As library scientists, we combat the increase in misinformation by teaching a number of ways to validate the accuracy of an article. These methods include the SIFT Method (Stop, Investigate the source, Find better coverage, Trace claims to the original context), the P.R.O.V.E.N. Source Evaluation method (Purpose, Relevance, Objectivity, Verifiability, Expertise and Newness), and lateral reading.

Lateral reading is a strategy for investigating a source by opening a new browser tab to search for and consult other sources. Rather than scrolling down the page, the reader cross-checks the information by researching the source itself.

Here are five techniques, based on these methods, to help readers separate fact from fiction in the news:

1. Research the author or organization

Search for information beyond the entity’s own website. What are others saying about it? Are there any red flags that lead you to question its credibility? Search the entity’s name in quotation marks in your browser and look for sources that critically review the organization or group. An organization’s “About” page might tell you who is on its board, what its mission is and whether it has nonprofit status, but this information is typically written to present the organization in a positive light.

The P.R.O.V.E.N. Source Evaluation method includes a section called “Expertise,” which recommends that readers check the author’s credentials and affiliations. Do the authors have advanced degrees or expertise related to the topic? What else have they written? Who funds the organization and what are their affiliations? Do any of these affiliations reveal a potential conflict of interest? Might their writings be biased in favor of one particular viewpoint?

If any of this information is missing or questionable, you may want to stay away from this author or organization.

2. Use good search techniques

Become familiar with the search techniques available in your favorite search engine, such as searching for keywords rather than full sentences and limiting searches by domain name, such as .org, .gov or .edu.
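
For example, in most major search engines, adding site:.gov to a query, such as election security site:.gov, limits the results to government websites; the keywords here are only an illustration.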

Another good technique is putting two or more words in quotation marks so the search engine finds the words next to each other in that order, such as “Pizzagate conspiracy.” This leads to more relevant results.

In an article published in Nature, a team of researchers wrote that “77% of search queries that used the headline or URL of a false/misleading article as a search query return at least one unreliable news link among the top ten results.”

A more effective search would be to identify the key concepts in the headline in question and search those individual words as keywords. For example, if the headline is “Video Showing Alien at Miami Mall Sparks Claims of Invasion,” readers could search: “Alien invasion” Miami mall.

3. Verify the source

Verify the original sources of the information. Was the information cited, paraphrased or quoted accurately? Can you find the same facts or statements in the original source? Purdue Global, Purdue University’s online university for working adults, recommends verifying citations and references by checking that the sources are “easy to find, easy to access, and not outdated,” advice that also applies to news stories. It also recommends checking the original studies or data cited for accuracy.

The SIFT Method echoes this in its recommendation to “trace claims, quotes, and media to the original context.” You cannot assume that re-reporting is always accurate.

4. Use fact-checking websites

Search fact-checking websites such as InfluenceWatch.org, Poynter.org, Politifact.com or Snopes.com to verify claims. What conclusions did the fact-checkers reach about the accuracy of the claims?

A Harvard Kennedy School Misinformation Review article found that the “high level of agreement” between fact-checking sites “enhances the credibility of fact checkers in the eyes of the public.”

5. Pause and reflect

Pause and reflect to see if what you have read has triggered a strong emotional response. An article in the journal Cognitive Research indicates that news items that cause strong emotions increase our tendency “to believe fake news stories.”

One online study found that the simple act of “pausing to think” and reflect on whether a headline is true or false may prevent a person from sharing false information. While the study indicated that pausing only decreases intentions to share by a small amount – 0.32 points on a 6-point scale – the authors argue that this could nonetheless cut down on the spread of fake news on social media.

Knowing how to identify and check for misinformation is an important part of being a responsible digital citizen. This skill is all the more important as AI becomes more prevalent.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
