Readers trust journalists less when they debunk rather than confirm claims

Seeing a lie or error corrected can make some people more skeptical of the fact-checker.

FG Trade/Getty Images

Stein is an associate professor of marketing at California State Polytechnic University, Pomona. Meyersohn is pursuing an Ed.S. in school psychology at California State University, Long Beach.

Pointing out that someone else is wrong is a part of life. And journalists need to do this all the time – their job includes helping sort what’s true from what’s not. But what if people just don’t like hearing corrections?

Our new research, published in the journal Communication Research, suggests that’s the case. In two studies, we found that people generally trust journalists when they confirm claims to be true but are more distrustful when journalists correct false claims.


Theories from linguistics and social science suggest that people intuitively understand a social expectation to avoid being negative. Being disagreeable, as when pointing out someone else’s lie or error, carries a risk of backlash.

We reasoned that corrections are therefore held to a different, more critical standard than confirmations. Attempts to debunk can trigger doubts about journalists’ honesty and motives. In other words, if you’re providing a correction, you’re being a bit of a spoilsport, and that could negatively affect how you are viewed.

How we did our work

Using real articles, we investigated how people feel about journalists who provide “fact checks.”

In our first study, participants read a detailed fact check that either corrected or confirmed some claim related to politics or economics. For instance, one focused on the statement, “Congressional salaries have gone up 231% in the past 30 years,” which is false. We then asked participants about how they were evaluating the fact check and the journalist who wrote it.

Although people were fairly trusting of the journalists in general, more of them expressed suspicion toward journalists providing corrections than toward those providing confirmations. The share of respondents expressing strong distrust roughly doubled, from about 10% for confirmatory fact checks to about 22% for debunking articles.

People also said they needed more information to know whether journalists debunking statements were telling the truth, compared with their assessment of journalists who were confirming claims.

In a second study, we presented marketing claims that ultimately proved to be true or false. For example, some participants read an article about a brand that said its cooking hacks would save time, but the hacks didn’t actually work. Others read an article about a brand providing cooking hacks that turned out to be genuine.

Again, across several types of products, people thought they needed more evidence in order to believe articles pointing out falsehoods, and they reported distrusting correcting journalists more.

Why it matters

Correcting misinformation is notoriously difficult, as researchers and journalists have found out. The United States is also experiencing a decadeslong decline of trust in journalism. Fact-checking tries to help combat misinformation and disinformation, but our research suggests that there are limits to how much it helps. Providing a debunking might make journalists seem like they’re just being negative.

Our second study also explains a slice of pop culture: the backlash against someone who reveals the misdeeds of another. For example, if you read an article pointing out that a band lied about their origin story, you might notice a sub-controversy in the comments from people angry that anyone was called out at all, even accurately. This scenario is exactly what we’d expect if corrections are automatically scrutinized and distrusted by some people.

What’s next

Future work can explore how journalists can be transparent without undermining trust. It’s reasonable to assume that people will trust a journalist more if they explain how they came to a particular conclusion. However, according to our results, that’s not quite the case. Rather, trust is contingent on what the conclusion is.

People in our studies were quite trusting of journalists when they provided confirmations. And, certainly, people are sometimes fine with corrections, as when outlandish misinformation they already disbelieve is debunked. The challenge for journalists may be figuring out how to provide debunkings without seeming like a debunker.

The Research Brief is a short take on interesting academic work.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

