Readers trust journalists less when they debunk rather than confirm claims

Seeing a lie or error corrected can make some people more skeptical of the fact-checker.

FG Trade/Getty Images

Stein is an associate professor of marketing at California State Polytechnic University, Pomona. Meyersohn is pursuing an Ed.S. in school psychology at California State University, Long Beach.

Pointing out that someone else is wrong is a part of life. And journalists need to do this all the time – their job includes helping sort what’s true from what’s not. But what if people just don’t like hearing corrections?

Our new research, published in the journal Communication Research, suggests that’s the case. In two studies, we found that people generally trust journalists when they confirm claims to be true but are more distrusting when journalists correct false claims.


Some linguistics and social science theories suggest that people intuitively understand the social expectation to avoid being negative. Being disagreeable, such as by pointing out someone else’s lie or error, carries a risk of backlash.

We reasoned that corrections would therefore be held to a different, more critical standard than confirmations. Attempts to debunk can trigger doubts about journalists’ honesty and motives. In other words, if you’re providing a correction, you’re being a bit of a spoilsport, and that could negatively affect how you are viewed.

How we did our work

Using real articles, we investigated how people feel about journalists who provide “fact checks.”

In our first study, participants read a detailed fact check that either corrected or confirmed some claim related to politics or economics. For instance, one focused on the statement, “Congressional salaries have gone up 231% in the past 30 years,” which is false. We then asked participants how they evaluated the fact check and the journalist who wrote it.

Although people were fairly trusting of the journalists in general, more people expressed suspicion toward journalists providing corrections than toward those providing confirmations: the share of respondents expressing strong distrust roughly doubled, from about 10% for confirmatory fact checks to about 22% for debunking articles.

People also said they needed more information to know whether journalists debunking statements were telling the truth, compared with their assessment of journalists who were confirming claims.

In a second study, we presented marketing claims that ultimately proved to be true or false. For example, some participants read an article about a brand that said its cooking hacks would save time, but the hacks didn’t actually work. Others read an article about a brand whose cooking hacks turned out to be genuine.

Again, across several types of products, people thought they needed more evidence in order to believe articles pointing out falsehoods, and they reported distrusting correcting journalists more.

Why it matters

Correcting misinformation is notoriously difficult, as researchers and journalists have found out. The United States is also experiencing a decadeslong decline of trust in journalism. Fact-checking tries to help combat misinformation and disinformation, but our research suggests that there are limits to how much it helps. Providing a debunking might make journalists seem like they’re just being negative.

Our second study also explains a slice of pop culture: the backlash against someone who reveals the misdeeds of another. For example, if you read an article pointing out that a band lied about its origin story, you might notice a sub-controversy forming in the comments, with people angry that anyone was called out at all, even correctly. This is exactly what we’d expect if corrections are automatically scrutinized and distrusted by some people.

What’s next

Future work can explore how journalists can be transparent without undermining trust. It’s reasonable to assume that people will trust a journalist more if they explain how they came to a particular conclusion. However, according to our results, that’s not quite the case. Rather, trust is contingent on what the conclusion is.

People in our studies were quite trusting of journalists when they provided confirmations. And, certainly, people are sometimes fine with corrections, as when outlandish misinformation they already disbelieve is debunked. The challenge for journalists may be figuring out how to provide debunkings without seeming like a debunker.

The Research Brief is a short take on interesting academic work.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

