Opinion

Fighting the Liar’s Dividend: A Toolkit for Truth in the Digital Age

In 2023, the RAND Corporation released a study on a phenomenon known as "Truth Decay," in which facts become blurred with opinion and spin. The deeper crisis is that people are beginning to doubt everything, including authentic material.

The Stakes: When Nothing Can Be Trusted

In January 2024, days before the New Hampshire primary, a fake robocall mimicking President Biden's voice urged Democratic voters to skip the vote. According to AP News, it was an instance of AI-enabled election interference. Within hours, it had reached thousands of voters. Each fake like this erodes confidence in the very possibility of knowing what is real.

The RAND Corporation refers to this phenomenon as "Truth Decay," where facts become blurred with opinion and spin. Its 2023 research warns that Truth Decay threatens U.S. national security by weakening military readiness and eroding credibility with allies. But the deeper crisis isn't that people believe every fake—it's that they doubt everything, including authentic material.


What's Really Dividing Us: The Liar's Dividend

Here's what we're missing in the AI deepfake debate: researchers found that "cheap fakes"—misleading cuts, mislabeled clips, or altered speed—were used seven times more often than AI deepfakes in 2024. AI's real danger is the "liar's dividend": the erosion of confidence that any evidence can be trusted.

This loss of shared reality fractures society: climate action stalls when manufactured doubt overwhelms the scientific consensus. Democratic institutions weaken when citizens question basic election facts. Public health suffers when misinformation spreads faster than accurate guidance. Kathleen Hall Jamieson, director of the Annenberg Public Policy Center and co-founder of FactCheck.org, warns that what's at stake is not only accuracy but the very idea that facts matter.

The Psychology Behind Our Vulnerability

Why are we so susceptible? As Daniel Kahneman explained in Thinking, Fast and Slow, our brains default to "System 1" thinking—fast, instinctive, and emotional. This is precisely what disinformation targets. AI-driven lies are designed to trigger immediate emotional reactions—fear, anger, outrage—that bypass our slower, more careful "System 2" thinking. When we're in System 1 mode, we share first and verify later, if at all.

Most of us don't have time for the careful verification that democracy requires. We're sun-dazed and expensively caffeinated, as one democracy researcher puts it, insulated by privilege from the immediate consequences of misinformation—until, suddenly, we're not.

The Verification Toolkit: Four Moves That Work

Digital literacy expert Mike Caulfield developed the SIFT method that anyone can use:

  • Stop before sharing
  • Investigate who is behind the information
  • Find better coverage
  • Trace claims to their origin

Professional fact-checkers practice lateral reading—opening multiple tabs to see what other outlets say about a claim. Tools like AllSides and Ground News help break echo chambers by showing how stories are covered across the political spectrum.

Browser extensions and bias-rating sites such as NewsGuard and Media Bias/Fact Check provide additional context, while emerging provenance standards like C2PA aim to certify media authenticity at the source.

Beyond Tools: Mental Hygiene for the Digital Age

But technical solutions aren't enough. We need better practices to address the emotional impacts of information overload:

Time-boxing media consumption—checking news at set intervals rather than continuously—prevents artificial urgency while improving comprehension.

Diversifying inputs—reading across disciplines, listening to long-form debates such as those at Open to Debate, and seeking perspectives that challenge assumptions—helps break the echo chamber.

The 24-hour rule—giving claims time before reacting or sharing—prevents emotional manipulation.

Living with uncertainty—perfect information is impossible, but reasonable decisions can still be made with incomplete data. The American Psychological Association documents how unfiltered media exposure contributes to stress and decision fatigue.

Individual and Institutional Responsibility

Cynics argue that personal verification is futile against industrial-scale disinformation. They're half-right—individuals can't solve this alone. But individual action still creates collective defenses when combined with institutional responsibility.

Democracy requires both. Individuals must take responsibility for thoughtful engagement with information, especially when the stakes are high or before sharing widely. Institutions, from schools and newsrooms to agencies and workplaces, must treat the risk of misinformation with the same seriousness as cybersecurity.

Communities with strong media literacy programs and diverse information diets tend to be more resistant to manipulation. We need to cultivate a culture where truth-seeking is valued and where we collectively reject the amplification of blatant falsehoods.

The tools exist. The question is whether we'll use them when democracy needs us to.

Democracy requires citizens who can navigate complexity, not retreat from it. Protecting information integrity is now as essential to the survival of democracy as safeguarding elections themselves.

What's Next: Three Immediate Actions

  1. For individuals: Use verification and bias-rating tools (AllSides, Ground News, NewsGuard), practice the SIFT method, try lateral reading, and adopt the mental hygiene practices above, from time-boxing news and the 24-hour rule to living with uncertainty.
  2. For institutions: Implement media literacy programs with the same rigor as cybersecurity training.
  3. For communities: Support local journalism and fact-checking initiatives that serve as shared information infrastructure.

The stakes couldn't be higher. In an age when anyone can manufacture convincing lies, our democracy depends on citizens who choose the harder path of verification over the easier path of confirmation bias.

Edward Saltzberg is the Executive Director of the Security and Sustainability Forum, writes the Stability Brief, and leads a professional education program at George Washington University.


Read More

AI has the potential to transform education, mental health, and accessibility—but only if society actively shapes its use. Explore how community-driven norms, better data, and open experimentation can unlock better AI.

Build Better AI

Something just about all of us agree on: we want better AI. Whatever your current perspective on AI, it is, like any other tool, capable of unleashing human flourishing. There is progress to be made with AI that we should all applaud and aim to make happen as soon as possible.

There are kids in rural communities who stand to benefit from AI tutors. There are visually impaired individuals who can more easily navigate the world with AI wearables. There are people struggling with mental health issues who lack access to therapists and need guidance in trying moments. A key barrier to leveraging AI "for good" is our imagination, because in many domains we've become accustomed to an unacceptable status quo. That's the real comparison: the alternative to AI isn't a set of well-functioning systems that already operate efficiently and effectively for everyone.


An urgent look at the risks of unregulated artificial intelligence—from job loss and environmental strain to national security threats—and the growing political battle to regulate AI in the United States.

AI Has Put Humanity on the Ballot

AI may not be the only existential threat out there, but it is coming for us the fastest. When I started law school in 2022, AI could barely handle basic math, but by graduation, it could pass the bar exam. Instead of taking the bar myself, I rolled immediately into a Master of Laws in Global Business Law at Columbia, where I took classes like Regulation of the Digital Economy and Applied AI in Legal Practice. By the end of the program, managing partners were comparing using AI to working with a team of associates; the CEO of Anthropic is now warning that it will be more capable than everyone in less than two years.

AI is dangerous in ways we are just beginning to see. The data centers that power AI require vast amounts of water to keep their servers cool, yet two-thirds of them sit in places already facing high water stress, with researchers estimating that water needs could grow from 60 billion liters in 2022 to as much as 275 billion liters by 2028. By then, data centers’ share of U.S. electricity consumption could nearly triple.


A lawsuit against xAI over AI-generated deepfakes targeting teenage girls exposes a growing crisis in schools. As laws struggle to keep up, this story explores AI accountability, teen safety, and what educators and parents must do now.

Deepfakes: The New Face of Cyberbullying and Why Parents, Schools, and Lawmakers Must Act

As a former teacher who worked in a high school when Snapchat launched, I witnessed the birth of sexting and its impact on teens. I recall asking a parent whether he was checking his daughter’s phone for inappropriate messages. His response was, “sometimes you just don’t want to know.” But the federal lawsuit filed last week against Elon Musk's xAI has put a national spotlight on AI-generated deepfakes and the teenage girls they target. Parents and teachers can’t ignore the crisis inside our schools.

AI Companies Built the Tool. The Grok Lawsuit Says They Own the Damage.

Whether or not the theory of French prosecutors is true (that Elon Musk deliberately allowed the sexualized-image controversy to grow in order to drive up activity on the platform and boost the company’s valuation), the underlying calculus is familiar: when a company decides to build a tool, knows it can be weaponized, and chooses to release it anyway, it is making a risk-based bet that it can act without consequence. The Grok lawsuit could make these kinds of business decisions much more costly.


Amazon’s court loss over Just Walk Out highlights a deeper issue: employers are increasingly collecting workers’ biometric data without meaningful consent. Explore the growing conflict between workplace surveillance, privacy rights, and outdated U.S. laws.

The Quiet Rise of Employee Surveillance

Amazon’s loss in court over its attempt to shield the source code behind its Just Walk Out technology is a small win for shoppers, but the bigger story is how employers are quietly collecting biometric data from their own workers.

From factories to Fortune 500 companies, employers are demanding fingerprints, palmprints, retinal scans, facial scans, or even voice prints. These biometric technologies are eroding the boundary between workplace oversight and employee autonomy, often without consent or meaningful regulation.
