
Opinion

The Misinformation We’re Missing: Why Real Videos Can Be More Dangerous Than Fake Ones


Recently, videos circulated online that appeared to show Los Angeles engulfed in chaos: Marines clashing with protesters, cars ablaze, pallets of bricks staged for violence. The implication was clear: the city had been overtaken by insurrectionists.

The reality was far more contained. Much of the footage was old, unrelated, or misrepresented outright. A photo from a Malaysian construction site became “evidence” of a Soros-backed plot. Even a years-old video of burning police cars resurfaced with a new, false label.


This is the oldest trick in the misinformation playbook, and we see it in almost every major breaking news cycle: use what’s real to distort what’s true. Today, it’s happening faster, fooling more people, and making verification more critical than ever. In our newsroom, we call it the context gap: the space between what a video depicts and what a viewer is led to believe. It’s widening, and with it, public trust is eroding.

Authentic, yet misleading

Many assume misinformation requires special effects or technical sophistication. In reality, much of it requires only timing, intent, and a caption.

Take the footage of Ukrainian President Volodymyr Zelensky signing artillery shells. The video was genuine, filmed at a U.S. factory during a 2024 visit. It later resurfaced with captions falsely claiming the shells were destined for Israel. No editing was necessary. The clip looked real because it was. The framing alone changed the meaning.

This is what makes miscontextualized video so effective. It exploits our instinct to trust what we can see. Viewers may never question the footage, let alone realize they’ve been misled. In some cases, the false framing is intentional; that is disinformation. In others, the content is shared by people who believe it to be true; that is misinformation. Whether the deception is deliberate or accidental, the impact is the same: a public less certain of what to trust.

The real danger is not just deception; it’s assumption. When authentic content is misused without warning, it reinforces false narratives while bypassing skepticism entirely. The result isn’t just confusion. It’s confidence in the wrong conclusion.

This kind of manipulation shapes perception in ways that are hard to reverse. Even when corrections are issued later, the original framing often sticks, driving division, fueling misinformation cycles, and distorting how people understand events around the world.

Countering that kind of influence requires more than detection software or fact-checking. It requires a shift in mindset: a habit of slowing down, asking questions, and interrogating the frame, not just the footage. In journalism we call that constructive skepticism, and right now we need more of it beyond the newsroom.

Journalistic habits, public value

Journalists are trained to question, verify, and triangulate. Verification is not simply a task; it is a mindset. In our newsroom, this means identifying the original upload, analyzing metadata, and confirming time and place through visual context.
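For readers curious what one of those steps looks like in practice, here is a minimal sketch in Python of pulling capture-time metadata from an image file. It is an illustration, not our newsroom’s actual tooling: the Pillow library, the file name, and the fields shown are assumptions chosen for brevity, and most social platforms strip this metadata on upload, so it mainly helps with original files obtained directly from a source.

    # Minimal sketch: read EXIF metadata that can help date and
    # attribute an image. The file name is hypothetical; real
    # verification pairs this with reverse image search and
    # visual checks of time and place.
    from PIL import Image, ExifTags

    def capture_metadata(path):
        """Return basic EXIF fields useful for dating and attributing an image."""
        exif = Image.open(path).getexif()
        named = {ExifTags.TAGS.get(tag_id, tag_id): value
                 for tag_id, value in exif.items()}
        # The original capture time lives in a sub-directory (IFD) of the EXIF data.
        named["DateTimeOriginal"] = exif.get_ifd(ExifTags.IFD.Exif).get(0x9003)
        return named

    meta = capture_metadata("frame_from_clip.jpg")  # hypothetical path
    for field in ("DateTimeOriginal", "DateTime", "Make", "Model"):
        print(field, "->", meta.get(field))

A missing capture date proves nothing on its own, but a date that contradicts the caption is exactly the kind of mismatch this workflow is built to surface.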

Central to that process is skepticism, not as cynicism, but as discipline. Journalistic skepticism means refusing to take visual evidence at face value. It means asking who benefits, what might be missing just outside the frame, and whether a clip has appeared before under a different guise. It’s not about doubting everything. It’s about demanding enough proof to support trust. These practices may be out of reach for most people. Yet the principles behind them are not.

Skepticism, when grounded in evidence, is one of journalism’s most valuable tools. Applied more widely, it can help the public navigate the flood of video and imagery without falling into conspiracy or confusion.

Anyone can adopt a more careful, skeptical approach by asking a few basic questions: Does this look like what it claims to be? Is the source credible or recognizable? Have I seen this before in a different context? These simple checks do not require technical expertise. A moment of hesitation, scaled across a viewing public, could have changed how the Los Angeles or Zelensky clips were received and shared.

We do not need every viewer to become a verification expert. What we need is a shift in mindset: toward curiosity, caution, and context.

The context gap is widening

Journalists must continue to treat visual content as seriously as any other source, verifying not just whether something is real, but when and where it was captured, who posted it, and whether it is being presented truthfully.

However, the responsibility does not end there.

Platforms also have a role to play. When old or previously viral footage resurfaces during breaking news, platforms should add the necessary context proactively rather than relying on users to act as de facto fact-checkers. Many major platforms already run content-detection systems to catch copyright infringement; the same matching technology could flag familiar footage when it resurfaces under a new claim.
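The underlying matching technique is not exotic. As a rough illustration only, and not any platform’s actual system, here is a tiny perceptual-hash check in Python; the open-source imagehash library, the file names, and the distance threshold are all assumptions made for the sketch.

    # Illustration: a perceptual hash fingerprints an image by its
    # visual content, so the same frame re-uploaded with a new
    # caption still matches, unlike an exact file checksum.
    from PIL import Image
    import imagehash

    def matches_known_footage(frame_path, known_hashes, max_distance=8):
        """True if a frame is visually close to footage seen before."""
        h = imagehash.phash(Image.open(frame_path))
        return any(h - known <= max_distance for known in known_hashes)

    # Hypothetical archive holding a frame from one previously verified clip.
    known = [imagehash.phash(Image.open("police_cars_2020_frame.jpg"))]
    print(matches_known_footage("viral_today_frame.jpg", known))

A system like this would not judge truth on its own; it would simply tell a platform, or a viewer, that this footage has circulated before, which is often all the context a viewer needs.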

The public must also be equipped to pause, question, and seek clarity. Educators and institutions can help foster these habits of verification alongside broader digital access. One of the most overlooked tools in fighting misinformation is not software but curriculum: training people to interpret the media landscape around them.

We live in an age when anyone can broadcast to the world, with algorithms built to capture and keep attention regardless of veracity or intent. The context gap is not narrowing. It is widening. Every swipe, share, and caption now plays a role in shaping how we see the world—and how others see it too. Slowing down to ask “why this, why now?” may be one of the most powerful tools we have.

James Law is the editor-in-chief at Storyful.
