
Dealing with false facts: How to correct online misinformation

Opinion

A comparison of an original and deepfake video of Facebook CEO Mark Zuckerberg.

Elyse Samuels/The Washington Post via Getty Images

Sanfilippo is an assistant professor in the School of Information Sciences at the University of Illinois Urbana-Champaign and book series editor for Cambridge Studies on Governing Knowledge Commons. She is a public voices fellow of The OpEd Project.

Deepfakes of celebrities and misinformation about public figures might not be new in 2024, but they are more common, and many people seem increasingly resigned to them as inevitable.

The problems posed by false online content extend far beyond public figures, affecting everyone, including young people.


In a recent press conference, New York Mayor Eric Adams emphasized that many people depend on platforms to fix these problems, but that parents, voters and policymakers need to take action. “These companies are well aware that negative, frightening and outrageous content generates continued engagement and greater revenue,” Adams said.

Recent efforts by Taylor Swift’s fans, coordinated via #ProtectTaylorSwift, to take down, bury and correct fake and obscene content about her offer a hopeful sign that something can be done about false and problematic content online.

Still, deepfakes (videos, photos and audio manipulated by artificial intelligence to make something look or sound real) and misinformation have drastically changed social media over the past decade, highlighting the challenges of content moderation and carrying serious implications for consumers, politics and public health.

At the same time, generative AI, with ChatGPT at the forefront, magnifies the scale of these problems, undermines the digital literacy skills recommended for scrutinizing online content and radically reshapes what appears on social media.

The transition from Twitter to X — which has 1.3 billion users — and the rise of TikTok — with 232 million downloads in 2023 — highlight how social media experiences have evolved as a result.

From colleagues at conferences explaining why they’ve left LinkedIn to students asking whether they really need to use it, people recognize that the quality of content on that platform (and others) has declined due to bots, AI and the incentives to produce more content.

LinkedIn has established itself as key to career development, yet some say it is not preserving the expectations of trustworthiness and legitimacy associated with professional networks, nor protecting contributors.

In some ways, the reverse is true: User data is being used to train LinkedIn Learning’s AI coaching, which is already being monetized as a “professional development” opportunity for paid LinkedIn Premium users.

Regulation of AI is needed, as well as enhanced consumer protection around technology. Users cannot meaningfully consent to use platforms and their ever-changing terms of service without transparency about what will happen with an individual’s engagement data and content.

Not everything can be solved by users. Market-driven regulation is failing us.

There need to be meaningful alternatives and the ability to opt out. Action can be as simple as individuals reporting content for moderation: when multiple people flag content for review, it is more likely to reach a human moderator, and research shows human moderators are key to effective content moderation, including removal and appropriate labeling.
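To make that mechanism concrete, here is a minimal sketch of flag-based escalation. The names and the three-report threshold are hypothetical; real platforms combine many signals and do not disclose how they tune them.

```python
from collections import defaultdict

# Hypothetical threshold: distinct reporters needed before human review.
ESCALATION_THRESHOLD = 3

class ModerationQueue:
    """Routes flagged content to human review once enough distinct users report it."""

    def __init__(self, threshold: int = ESCALATION_THRESHOLD):
        self.threshold = threshold
        self.reporters = defaultdict(set)  # content_id -> set of user_ids who flagged it
        self.review_queue: list[str] = []  # content_ids awaiting a human moderator

    def report(self, content_id: str, user_id: str) -> bool:
        """Record a flag; return True if the content is escalated to human review."""
        self.reporters[content_id].add(user_id)  # repeat flags by one user count once
        if (len(self.reporters[content_id]) >= self.threshold
                and content_id not in self.review_queue):
            self.review_queue.append(content_id)
            return True
        return False

# Example: three different users flag the same post, which escalates it.
queue = ModerationQueue()
for user in ("user_a", "user_b", "user_c"):
    queue.report("post-123", user)
print(queue.review_queue)  # ['post-123']
```

The point of counting distinct reporters, rather than raw reports, is that it takes several independent flags, not one angry user clicking repeatedly, to push content in front of a human reviewer.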

Collective action is also needed. Communities can address problems of false information by working together to report concerns and by using coordinated engagement to steer recommendation systems away from false and damaging content.

Professionals must also build trust with the communities they serve so that they can promote reliable sources and develop digital literacy around sources of misinformation and the ways AI promotes and generates it. Policymakers, meanwhile, must regulate social media more carefully.

Truth matters: to an informed electorate, to the safety of online spaces for children, to professional networks and to mental health. We cannot leave it to the companies that caused the problem to fix it.
