Dealing with false facts: How to correct online misinformation


A comparison of an original and deepfake video of Facebook CEO Mark Zuckerberg.

Elyse Samuels/The Washington Post via Getty Images

Sanfilippo is an assistant professor in the School of Information Sciences at the University of Illinois Urbana-Champaign and book series editor for Cambridge Studies on Governing Knowledge Commons. She is a public voices fellow of The OpEd Project.

Deepfakes of celebrities and misinformation about public figures are not new in 2024, but they are more common, and many people seem increasingly resigned to them as inevitable.

The problems posed by false online content extend far beyond public figures, impacting everyone, including youth.


At a recent press conference, New York Mayor Eric Adams emphasized that many people depend on platforms to fix these problems, but that parents, voters and policymakers need to take action. “These companies are well aware that negative, frightening and outrageous content generates continued engagement and greater revenue,” Adams said.

Recent efforts by Taylor Swift’s fans, coordinated via #ProtectTaylorSwift, to take down, bury, and correct fake and obscene content about her are a welcome and hopeful story about the ability to do something about false and problematic content online.

Still, deepfakes (videos, photos and audio manipulated by artificial intelligence to make something look or sound real) and misinformation have drastically changed social media over the past decade, highlighting the challenges of content moderation and serious implications for consumers, politics and public health.


At the same time, generative AI, with ChatGPT at the forefront, is changing the scale of these problems, radically reshaping content on social media and even challenging the digital literacy skills recommended for scrutinizing online content.

The transition from Twitter to X, which reportedly has 1.3 billion users, and the rise of TikTok, with 232 million downloads in 2023, highlight how social media experiences have evolved as a result.

People recognize the decline in the quality of content on LinkedIn (and other platforms) due to bots, AI and the incentives to produce more content: colleagues at conferences discuss why they’ve left the platform, and students ask whether they really need to use it.

LinkedIn has established itself as key to career development, yet some say it is not preserving expectations of trustworthiness and legitimacy associated with professional networks or protecting contributors.

In some ways, the reverse is true: user data is being used to train LinkedIn Learning’s AI coaching with an expert lens, which is already being monetized as a “professional development” opportunity for paid LinkedIn Premium users.

Regulation of AI is needed, as is enhanced consumer protection around technology. Users cannot meaningfully consent to platforms and their ever-changing terms of service without transparency about what will happen with an individual’s engagement data and content.

Not everything can be solved by users. Market-driven regulation is failing us.

There need to be meaningful alternatives and the ability to opt out. Action can be as simple as individuals reporting content for moderation. When multiple people flag content for review, it is more likely to reach a human moderator, who research shows is key to effective content moderation, including removal and appropriate labeling.

Collective action is also needed. Communities can address problems of false information by working together to report concerns and collaboratively engineer recommendation systems via engagement to deprioritize false and damaging content.

Professionals must also build trust with the communities they serve, so that they can promote reliable sources and develop digital literacy around sources of misinformation and the ways AI promotes and generates it. Policymakers must also regulate social media more carefully.

Truth matters: to an informed electorate, to the safety of online spaces for children, to professional networks and to mental health. We cannot leave it to the companies that caused the problem to fix it.
