Dealing with false facts: How to correct online misinformation

A comparison of an original and deepfake video of Facebook CEO Mark Zuckerberg.

Elyse Samuels/The Washington Post via Getty Images

Sanfilippo is an assistant professor in the School of Information Sciences at the University of Illinois Urbana-Champaign and book series editor for Cambridge Studies on Governing Knowledge Commons. She is a public voices fellow of The OpEd Project.

Deepfakes of celebrities and misinformation about public figures might not be new in 2024, but they are more common, and many people seem increasingly resigned to their inevitability.

The problems posed by false online content extend far beyond public figures, impacting everyone, including youth.


In a recent press conference, New York Mayor Eric Adams emphasized that while many depend on platforms to fix these problems, parents, voters and policymakers also need to take action. “These companies are well aware that negative, frightening and outrageous content generates continued engagement and greater revenue,” Adams said.

Recent efforts by Taylor Swift’s fans, coordinated via #ProtectTaylorSwift, to take down, bury and correct fake and obscene content about her offer a welcome and hopeful example that something can be done about false and problematic content online.

Still, deepfakes (videos, photos and audio manipulated by artificial intelligence to make something look or sound real) and misinformation have drastically changed social media over the past decade, highlighting the challenges of content moderation and serious implications for consumers, politics and public health.


At the same time, generative AI — with ChatGPT at the forefront — changes the scale of these problems, challenges the digital literacy skills recommended for scrutinizing online content and radically reshapes the content on social media.

The transition from Twitter to X (which has 1.3 billion users) and the rise of TikTok (232 million downloads in 2023) highlight how social media experiences have evolved as a result.

From colleagues at conferences explaining why they’ve left LinkedIn to students asking whether they really need to use it, people recognize that the quality of content on that platform (and others) has declined due to bots, AI and the incentives to produce ever more content.

LinkedIn has established itself as key to career development, yet some say it is failing to preserve the expectations of trustworthiness and legitimacy associated with professional networks, or to protect contributors.

In some ways, the reverse is true: User data is being used to train LinkedIn Learning’s AI coaching with an expert lens that is already being monetized as a “professional development” opportunity for paid LinkedIn Premium users.

Regulation of AI is needed, as is enhanced consumer protection around technology. Users cannot meaningfully consent to platforms and their ever-changing terms of service without transparency about what will happen with an individual’s engagement data and content.

Not everything can be solved by users. Market-driven regulation is failing us.

There need to be meaningful alternatives and the ability to opt out. Taking action can be as simple as individuals reporting content for moderation. For example, when multiple people flag content for review, it is more likely to reach a human moderator, and research shows human moderators are key to effective content moderation, including removal and appropriate labeling.
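
To make that flagging dynamic concrete, here is a minimal Python sketch of threshold-based escalation, assuming a hypothetical platform where a post reaches a human review queue once enough independent users flag it. The ESCALATION_THRESHOLD value and the report_content helper are illustrative assumptions, not any real platform's moderation API.

```python
from collections import defaultdict

# Hypothetical escalation threshold; real platforms tune this value
# and may also weigh flagger reputation. Illustrative only.
ESCALATION_THRESHOLD = 3

flag_counts = defaultdict(int)   # post_id -> number of user flags
human_review_queue = []          # posts awaiting a human moderator

def report_content(post_id: str) -> None:
    """Record one user flag; escalate once enough users agree."""
    flag_counts[post_id] += 1
    if flag_counts[post_id] == ESCALATION_THRESHOLD:
        # Multiple independent flags route the post to a human
        # moderator, who can remove or label it.
        human_review_queue.append(post_id)

# Three separate users flag the same post, triggering human review.
for _ in range(3):
    report_content("post-123")

print(human_review_queue)  # ['post-123']
```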

Collective action is also needed. Communities can address the problem of false information by working together to report concerns and by collaboratively steering recommendation systems, through their engagement, to deprioritize false and damaging content.
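
As a companion sketch, here is one hypothetical way a feed-ranking function could deprioritize heavily flagged content: engagement still drives the score, but community flags apply a growing penalty. The FLAG_PENALTY weight and the Post structure are assumptions for illustration; production recommendation systems are far more complex.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagement_score: float  # likes, shares, watch time, etc.
    community_flags: int     # reports filed by users

# Hypothetical penalty weight; chosen only to show the effect.
FLAG_PENALTY = 0.5

def ranking_score(post: Post) -> float:
    """Divide engagement by a penalty that grows with community flags."""
    return post.engagement_score / (1 + FLAG_PENALTY * post.community_flags)

feed = [
    Post("viral-but-false", engagement_score=90.0, community_flags=8),
    Post("accurate-report", engagement_score=60.0, community_flags=0),
]
feed.sort(key=ranking_score, reverse=True)

# Despite higher raw engagement, the heavily flagged post sinks:
print([p.post_id for p in feed])  # ['accurate-report', 'viral-but-false']
```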

Professionals must also build trust with the communities they serve so that they can promote reliable sources and develop digital literacy around sources of misinformation and the ways AI generates and promotes it. Policymakers, for their part, must regulate social media more carefully.

Truth matters: to an informed electorate, to the safety of online spaces for children, to the integrity of professional networks and to mental health. We cannot leave it up to the companies that caused the problem to fix it.

Read More

A sign reading “LOVE every vote” fell to the ground outside the Pennsylvania Convention Center, the central ballot-counting facility in Philadelphia, on Nov. 5, 2020.

Bastiaan Slabbers/NurPhoto via Getty Images

Election experts in Pennsylvania expect quicker results than in 2020

Kickols is the communications manager for the Election Reformers Network.

Several election law authorities, elected officials and election administration experts came together recently to discuss potential mail-in ballot counting delays, the challenges of reporting on inaccurate fraud claims, and other election dynamics on the horizon in Pennsylvania. And yet they had a positive message: The Keystone State is well-positioned to count ballots faster this fall.

The discussion took place during an online event with media hosted by the Election Overtime Project, which supports journalists in their coverage of close and contested elections. Election Overtime is an initiative of the Election Reformers Network.

Spectators look at Tesla's Core Technology Optimus humanoid robot at a conference in Shanghai, China, in September.

CFOTO/Future Publishing via Getty Images

Rainy day fund would help people who lose their jobs thanks to AI

Frazier is an assistant professor at the Crump College of Law at St. Thomas University and a Tarbell fellow.

Artificial intelligence will eliminate jobs.

Companies may not need as many workers as AI increases productivity. Others may simply be swapped out for automated systems. Call it what you want — displacement, replacement or elimination — but the outcome is the same: stagnant, struggling communities. The open question is whether we will learn from mistakes. Will we proactively take steps to support the communities most likely to bear the cost of “innovation”?

Doctor using AI technology
Akarapong Chairean/Getty Images

What's next for the consumer revolution in health care?

Pearl, the author of “ChatGPT, MD,” teaches at both the Stanford University School of Medicine and the Stanford Graduate School of Business. He is a former CEO of The Permanente Medical Group.

For years, patients have wondered why health care can’t be as seamless as other services in their lives. They can book flights or shop for groceries with a few clicks, yet they still need to take time off work and drive to the doctor’s office for routine care.

Two advances are now changing this outdated model and ushering in a new era of health care consumerism. With at-home diagnostics and generative artificial intelligence, patients are beginning to take charge of their health in ways previously unimaginable.

Close-up of boy looking at his phone in the dark
Anastasiia Sienotova/Getty Images

Reality bytes: Kids confuse the real world with the screen world

Patel is an executive producer/director, the creator of “ConnectEffect” and a Builders movement partner.

Doesn’t it feel like summer break just began? Yet here we are again. Fall’s arrival means kids have settled into a new school year, with new teachers, new clothes and a new “attitude,” as parents and kids alike try to start on the right foot.

Yet it’s hard for any of us to find footing in an increasingly polarized and isolated world. The entire nation is grappling with a rising tide of mental health concerns — including the continually increasing alienation and loneliness in children — and parents are struggling to foster real human connection for their kids in the real world. The battle to minimize screen time is certainly one approach. But in a world that is based on screens, apps and social media, is it a battle that realistically can be won?
