Dealing with false facts: How to correct online misinformation

A comparison of an original and deepfake video of Facebook CEO Mark Zuckerberg.

Elyse Samuels/The Washington Post via Getty Images

Sanfilippo is an assistant professor in the School of Information Sciences at the University of Illinois Urbana-Champaign and book series editor for Cambridge Studies on Governing Knowledge Commons. She is a public voices fellow of The OpEd Project.

Deepfakes of celebrities and misinformation about public figures may not be new in 2024, but they are more common, and many people seem increasingly resigned to them as inevitable.

The problems posed by false online content extend far beyond public figures, impacting everyone, including youth.


In a recent press conference, New York Mayor Eric Adams emphasized that many depend on platforms to fix these problems, but that parents, voters and policymakers need to take action. “These companies are well aware that negative, frightening and outrageous content generates continued engagement and greater revenue,” Adams said.

Recent efforts by Taylor Swift’s fans, coordinated via #ProtectTaylorSwift, to take down, bury and correct fake and obscene content about her offer a welcome and hopeful example that something can be done about false and problematic content online.

Still, deepfakes (videos, photos and audio manipulated by artificial intelligence to make something look or sound real) and misinformation have drastically changed social media over the past decade, highlighting the challenges of content moderation and serious implications for consumers, politics and public health.

At the same time, generative AI — with ChatGPT at the forefront — is changing the scale of these problems, challenging the digital literacy skills recommended for scrutinizing online content and radically reshaping the content on social media itself.

The transition from Twitter to X — which has 1.3 billion users — and the rise of TikTok — with 232 million downloads in 2023 — highlight how social media experiences have evolved as a result.

From colleagues at conferences explaining why they’ve left LinkedIn to students asking whether they really need to use it, people recognize that the quality of content on that platform (and others) has declined due to bots, AI and the incentives to produce ever more content.

LinkedIn has established itself as key to career development, yet some say it is not preserving expectations of trustworthiness and legitimacy associated with professional networks or protecting contributors.

In some ways, the reverse is true: User data is being used to train LinkedIn Learning’s AI coaching tools, which are already being monetized as a “professional development” opportunity for paid LinkedIn Premium users.

Regulation of AI is needed, as is enhanced consumer protection around technology. Users cannot meaningfully consent to platforms and their ever-changing terms of service without transparency about what will happen to an individual’s engagement data and content.

Not everything can be solved by users. Market-driven regulation is failing us.

There need to be meaningful alternatives and the ability to opt out. Action can be as simple as individuals reporting content for moderation. For example, when multiple people flag content for review, it is more likely to reach a human moderator — which research shows is key to effective content moderation, including removal and appropriate labeling.

Collective action is also needed. Communities can address false information by working together to report concerns and by using their engagement to steer recommendation systems away from false and damaging content.

Professionals must also build trust with the communities they serve, so that they can promote reliable sources and develop digital literacy around sources of misinformation and the ways AI promotes and generates it. Policymakers must also regulate social media more carefully.

Truth matters: to an informed electorate, to the safety of online spaces for children, to professional networks and to mental health. We cannot leave it to the companies that caused the problem to fix it.

Read More

The Importance of Being Media Literate


Information is constantly on our phones, and we receive notifications for almost everything happening in the world, which can be overwhelming for many. Information reaches us in an instant, and more often than we think, we don’t know exactly what we are reading.

We often don’t know whether the information we see is accurate or even makes sense. Media literacy goes beyond what we learn in school; it’s a skill that grows as we become more aware and critical of the information we consume.

Fox News’ Selective Silence: How Trump’s Worst Moments Vanish From Coverage

Last week, the ultraconservative news outlet Newsmax reached a $73 million settlement with the voting machine company Dominion, in essence admitting that it lied in its reporting that Dominion’s voting machines were used to “rig” or distort the 2020 presidential election. Not exactly shocking news, since five years later there is no credible evidence of any malfeasance in the 2020 election. For viewers of conservative media, such as Fox News, this might have shaken a fully embraced conspiracy theory. Except it didn’t, because those viewers haven’t seen it.

Many people have a hard time understanding why Trump enjoys so much support, given his outrageous statements and damaging public policy pursuits. Part of the answer lies in Fox News’ apparent censoring of stories that might be deemed negative to Trump. Over the past five years, I’ve tracked dozens of news stories that cast Donald Trump in a negative light, including statements by Trump himself that would make a rational person cringe. Yet Fox News has methodically censored these stories, conveying only the rosy news that draws its top ratings.


Liberty and the General Welfare in the Age of AI

If the means justified the ends, we’d still be operating under the Articles of Confederation. The Founders understood that the means — the governmental structure itself — must always serve the ends of liberty and prosperity. When the means no longer served those ends, they experimented with yet another design for their government, and they did not expect it to be the last.

The age of AI warrants asking whether the means still further the ends — specifically, individual liberty and collective prosperity. Both of those goals were top of mind for early Americans. They demanded the Bill of Rights to protect the former, and they identified the latter — namely, the general welfare — as the animating purpose of the government. Both goals are now being challenged by constitutional doctrines that do not align with AI development, or that even undermine it. A full review of those doctrines could fill a book (and perhaps one day it will). For now, however, I’m just going to raise two.


An illustration of AI chat boxes.

Getty Images, Andriy Onufriyenko

In Defense of ‘AI Mark’

Earlier this week, a member of the UK Parliament—Mark Sewards—released an AI tool (named “AI Mark”) to assist with constituent inquiries. The public response was rapid and rage-filled. Some people demanded that the member of Parliament (MP) forfeit part of his salary—he's doing less work, right? Others called for his resignation—they didn't vote for AI; they voted for him! Many more simply questioned his thinking—why on earth did he think outsourcing such sensitive tasks to AI would be greeted with applause?

He's not the only elected official under fire for AI use. The Prime Minister of Sweden, Ulf Kristersson, recently admitted to using AI to study various proposals before casting votes. Swedes, like the Brits, have bombarded Kristersson with howls of outrage.
