Dealing with false facts: How to correct online misinformation

Opinion

A comparison of an original and deepfake video of Facebook CEO Mark Zuckerberg.

Elyse Samuels/The Washington Post via Getty Images

Sanfilippo is an assistant professor in the School of Information Sciences at the University of Illinois Urbana-Champaign and book series editor for Cambridge Studies on Governing Knowledge Commons. She is a public voices fellow of The OpEd Project.

Deepfakes of celebrities and misinformation about public figures might not be new in 2024, but they are more common, and many people seem increasingly resigned to them as inevitable.

The problems posed by false online content extend far beyond public figures, impacting everyone, including youth.


In a recent press conference, New York Mayor Eric Adams emphasized that many people depend on platforms to fix these problems, but that parents, voters and policymakers need to take action. “These companies are well aware that negative, frightening and outrageous content generates continued engagement and greater revenue,” Adams said.

Recent efforts by Taylor Swift’s fans, coordinated via #ProtectTaylorSwift, to take down, bury and correct fake and obscene content about her offer a welcome and hopeful example that something can be done about false and problematic content online.

Still, deepfakes (videos, photos and audio manipulated by artificial intelligence to make something look or sound real) and misinformation have drastically changed social media over the past decade, highlighting the challenges of content moderation and serious implications for consumers, politics and public health.

At the same time, generative AI, with ChatGPT at the forefront, changes the scale of these problems, challenges even the digital literacy skills recommended for scrutinizing online content, and radically reshapes content on social media.

The transition from Twitter to X — which has 1.3 billion users — and the rise of TikTok — with 232 million downloads in 2023 — highlight how social media experiences have evolved as a result.

From colleagues at conferences discussing why they’ve left LinkedIn to students asking if they really need to use it, people recognize the decline in the quality of content on that platform (and others) due to bots, AI and the incentives to produce more content.

LinkedIn has established itself as key to career development, yet some say it is not preserving expectations of trustworthiness and legitimacy associated with professional networks or protecting contributors.

In some ways, the reverse is true: User data is being used to train LinkedIn Learning’s AI coaching with an expert lens that is already being monetized as a “professional development” opportunity for paid LinkedIn Premium users.

Regulation of AI is needed, as is enhanced consumer protection around technology. Users cannot meaningfully consent to use platforms and their ever-changing terms of service without transparency about what will happen with an individual’s engagement data and content.

Not everything can be solved by users. Market-driven regulation is failing us.

There need to be meaningful alternatives and the ability to opt out. Action can be as simple as individuals reporting content for moderation. For example, when multiple people flag content for review, it is more likely to reach a human moderator, who research shows is key to effective content moderation, including removal and appropriate labeling.

Collective action is also needed. Communities can address problems of false information by working together to report concerns and collaboratively engineer recommendation systems via engagement to deprioritize false and damaging content.

Professionals must also build trust with the communities they serve, so that they can promote reliable sources and develop digital literacy around sources of misinformation and the ways AI promotes and generates it. Policymakers must also regulate social media more carefully.

Truth matters: to an informed electorate, to the safety of online spaces for children, to professional networks and to mental health. We cannot leave it up to the companies that caused the problem to fix it.


Read More

Trump Signs Defense Bill Prohibiting China-Based Engineers in Pentagon IT Work

President Donald Trump signed into law this month a measure that prohibits anyone based in China and other adversarial countries from accessing the Pentagon’s cloud computing systems.

The ban, which is tucked inside the $900 billion defense policy law, was enacted in response to a ProPublica investigation this year that exposed how Microsoft used China-based engineers to service the Defense Department’s computer systems for nearly a decade — a practice that left some of the country’s most sensitive data vulnerable to hacking from its leading cyber adversary.

Why Workplace Wellbeing AI Needs a New Ethics of Consent

AI-powered wellness tools promise care at work, but raise serious questions about consent, surveillance, and employee autonomy.

Across the U.S. and globally, employers—including corporations, healthcare systems, universities, and nonprofits—are increasing investment in worker well-being. The global corporate wellness market reached $53.5 billion in sales in 2024, with North America leading adoption. Corporate wellness programs now use AI to monitor stress, track burnout risk, or recommend personalized interventions.

Vendors offering AI-enabled well-being platforms, chatbots, and stress-tracking tools are rapidly expanding. Chatbots such as Woebot and Wysa are increasingly integrated into workplace wellness programs.


Meta Undermining Trust but Verify through Paid Links

Facebook is testing limits on shared external links, making them a paid feature through its Meta Verified program, which costs $14.99 per month.

This change solidifies that verification badges are now meaningless signifiers. Yet it wasn’t always so; the verified internet was built to support participation and trust. Beginning with Twitter’s verification program, launched in 2009, a checkmark next to a username indicated that an account had been verified as representing a notable person or the official account of a business. We could believe that an elected official or a brand name was who they said they were online. When Twitter Blue, and later X Premium, began to support paid blue checkmarks in November of 2022, the visual identification of verification became deceptive. Think of fake Eli Lilly accounts posting about free insulin and impersonation accounts of Elon Musk himself.

This week’s move by Meta echoes changes at Twitter/X, despite significant evidence that those changes leave information quality and user experience worse than before. Despite what Facebook says, all a badge now tells anyone is that the account paid.

Blame AI or Build With AI? Only One Approach Creates Jobs

Rather than blame AI for young Americans struggling to find work, we need to build: build new educational institutions, new retraining and upskilling programs, and, most importantly, new firms.

We’re failing young Americans. Many of them are struggling to find work. Unemployment among 16- to 24-year-olds topped 10.5% in August. Even among those who do find a job, many are settling for lower-paying roles. More than 50% of college grads are underemployed. To make matters worse, the path to a more stable, lucrative career is seemingly up in the air. High school grads in their twenties find jobs at nearly the same rate as those with four-year degrees.

We have two options: blame or build. The first involves blaming AI, as if this new technology is entirely responsible for the current economic malaise facing Gen Z. This course of action involves slowing or even stopping AI adoption. For example, there are so-called robot taxes; the thinking goes that by placing financial penalties on firms that lean into AI, more roles will be left for Gen Z and workers in general. Then there’s the idea of banning or limiting the use of AI in hiring and firing decisions. Applicants who have struggled to find work suggest that increased use of AI may be partially at fault. Others have called for giving workers a greater say in whether and to what extent their firm uses AI. This may help firms find ways to integrate AI in a way that augments workers rather than replaces them.
