Misinformation isn’t just coming from your cranky uncle on Facebook

Opinion

Marcela Vieira

Minichiello writes The Sift newsletter for educators at the News Literacy Project.

I see misinformation all the time. Scrolling through Instagram, I saw a musician I follow sharing false posts about the Israel-Hamas war. Out to eat at a restaurant, a server making friendly small talk told me about true crime content she finds online, rattling off names of accounts that I later discovered were conspiracy-minded. A friend of mine finds the conspiracy theory site Infowars a delight. And there's my relative who started entertaining the idea that the world is flat after watching YouTube videos.

Misinformation affects everything from our health decisions to our personal relationships to business to, of course, democracy. It's easy to get angry when we're confronted with misinformation — that's what it's designed to do — but learning how to sort fact from fiction online, while also practicing empathy, will go a long way toward fixing the misinformation crisis.


And the News Literacy Project, where I now work after a career in journalism, can teach people how to identify credible news.

First, if you're fortunate enough not to live in a local news desert, get your information from your local newspaper, radio station or TV news program. Beware of online "news" sites that are really partisan propaganda: if they carry very few or no local stories but plenty of politically charged articles, that's a red flag. Follow news outlets that adhere to journalism standards and ethics, such as being transparent about corrections. Quality local news not only empowers us individually; studies show it's also good for democracy.

Second, when navigating difficult conversations where misinformation may come up, try practicing what the News Literacy Project calls "PEP": patience, empathy and persistence. You won't convince anyone of anything in one heated conversation, but if you listen, you can walk away with a better understanding of how they developed their beliefs.

“We don’t all think the same,” said one of my relatives, who refused to get a Covid-19 booster shot after seeing misinformation about how it might affect someone’s hair.

I responded: “I’m not saying we need to think the same thing. I’m saying we need to know what’s true before we agree or disagree.”

It's hard when someone you love is repeating falsehoods, but that doesn't mean they're unintelligent or a bad person. We're living in a system where social media platforms don't properly moderate content for misinformation and, in fact, incentivize its spread through outrage. Laws and regulations haven't caught up with how quickly artificial intelligence technologies, which can generate misleading images, videos and posts, are being developed.

Additionally, remember that none of us is exempt from falling for misinformation. It's embarrassing, but it's happened to me, too. During the early days of the pandemic, photos spread online of dolphins swimming in Venice canals. They seemed mesmerizing and magical. Of course, those photos later turned out to be misleading. And I felt like a fool — something to remember the next time I spar with a friend over misinformation.

It’s a systemic problem. We can do our part to counter it by seeking out credible sources of news. We can keep in mind that it’s easy to be fooled by falsehoods. And we can engage in tough, face-to-face conversations with the people we care about, but lead with empathy — not accusations.

One tactic that has helped me immensely — not just in countering misinformation I see online but also when talking through hot misinformation topics with loved ones — is to simply take ... a ... pause. This is actually a news literacy skill. Being news-literate means being able to identify credible information, which is often as simple as pausing to confirm whether something is true before sharing it online or in conversations.

Being news-literate also means seeking out credible news sources like your local TV station or paper. By practicing news literacy skills, we'll be better equipped to find trustworthy information and to engage in difficult conversations about conspiracy theories and misinformation.

During National News Literacy Week, which runs Jan. 22-26, you can learn how to push back on misinformation and empower yourself and your communities to seek quality, vetted information. Join us in our effort to build a national movement to advance the practice of news literacy throughout American society, creating better informed, more engaged and more empowered individuals — and ultimately a stronger democracy.

