
Outrage Over Accuracy: What the Los Angeles Protests Teach About Democracy Online

Opinion

A person looking at social media app icons on a phone. Matt Cardy/Getty Images

In Los Angeles this summer, immigration raids sparked days of street protests and a heavy government response — including curfews and the deployment of National Guard troops. But alongside the demonstrations came another, quieter battle: the fight over truth. Old protest videos resurfaced online as if they were new, AI-generated clips blurred the line between fact and fiction, and conspiracy theories about “paid actors” flooded social media feeds.

What played out in Los Angeles was not unique. It is the same dynamic Maria Ressa warned about when she accepted the Nobel Peace Prize in 2021. She described disinformation as an “invisible atomic bomb” — a destabilizing force that, like the bomb of 1945, demands new rules and institutions to contain its damage. After Hiroshima and Nagasaki, the world created the United Nations and a framework of international treaties to prevent nuclear catastrophe. Ressa argues that democracy faces a similar moment: just as the world built global safeguards for atomic power, it must now create a digital rule of law to protect the information systems that shape civic life.


Her analysis runs deeper still. Ressa often cites a 2018 MIT study showing that false news spreads “farther, faster, deeper, and more broadly” than the truth online — not because of bots, but because people are drawn to shock and novelty. What makes this more alarming, she argues, is that platforms profit from the distortion. As she put it, Russian bot armies and fake accounts generated “more engagement — and higher revenue,” turning disinformation into a business model.

The same incentives were visible in Los Angeles, where AI-generated protest clips and recycled footage spread quickly because outrage was rewarded more than accuracy. In the Philippines, Ressa documented how Facebook became the primary battleground for disinformation: coordinated networks of fake accounts pushed false narratives to silence journalists and smear critics, all while boosting platform engagement and ad revenue. Russian disinformation campaigns followed a similar logic, using bot armies and troll farms to flood social media with polarizing content, especially during elections abroad. In both cases — as in Los Angeles — truth had to compete with algorithms designed to reward virality and profit rather than accuracy.

However, in Los Angeles, fact-checkers and journalists worked quickly to trace clips back to their sources, local outlets published clear comparisons, and officials corrected false claims in real time. These actions didn’t erase the misinformation, but they showed that resilience is possible.

Still, relying only on journalists, nonprofits, or volunteers is not enough. The burden of defending truth should not fall on underfunded newsrooms or a handful of civic groups scrambling during crises. If democracy is to withstand the “invisible atomic bomb” of disinformation, these defenses must be institutionalized — built into the very framework of governance.

Other democracies offer lessons. In the European Union, for example, the Digital Services Act requires platforms to be more transparent about their algorithms and to respond quickly to harmful disinformation. During elections, EU regulators can require platforms to report on how they monitor and address manipulation campaigns, and they can impose fines for failures. While imperfect, the law shows what institutional accountability can look like: not ad-hoc firefighting, but clear rules backed by enforcement.

The U.S. has yet to take such comprehensive steps. But the experience of Los Angeles suggests why they matter. Without institutional rails, communities will be forced to fight disinformation ad hoc, crisis by crisis, while platforms continue to profit from the chaos. With them, we could shift from reactive fixes to a sustainable digital rule of law.

And there is reason for hope. Studies show that media literacy programs can help citizens spot falsehoods more accurately. Community fact-checking has helped reduce the spread of misinformation online. Local collaborations among journalists, educators, and civic groups are already laying the groundwork for a more resilient democracy. These efforts prove that Americans are not powerless in the face of disinformation.

Maria Ressa’s metaphor was stark, but her message was not despair. The atomic bomb analogy was also about response — about building new institutions to meet an undeniable threat. If Americans can build a digital rule of law with the same urgency, then the age of disinformation need not be democracy’s undoing. It could become the moment when democracy reinvents itself for the digital age.

Maria Eduarda Grill is a student from Brazil studying Global Affairs and Economics at the University of Notre Dame. She is a fellow with Common Ground Journalism and a researcher with the Kellogg Institute, where she studies digital governance and media freedom in Latin America.

The Fulcrum's Executive Editor, Hugo Balta, is an instructor with Common Ground Journalism. He is an accredited solutions journalism and Complicating the Narratives trainer.

