Outrage Over Accuracy: What the Los Angeles Protests Teach About Democracy Online

Opinion

A person looking at social media app icons on a phone. Matt Cardy/Getty Images

In Los Angeles this summer, immigration raids sparked days of street protests and a heavy government response — including curfews and the deployment of National Guard troops. But alongside the demonstrations came another, quieter battle: the fight over truth. Old protest videos resurfaced online as if they were new, AI-generated clips blurred the line between fact and fiction, and conspiracy theories about “paid actors” flooded social media feeds.

What played out in Los Angeles was not unique. It is the same dynamic Maria Ressa warned about when she accepted the Nobel Peace Prize in 2021. She described disinformation as an “invisible atomic bomb” — a destabilizing force that, like the bomb of 1945, demands new rules and institutions to contain its damage. After Hiroshima and Nagasaki, the world created the United Nations and a framework of international treaties to prevent nuclear catastrophe. Ressa argues that democracy faces a similar moment: just as the world built global safeguards for atomic power, we must now create a digital rule of law to safeguard the information systems that shape civic life.


Her analysis runs deeper still. Ressa often cites a 2018 MIT study showing that false news spreads “farther, faster, deeper, and more broadly” than the truth online — not because of bots, but because people are drawn to shock and novelty. What makes this more alarming, she argues, is that platforms profit from the distortion. As she put it, Russian bot armies and fake accounts generated “more engagement — and higher revenue,” turning disinformation into a business model.

The same incentives were visible in Los Angeles, where AI-generated protest clips and recycled footage spread quickly because outrage was rewarded more than accuracy. In the Philippines, Ressa documented how Facebook became the primary battleground for disinformation: coordinated networks of fake accounts pushed false narratives to silence journalists and smear critics, all while boosting platform engagement and ad revenue. Russian disinformation campaigns followed a similar logic, using bot armies and troll farms to flood social media with polarizing content, especially during elections abroad. In both cases — as in Los Angeles — truth had to fight against algorithms designed to reward virality and profit rather than accuracy.

In Los Angeles, however, fact-checkers and journalists worked quickly to trace clips back to their sources, local outlets published clear comparisons, and officials corrected false claims in real time. These actions didn’t erase the misinformation, but they showed that resilience is possible.

Still, relying only on journalists, nonprofits, or volunteers is not enough. The burden of defending truth should not fall on underfunded newsrooms or a handful of civic groups scrambling during crises. If democracy is to withstand the “invisible atomic bomb” of disinformation, these defenses must be institutionalized — built into the very framework of governance.

Other countries offer lessons. In the European Union, for example, the Digital Services Act requires platforms to be more transparent about algorithms and to respond quickly to harmful disinformation. During elections, EU regulators can require platforms to report on how they monitor and address manipulation campaigns and impose fines for failures. While imperfect, it shows what institutional accountability can look like: not ad-hoc firefighting, but clear rules backed by enforcement.

The U.S. has yet to take such comprehensive steps. But the experience of Los Angeles suggests why it matters. Without institutional rails, communities will be forced to fight disinformation piecemeal, crisis by crisis, while platforms continue to profit from the chaos. With them, we could shift from reactive fixes to a sustainable digital rule of law.

And there is reason for hope. Studies show that media literacy programs can help citizens spot falsehoods more accurately. Community fact-checking has helped reduce the spread of misinformation online. Local collaborations among journalists, educators, and civic groups are already laying the groundwork for a more resilient democracy. These efforts prove that Americans are not powerless in the face of disinformation.

Maria Ressa’s metaphor was stark, but her message was not despair. The atomic bomb analogy was also about response — about building new institutions to meet an undeniable threat. If Americans can build a digital rule of law with the same urgency, then the age of disinformation need not be democracy’s undoing. It could become the moment when democracy reinvents itself for the digital age.

Maria Eduarda Grill is a student from Brazil studying Global Affairs and Economics at the University of Notre Dame. She is a fellow with Common Ground Journalism and a researcher with the Kellogg Institute, where she studies digital governance and media freedom in Latin America.

The Fulcrum's Executive Editor, Hugo Balta, is an instructor with Common Ground Journalism. He is an accredited trainer in solutions journalism and complicating the narratives.


