Outrage Over Accuracy: What the Los Angeles Protests Teach About Democracy Online

Opinion

A different take on social media and democracy. (Photo: Matt Cardy/Getty Images)

In Los Angeles this summer, immigration raids sparked days of street protests and a heavy government response — including curfews and the deployment of National Guard troops. But alongside the demonstrations came another, quieter battle: the fight over truth. Old protest videos resurfaced online as if they were new, AI-generated clips blurred the line between fact and fiction, and conspiracy theories about “paid actors” flooded social media feeds.

What played out in Los Angeles was not unique. It is the same dynamic Maria Ressa warned about when she accepted the Nobel Peace Prize in 2021. She described disinformation as an “invisible atomic bomb” — a destabilizing force that, like the bomb of 1945, demands new rules and institutions to contain its damage. After Hiroshima and Nagasaki, the world created the United Nations and a framework of international treaties to prevent nuclear catastrophe. Ressa argues that democracy faces a similar moment now: just as we built global safeguards for atomic power, we must now create a digital rule of law to safeguard the information systems that shape civic life.

Her analysis runs deeper still. Ressa often cites a 2018 MIT study showing that false news spreads “farther, faster, deeper, and more broadly” than the truth online — not because of bots, but because people are drawn to shock and novelty. What makes this more alarming, she argues, is that platforms profit from the distortion. As she put it, Russian bot armies and fake accounts generated “more engagement — and higher revenue,” turning disinformation into a business model.

The same incentives were visible in Los Angeles, where AI-generated protest clips and recycled footage spread quickly because outrage was rewarded more than accuracy. In the Philippines, Maria Ressa documented how Facebook became the primary battleground for disinformation: coordinated networks of fake accounts pushed false narratives to silence journalists and smear critics, all while boosting platform engagement and ad revenue. Russian disinformation campaigns followed a similar logic, using bot armies and troll farms to flood social media with polarizing content, especially during elections abroad. In both cases — as in Los Angeles — truth had to fight against algorithms designed to reward virality and profit, rather than accuracy.

However, in Los Angeles, fact-checkers and journalists worked quickly to trace clips back to their sources, local outlets published clear comparisons, and officials corrected false claims in real time. These actions didn’t erase the misinformation, but they showed that resilience is possible.

Still, relying only on journalists, nonprofits, or volunteers is not enough. The burden of defending truth should not fall on underfunded newsrooms or a handful of civic groups scrambling during crises. If democracy is to withstand the “invisible atomic bomb” of disinformation, these defenses must be institutionalized — built into the very framework of governance.

Other countries offer lessons. In the European Union, for example, the Digital Services Act requires platforms to be more transparent about algorithms and to respond quickly to harmful disinformation. During elections, EU regulators can require platforms to report on how they monitor and address manipulation campaigns and impose fines for failures. While imperfect, it shows what institutional accountability can look like: not ad-hoc firefighting, but clear rules backed by enforcement.

The U.S. has yet to take such comprehensive steps. But the experience of Los Angeles suggests why it matters. Without institutional rails, communities will be forced to fight disinformation piecemeal, reacting slowly while platforms continue to profit from the chaos. With them, we could shift from reactive fixes to a sustainable digital rule of law.

And there is reason for hope. Studies show that media literacy programs can help citizens spot falsehoods more accurately. Community fact-checking has helped reduce the spread of misinformation online. Local collaborations among journalists, educators, and civic groups are already laying the groundwork for a more resilient democracy. These efforts prove that Americans are not powerless in the face of disinformation.

Maria Ressa’s metaphor was stark, but her message was not despair. The atomic bomb analogy was also about response — about building new institutions to meet an undeniable threat. If Americans can build a digital rule of law with the same urgency, then the age of disinformation need not be democracy’s undoing. It could become the moment when democracy reinvents itself for the digital age.

Maria Eduarda Grill is a student from Brazil studying Global Affairs and Economics at the University of Notre Dame. She is a fellow with Common Ground Journalism and a researcher with the Kellogg Institute, where she studies digital governance and media freedom in Latin America.

The Fulcrum's Executive Editor, Hugo Balta, is an instructor with Common Ground Journalism. He is an accredited solutions journalism and complicating-the-narratives trainer.

