Outrage Over Accuracy: What the Los Angeles Protests Teach About Democracy Online

Opinion

A different take on social media and democracy. (Matt Cardy/Getty Images)

In Los Angeles this summer, immigration raids sparked days of street protests and a heavy government response — including curfews and the deployment of National Guard troops. But alongside the demonstrations came another, quieter battle: the fight over truth. Old protest videos resurfaced online as if they were new, AI-generated clips blurred the line between fact and fiction, and conspiracy theories about “paid actors” flooded social media feeds.

What played out in Los Angeles was not unique. It was the same dynamic Maria Ressa warned about when she accepted the Nobel Peace Prize in 2021. She described disinformation as an “invisible atomic bomb” — a destabilizing force that, like the bomb of 1945, demands new rules and institutions to contain its damage. After Hiroshima and Nagasaki, the world created the United Nations and a framework of international treaties to prevent nuclear catastrophe. Ressa argues that democracy faces a similar moment: just as the world built global safeguards for atomic power, it must now create a digital rule of law to protect the information systems that shape civic life.


Her analysis runs deeper still. Ressa often cites a 2018 MIT study showing that false news spreads “farther, faster, deeper, and more broadly” than the truth online — not because of bots, but because people are drawn to shock and novelty. What makes this more alarming, she argues, is that platforms profit from the distortion. As she put it, Russian bot armies and fake accounts generated “more engagement — and higher revenue,” turning disinformation into a business model.

The same incentives were visible in Los Angeles, where AI-generated protest clips and recycled footage spread quickly because outrage was rewarded more than accuracy. In the Philippines, Ressa documented how Facebook became the primary battleground for disinformation: coordinated networks of fake accounts pushed false narratives to silence journalists and smear critics, all while boosting platform engagement and ad revenue. Russian disinformation campaigns followed a similar logic, using bot armies and troll farms to flood social media with polarizing content, especially during elections abroad. In both cases — as in Los Angeles — truth had to fight algorithms designed to reward virality and profit rather than accuracy.

In Los Angeles, however, fact-checkers and journalists worked quickly to trace clips back to their sources, local outlets published clear comparisons, and officials corrected false claims in real time. These actions didn’t erase the misinformation, but they showed that resilience is possible.

Still, relying only on journalists, nonprofits, or volunteers is not enough. The burden of defending truth should not fall on underfunded newsrooms or a handful of civic groups scrambling during crises. If democracy is to withstand the “invisible atomic bomb” of disinformation, these defenses must be institutionalized — built into the very framework of governance.

Other countries offer lessons. In the European Union, for example, the Digital Services Act requires platforms to be more transparent about their algorithms and to respond quickly to harmful disinformation. During elections, EU regulators can require platforms to report on how they monitor and address manipulation campaigns, and can fine them for failures. While imperfect, the law shows what institutional accountability can look like: not ad-hoc firefighting, but clear rules backed by enforcement.

The U.S. has yet to take such comprehensive steps. But the experience of Los Angeles suggests why it matters. Without institutional rails, communities will be forced to fight disinformation crisis by crisis, while platforms continue to profit from the chaos. With them, we could shift from reactive fixes to a sustainable digital rule of law.

And there is reason for hope. Studies show that media literacy programs can help citizens spot falsehoods more accurately. Community fact-checking has helped reduce the spread of misinformation online. Local collaborations among journalists, educators, and civic groups are already laying the groundwork for a more resilient democracy. These efforts prove that Americans are not powerless in the face of disinformation.

Maria Ressa’s metaphor was stark, but her message was not despair. The atomic bomb analogy was also about response — about building new institutions to meet an undeniable threat. If Americans can build a digital rule of law with the same urgency, then the age of disinformation need not be democracy’s undoing. It could become the moment when democracy reinvents itself for the digital age.

Maria Eduarda Grill is a student from Brazil studying Global Affairs and Economics at the University of Notre Dame. She is a fellow with Common Ground Journalism and a researcher with the Kellogg Institute, where she studies digital governance and media freedom in Latin America.

The Fulcrum's Executive Editor, Hugo Balta, is an instructor with Common Ground Journalism. He is an accredited trainer in solutions journalism and Complicating the Narratives.


