Social media and political violence – how to break the cycle


With effort, it’s possible to shift the national discourse and reduce political violence.

Gajus/Getty Images

Forno is principal lecturer in computer science and electrical engineering at the University of Maryland, Baltimore County.

The attempted assassination of Donald Trump on July 13, 2024, added more fuel to an already fiery election season. In this case, political violence was carried out against the party that is most often found espousing it. The incident shows how uncontrollable political violence can be — and how dangerous the current times are for America.

Part of the complication is the contentious and adversarial nature of American politics, of course. But technology makes it more difficult for Americans to understand sudden news developments.


Gone are the days when only a handful of media outlets reported the news to broad swaths of society after rigorous fact-checking by professional journalists.

By contrast, anyone today can “report” news online, provide what they claim is “analysis” of events, and combine fact, fiction, speculation and opinion to fit a desired narrative or political perspective.

Then that perspective is potentially made to seem legitimate by virtue of the poster’s official office, net worth, number of social media followers, or attention from mainstream news organizations seeking to fill news cycles.

And that’s before any mention of convincing deepfake audio and video clips, whose lies and misrepresentations can further sow confusion and distrust online and in society.

Today’s internet-based narratives also often involve personal attacks either directly or through inference and suggestion — what experts call “stochastic terrorism” that can motivate people to violence. Political violence is the inevitable result — and has been for years, including attacks on U.S. Rep. Gabby Giffords, former House Speaker Nancy Pelosi’s husband, Paul, the 2017 congressional baseball practice shooting, the Jan. 6, 2021 insurrection, and now the attempted assassination of a former president running for the White House again.

When bullets and conspiracies fly

As a security and internet researcher, I found it entirely predictable that within minutes of the attack, right-wing social media exploded with instant-reaction narratives that assigned blame to political rivals or the media, or implied that a sinister "inside job" by the federal government was behind the incident.

But it wasn’t just average internet users or prominent business magnates fanning these flames. Several Republicans issued such statements from their official social media accounts. For instance, less than an hour after the attack, Georgia Congressman Mike Collins accused President Joe Biden of “inciting an assassination” and said Biden “sent the orders.” Ohio Senator J.D. Vance, now Trump’s nominee for vice president, also implied that Biden was responsible for the attack.

The bloodied former president stood up and delayed his Secret Service evacuation for a fist-pumping photo before leaving the rally, and his campaign issued a defiant fundraising email later that evening. This led some Trump critics to suggest the incident was a “false flag” attack staged to earn a sympathetic national spotlight. Others claimed the incident fits into Trump’s ongoing messaging to supporters that he’s the victim of persecution.

From a historical perspective, it's worth noting that former right-wing Brazilian President Jair Bolsonaro survived an assassination attempt during his 2018 campaign and went on to take office as the country's president in 2019.

It’s long been known that internet narratives, memes and content can spread around the world like wildfire well before the actual truth becomes known. Unfortunately, those narratives, whether factual or fictional, can get picked up — and thus given a degree of perceived legitimacy and further disseminated — by traditional news organizations.

Many who see such messages, amplified by both social media and traditional news services, often believe them — and some may respond with political violence or terrorism.

Can anything help?

Several threads of research show that there are some ways regular people can help break this dangerous cycle.

In the immediate aftermath of breaking news, it’s important to remember that first reports often are wrong, incomplete or inaccurate. Rather than rushing to repost things during rapidly developing news events, it’s best to avoid retweeting, reposting or otherwise amplifying online content right away. When information has been confirmed by multiple credible sources, ideally across the political spectrum, then it’s likely safe enough to believe and share.

In the longer term, as a nation and a society, it will be useful to further understand how technology and human tendencies interact. Teaching schoolchildren more about media literacy and critical thinking can help prepare future citizens to separate fact from fiction in a complex world filled with competing information.

Another potential approach is to expand civics and history lessons in school classrooms, to give students the ability to learn from the past and — we can all hope — not repeat its mistakes.

Social media companies are part of the potential solution, too. In recent years, they have disbanded teams meant to monitor content and boost users’ trust in the information available on their platforms. Recent Supreme Court rulings make clear that these companies are free to actively police their platforms for disinformation, misinformation and conspiracy theories if they wish. But companies and purported “free speech absolutists” including X owner Elon Musk, who refuse to remove controversial, though technically legal, internet content from their platforms may well endanger public safety.

Traditional media organizations bear responsibility for objectively informing the public without giving voice to unverified conspiracy theories or misinformation. Ideally, qualified guests invited to news programs will add useful facts and informed opinion to the public discourse instead of speculation. And serious news hosts will avoid the rhetorical technique of “just asking questions” or engaging in “bothsiderism” as ways to move fringe theories — often from the internet — into the news cycle, where they gain traction and amplification.

The public has a role, too.

Responsible citizens could focus on electing officials and supporting political parties that refuse to embrace conspiracy theories and personal attacks as normal strategies. Voters could make clear that they will reward politicians who focus on policy accomplishments, not their media imagery and social media follower counts.

That could, over time, deliver the message that the spectacle of modern internet political narratives generally serves no useful purpose beyond sowing social discord and degrading the ability of government to function — and potentially leading to political violence and terrorism.

Admittedly, these are not instant remedies. Many of these efforts will take time — potentially even years — as well as money and courage to accomplish.

Until then, maybe Americans can revisit the golden rule — doing unto others as we would have them do unto us. Emphasizing facts in the news cycle, integrity in the public square, and media literacy in our schools also seem like good places to start.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
