How to Break the ‘Rage Bait’ Cycle and Restore Trust in U.S. Democracy

Opinion

Photo by Vitaly Gariev on Unsplash

Recently, Oxford University Press chose its word of the year for 2025: “rage bait.” For those who don’t know, it’s defined as “online content deliberately designed to elicit anger or outrage by being frustrating, provocative, or offensive.” Rage bait is also the driving force behind one of the most powerful industries in the United States: social media. It has debased the American media establishment, though a key piece of federal law could help address the problem.

First, the prevalence and scale of rage bait should be established. Though rage bait lacks a precise definition, combining anecdotal evidence of its popularity with the knowledge that social media algorithms reward popularity suggests there is a great deal of it. Numerous studies, including research from Yale and the University of Chicago, have found that posts provoking anger and outrage are more likely to be interacted with (liked, commented on, replied to) and remain visible longer, leading algorithms to recommend such content ever more aggressively.

This creates an environment in which rage bait equals success for the creator: the more outrageous the content, the more likes, shares, and follows it earns, which encourages content that is more outrageous still. Creators with a large enough following can also profit directly from rage bait. The result is that politics is growing increasingly polarized, especially among teenagers, whose malleable brains are exposed to the most rage bait. Social media companies reap the benefits of uncontrolled online rage as well; it keeps people on their platforms longer and more often, creating more opportunities for advertising and, naturally, more cash flowing into their coffers.

Once the mainstream media discovered that rage bait generated larger profits, it seized the opportunity. Researchers in New Zealand have found that headlines inducing anger, disgust, fear, and sadness have increased in recent years, while joyful and neutral headlines have steadily declined. Teenagers especially have borne the brunt of social media’s negative impacts, with rates of depression and anxiety skyrocketing, according to a study in the Journal of Adolescent Health.
The result of all of this is both simple and depressing: Americans generally feel worse about themselves, those around them, and their government.


Recent data show that Americans’ trust is collapsing. Only around 19% of Americans trust their government, and only one in three trust each other, according to the Pew Research Center. Trust is vital, especially in politics: it creates a bond between voters and elected officials that ensures voters’ wishes will be carried out. Yet social media, and by extension rage bait, is going a long way toward severing this bond. The loss of trust in government and in democracy generally has eroded support for established, moderate politicians. In their place, rage bait has boosted fringe candidates on both sides who tend to cause an uproar across the aisle. That uproar manifests itself on social media, and, as discussed above, nothing spreads as fast as outrage. The result is a cycle: rage bait encourages fringe politicians, who create fringe policy, which outrages the other side, magnifying their presence on social media and building momentum for more fringe candidates to be elected, and the cycle continues. All the while, Americans’ trust in each other and in their government crumbles. Yet by altering a specific piece of federal law, the cycle could be broken.

Section 230 of the Communications Decency Act has become a point of contention in the debate over balancing freedom of speech with responsible content moderation. It contains two key provisions. Section 230(c)(1) states that companies cannot be treated as the publisher or speaker of information on their platforms provided by a third party. Section 230(c)(2) states that companies, when acting in good faith, cannot be held liable for restricting objectionable content (violence, harassment, obscenity, etc.). In short, companies are not liable for third-party content on their platforms, and they may, within reason, restrict content they consider harmful. For the purpose of curbing rage bait, none of this is objectionable; the real problem is what the legislation is missing. Companies are allowed to take down objectionable content, and are legally protected when they do so, but they are not obligated to. Some of the most common and impactful forms of rage bait appear as blatant misinformation or obscene content; companies that remove it are protected by the law, yet they often choose not to because rage bait has become a profit machine.

Section 230 should be revised to add a provision that companies found to be keeping objectionable content on their platforms, without making any effort to take it down, are held legally responsible and subject to heavy fines. Both the effort and the size of the fine should be judged by how much the company invests in content moderation and how efficiently it removes harmful content, relative to the company’s economic and operational capacity. Additionally, the definition of objectionable content should be updated to better reflect modern times; Section 230 was written 30 years ago, as part of an effort to control obscene and indecent content on the early Internet. Creating this definition will be difficult, so it should be drafted by a responsible third party, such as a nonpartisan panel of public health experts and legal scholars appointed by Congress. Companies would agree to follow the panel’s guidance, and the government would give that guidance legal teeth. Of course, this would draw immediate opposition from social media companies.

One immediate reaction is that this is a clear violation of free speech, handing the government the right to restrict citizens’ expression. While the line between harmful content and legitimate political discourse is blurry, an updated definition, as discussed above, could clear the air. Regardless, harmful content has been hiding behind “free speech” while yielding dangerous results. The links between social media, and specifically the harmful content on it, and depression, anxiety, self-harm, and suicidal ideation are well established, especially among teenagers. It is undeniably a life-or-death issue. The second major argument social media companies will make is that, given the massive volume of content, removing all harmful content is practically impossible. Still, there are huge steps companies could take, if not for the cost to their own income. Meta netted over $2.7 billion in September alone; half of that could hire up to 20,000 content moderators in the U.S. (and many more if the work were outsourced to other countries), which would go a long way toward protecting consumers. At the sacrifice of some extra profit, real protections could be implemented.
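The moderator figure above rests on simple arithmetic, sketched here as a quick sanity check. The per-moderator budget is implied by the article’s numbers, not independently sourced, and whether it would cover a fully loaded annual salary is an open assumption:

```python
# Back-of-envelope check of the staffing claim. The per-moderator budget
# below is implied by the article's figures, not sourced independently.
monthly_net_income = 2_700_000_000          # Meta's cited net income for September, in dollars
moderation_budget = monthly_net_income / 2  # "half of that," per the article
moderators = 20_000                         # proposed US content-moderator headcount

implied_budget_each = moderation_budget / moderators
print(f"Implied budget per moderator: ${implied_budget_each:,.0f}")  # → $67,500
```

At roughly $67,500 apiece, the claim is at least plausible for US hires, and the same budget would stretch much further if moderation were outsourced.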

Harmful online content is constantly evolving; rage bait is merely its current iteration. The oncoming wave of generative AI will no doubt worsen the problem as truth becomes harder to distinguish from misinformation. It is tempting to argue that the ship has sailed on regulation, given how devastating the impacts, especially on America’s youth, already are. But that is a weak argument, especially given the clarity of the solution. Fixing Section 230 could have massive positive effects on the future of online content, Americans’ health, and the preservation of a strong, moderate democracy. As a high school student, I am saddened by what I hear from my peers: stories of seeing videos of beheadings in Afghanistan or clips on Reddit of Charlie Kirk’s assassination. I am well aware that the forces arrayed against this change are enormous, with endless wallets and growing political power. Regardless, the benefits of reforming Section 230 outweigh the obstacles. With some key legislative reform, we can provide a brighter future for American youth and the democracy they will inherit.

Asher Nanas is a Senior at New West Charter High School in Los Angeles, California.

