The indecent nature of social media

Opinion

Social media icons (Chesnot/Getty Images)

O’Rourke, a senior at Emory University studying economics and philosophy, is an intern at the Bridge Alliance, which operates The Fulcrum.

Only about a decade ago, social media use among adults in the United States became the norm, with more than 50 percent of the population using the technology for the first time since its introduction. The effects since then can hardly be overstated. Indeed, social media and digital technology have become the dominant arena of social engagement, evidenced by a recent report finding that the average American spends nearly five hours per day on their phone, and over three hours per day on social media.

As digital technology and social media rose in prominence, a legitimate debate arose over whether they were a force for good or evil. Some argue that social media is an excellent tool for connecting people across previously insurmountable boundaries, empowering the voices of the marginalized and democratizing access to information. Yet others believe that the rise of digital technology increases tribalism among groups and promotes isolation among individuals. Though the debate remained unsettled for a long while, the jury has reached its verdict — and the outcome should not be a surprise.


While it is difficult to capture all the harms that social media has caused on both an individual and societal level, Jonathan Haidt, a renowned social psychologist from New York University, has spent the past several years attempting to do so. His research has found that social media platforms prey on individuals’ — particularly adolescent girls’ — self-consciousness by “put[ting] the size of their friend groups on public display, and subject[ing] their physical appearance to the hard metrics of likes and comment counts.” Haidt goes on to document the strikingly parallel trajectories of social media popularity within this group and mental health disorders, finding skyrocketing rates of depression and self-harm from 2010 to 2014, the same time social media use became the norm among high-school-age girls.

The harms of social media are not merely reserved for teenagers, but extend to the health of our liberal democracy and the strength of our social fabric as well. Because individuals select whom they follow, social media platforms become fertile ground for echo chambers. This siloing of like-minded individuals also poses an epistemic problem: If each group has its own, isolated claim to truth — unable to be checked or verified by those who disagree — then it becomes nearly impossible to distinguish truth from falsehood.

Unsurprisingly, this also creates an environment of toxic partisanship and polarization, in which each chamber produces increasingly inflammatory content — later used as fodder for even more inflammatory content for the opposition. And when this is coupled with plummeting trust in American institutions, it is clear how social media tears at the seams of our social fabric.

So why has social media become such a dark place?

In essence, these myriad negative effects are baked into the very structure of these platforms. That is, social media’s multilayered incentive framework — on both the individual and business levels — produces outcomes that preclude platforms from fulfilling their purpose “to make the world more open and connected.”

On the individual level, social media fundamentally changes the way we communicate with each other, “turn[ing] so much [of our] communication into a public performance,” as Haidt puts it. Instead of communicating by way of one-on-one conversation, platform-based communications are usually crafted to win the favor of the group(s) to which we belong. And because we operate in a social and political climate in which these online communities are often pitted against each other, this form of communication regularly rewards outrage and moral grandstanding — winning support within our tribe by castigating the opposition. Needless to say, this creates an environment that is inhospitable to open inquiry and honest engagement.

While the incentives among individuals participating in like-minded social groups often reward suboptimal content, so too do the business models of most social media platforms. Because the revenue for these platforms comes from advertisers, their primary business interest is to keep users engaged for as long as possible. This means the content that produces the most engagement — in the form of clicks, likes, pageviews, etc. — is the content that will be promoted most heavily by the platforms’ algorithms. But as we have seen, the most engaging content is typically that which has the strongest appeal to the tribe, not that which is socially optimal.
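This engagement-first logic can be sketched in a few lines of Python. This is a hypothetical illustration, not any platform's actual ranking code: it simply orders posts by raw predicted interactions, so the post with the strongest tribal appeal rises to the top regardless of its social value.

```python
# Illustrative sketch only (assumed, simplified model — not a real platform's
# algorithm): rank a feed purely by total predicted engagement.

def rank_feed(posts):
    """Order posts by raw engagement (clicks + likes + shares), highest first."""
    def engagement(post):
        return post["clicks"] + post["likes"] + post["shares"]
    return sorted(posts, key=engagement, reverse=True)

feed = [
    {"id": "calm-explainer", "clicks": 120, "likes": 40, "shares": 5},
    {"id": "outrage-bait", "clicks": 900, "likes": 300, "shares": 250},
]

ranked = rank_feed(feed)
print([p["id"] for p in ranked])  # the outrage post ranks first
```

Note that nothing in this objective distinguishes socially beneficial content from inflammatory content — which is precisely the structural problem the article describes.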

So where does this all leave us? Are there reforms that can salvage this technology, or is social media a lost cause?

The answers to these questions are difficult, because each problem that social media creates likely warrants its own solution. To reduce the harm caused to teenage girls, for example, platforms could raise or more stringently enforce their minimum age requirement (which is only 13). But more fundamentally, the corrective to many of these social ills comes through restructuring the individual-level incentive framework. If online communities were to reward and engage with socially beneficial content, then the platforms’ engagement-optimizing algorithms could actually be used for good.

Many have called for the platforms to do this work themselves, adopting policies that flag and obstruct content that they deem suspect. And while this strategy sounds good in theory, it has failed in practice. Consider, for example, Facebook’s policy that prevented users from sharing a New York Post article suggesting “the coronavirus may have leaked from a lab,” a once-taboo hypothesis that is now completely acceptable. While Facebook may have been trying to limit the spread of misinformation, its employees demonstrated they are not equipped to determine what content is socially beneficial.

Because it can be so difficult to parse socially beneficial content from tribal content, commentaries like Jeff Garson’s are particularly useful. Garson believes that a better functioning incentive structure is built on interpersonal decency, not tribal appeal. If values such as respect, understanding and appreciation were paramount on our platforms, socially beneficial content would flourish.

While creating a platform that incentivizes users to organically promote better content seems like a tall order, it is surely not impossible. As Jonathan Rauch illuminates in his seminal “The Constitution of Knowledge,” Wikipedia serves as a great example. The online encyclopedia — boasting more than 55 million pages of content — has successfully created an incentive structure that rewards truth. Although any user can edit a Wikipedia page, errors are swiftly corrected, and accurate information prevails. Surely, social media is not meant to function like an encyclopedia, and it remains to be seen if platforms and participants have the will to make much-needed changes. But Wikipedia demonstrates that socially beneficial incentive structures can be created and maintained among large internet communities.

To overcome the plethora of negative effects created by social media — from increased anxiety and depression to the erosion of our institutions of liberal democracy — we must amend the incentive structure, rewarding that which is truly good, not merely that which makes us feel good.

