The indecent nature of social media

Opinion


O’Rourke, a senior at Emory University studying economics and philosophy, is an intern at the Bridge Alliance, which operates The Fulcrum.

Only about a decade ago, social media use among adults in the United States became the norm — garnering the attention of more than 50 percent of the population for the first time since the technology was introduced. In the years since, its effects have been difficult to overstate. Indeed, social media and digital technology use has become the dominant arena of social engagement, evidenced by a recent report finding that the average American spends nearly five hours per day on their phone, and over three hours per day on social media.

As digital technology and social media rose in prominence, a legitimate debate emerged over whether they were a force for good or evil. Some argue that social media is an excellent tool for connecting people across previously insurmountable boundaries, empowering the voices of the marginalized and democratizing access to information. Yet others believe that the rise of digital technology increases tribalism among groups and promotes isolation among individuals. Though this debate went unresolved for a long while, the jury has reached its verdict — and the outcome should not be a surprise.


While it is difficult to capture all the harms that social media has caused on both an individual and societal level, Jonathan Haidt, a renowned social psychologist from New York University, has spent the past several years attempting to do so. His research has found that social media platforms prey on individuals’ — particularly adolescent girls’ — self-consciousness by “put[ting] the size of their friend groups on public display, and subject[ing] their physical appearance to the hard metrics of likes and comment counts.” Haidt goes on to document the strikingly parallel trajectories of social media popularity within this group and mental health disorders, finding skyrocketing rates of depression and self-harm from 2010 to 2014, the same time social media use became the norm among high-school-age girls.

The harms of social media are not merely reserved for teenagers, but extend to the health of our liberal democracy and the strength of our social fabric as well. Because individuals select whom they follow, social media platforms become fertile ground for echo chambers. This siloing of like-minded individuals also poses an epistemic problem: If each group has its own, isolated claim to truth — unable to be checked or verified by those who disagree — then it becomes nearly impossible to distinguish truth from falsehood.

Unsurprisingly, this also creates an environment of toxic partisanship and polarization, in which each chamber produces increasingly inflammatory content — later used as fodder for even more inflammatory content for the opposition. And when this is coupled with plummeting trust in American institutions, it is clear how social media tears at the seams of our social fabric.

So why has social media become such a dark place?

In essence, these myriad negative effects are baked into the very structure of these platforms. That is, social media’s multilayered incentive framework — on both the individual and business levels — produces outcomes that preclude platforms from fulfilling their purpose “to make the world more open and connected.”

On the individual level, social media fundamentally changes the way we communicate with each other, “turn[ing] so much [of our] communication into a public performance,” as put by Haidt. Instead of communicating by way of one-on-one conversation, platform-based communications are usually crafted to win the favor of the group(s) to which we belong. And because we operate in a social and political climate in which these online communities are often pitted against each other, this form of communication regularly rewards outrage and moral grandstanding — winning support within our tribe by castigating the opposition. Needless to say, this creates an environment that is inhospitable to open inquiry and honest engagement.

While the incentives among individuals participating in like-minded social groups often reward suboptimal content, so too do the business models of most social media platforms. Because the revenue for these platforms comes from advertisers, their primary business interest is to keep users engaged for as long as possible. This means the content that produces the most engagement — in the form of clicks, likes, pageviews, etc. — is the content that will be promoted most heavily by the platforms’ algorithms. But as we have seen, the most engaging content is typically that which has the strongest appeal to the tribe, not that which is socially optimal.
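
To make that mechanism concrete, here is a minimal sketch of engagement-based ranking. The scoring weights and field names are illustrative assumptions made for this article, not any platform's actual code; real ranking systems are vastly more complex.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Toy metric: every interaction counts toward promotion,
    # regardless of whether the post informs or inflames.
    return post.clicks + 2 * post.likes + 3 * post.comments + 4 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Promote whatever keeps users interacting the longest.
    return sorted(posts, key=engagement_score, reverse=True)
```

Nothing in this loop asks whether a post is true or constructive; it only asks whether people react to it.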

So where does this all leave us? Are there reforms that can salvage this technology, or is social media a lost cause?

These questions are difficult to answer, because each problem that social media creates likely warrants its own solution. To reduce the harm caused to teenage girls, for example, platforms could raise or more stringently enforce their minimum age requirement (which is only 13). But more fundamentally, the corrective to many of these social ills comes through restructuring the individual-level incentive framework. If online communities were to reward and engage with socially beneficial content, then the platforms’ engagement-optimizing algorithms could actually be used for good.
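
Continuing the hypothetical sketch above, the same engagement-optimizing ranker surfaces different content when a community's engagement shifts; the posts and numbers below are invented purely for illustration.

```python
feed = [
    Post("Outrage bait about the other tribe", clicks=900, likes=400, comments=250, shares=300),
    Post("Careful explainer with sources", clicks=950, likes=500, comments=260, shares=320),
]

# If users choose to reward the constructive post with more engagement,
# the unchanged ranking algorithm promotes it first.
print([p.text for p in rank_feed(feed)])
```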

Many have called for the platforms to do this work themselves, adopting policies that flag and obstruct content that they deem suspect. And while this strategy seems good in theory, it has failed in practice. Consider, for example, Facebook’s policy that prevented users from sharing a New York Post article suggesting “the coronavirus may have leaked from a lab,” a once taboo hypothesis that is now completely acceptable. While Facebook may have been trying to limit the spread of misinformation, its employees demonstrated they are not equipped to determine what content is socially beneficial.

Because it can be so difficult to parse socially beneficial content from tribal content, commentaries like Jeff Garson’s are particularly useful. Garson believes that a better functioning incentive structure is built on interpersonal decency, not tribal appeal. If values such as respect, understanding and appreciation were paramount on our platforms, socially beneficial content would flourish.

While creating a platform that incentivizes users to organically promote better content seems like a tall order, it is surely not impossible. As Jonathan Rauch illuminates in his seminal “The Constitution of Knowledge,” Wikipedia serves as a great example. The online encyclopedia — boasting more than 55 million pages of content — has successfully created an incentive structure that rewards truth. Although any user can edit a Wikipedia page, errors are swiftly corrected, and accurate information prevails. Surely, social media is not meant to function like an encyclopedia, and it remains to be seen if platforms and participants have the will to make much-needed changes. But Wikipedia demonstrates that socially beneficial incentive structures can be created and maintained among large internet communities.

To overcome the plethora of negative effects created by social media — from increased anxiety and depression to the erosion of our institutions of liberal democracy — we must amend the incentive structure, rewarding that which is truly good, not merely that which makes us feel good.

