Opinion

The Manosphere Is Bad for Boys and Worse for Democracy
Photo by Growtika on Unsplash

Fifteen-year-old Owen Cooper made history as the youngest male actor ever to win an Emmy Award. In the Netflix series Adolescence, he plays a 13-year-old schoolboy arrested for the murder of a girl at his school. As we follow the events leading up to the crime, the award-winning series forces us to confront legitimate insecurities that many teenage boys face, from a lack of physical prowess to emotional disconnection from their fathers. It also exposes how easily young men, seeking comfort in their computers, can be pulled into online spaces that normalize misogyny and rage, a pipeline enabled by failures of tech policy.

At the center of this danger lies the manosphere: a global network of influencers whose words can radicalize young men and channel their frustrations into violence. But this is more than a social crisis affecting some young men. It is a growing threat to the democratic values of equality and tolerance that keep us all safe.


As in the series, we have already seen where this can lead in real life. Recently, French authorities charged a teenager linked to the incel subculture with terrorism over an alleged plot to harm women, marking the country's first such case of gender-based violence. This should be seen not as an isolated incident, but as a symptom of a broader ideological problem.

While toxic for the young men it ensnares, the manosphere's deeper danger lies in its corrosion of the foundational values of democracy. These male-dominated online spaces reinforce rigid, traditional masculine norms. On the surface, we may see young men searching for identity and belonging, but beneath churns a murky undercurrent of pseudoscience and a contempt for gender equality.

Misogynistic messages poison the minds of young men, weaponizing their frustrations. They offer a seductively simple answer to complex feelings of loneliness: blame women. In 2014, Elliot Rodger fatally stabbed three people before going on a shooting spree that killed three more and injured 14 others. In his manifesto, he wrote, "All of my suffering in this world has been at the hands of humanity, particularly women," tracing his violent hatred of women back even to a childhood crush.

This blame can then be refined into a hatred for the very institutions, such as education, government, and the media, that are pillars of a functioning civil society, breeding a generation that rejects the pluralistic ideals necessary for democracy to thrive. Take, for example, the rise of far-right and neo-Nazi groups in Sweden that are attracting a new generation of young men who have lost faith in democracy. They share racist memes and violent videos to attract potential recruits, including boys as young as 10, on mainstream platforms like TikTok, before moving to more private, less regulated spaces.

The manosphere is a pathway where online hate turns into real-world violence. The French case is a direct example, but the trail of blood stretches back to attacks in Toronto, Isla Vista, and beyond. These digital communities don't just vent; they strategize. They provide twisted justification and promote a culture of martyrdom that glorifies retribution. The leap from dehumanizing rhetoric on a forum to physical acts of terror is shorter than we imagine.

The manosphere is not neutral and can serve as a pipeline to extremist ideologies. While its entry point is often resentment of women, its logic inevitably expands. The same frameworks of hatred used against women are easily applied to minorities, migrants, and LGBTQ+ individuals. The Center on Extremism (COE) explains that the fear that women’s equality undermines men’s status is just a step away from seeing all demands for equality as threats to white male dominance. This exposes the symbiosis between misogyny and white supremacy, both connected by a deep-seated loathing of women.

And this matters because the status of women has long been a litmus test for the health of democracy. A recent study by the Georgetown Institute for Women, Peace, and Security found that women’s equality is strongly correlated with election integrity, freedom of association and assembly, and checks on executive power. In other words, when women’s rights erode, so too do the foundations of democratic governance.

Some argue that we might be overestimating the influence of the manosphere, as many of its users selectively choose what resonates with them and disregard what doesn’t. Indeed, not every frustrated young man online is a criminal. Not all criticism of society is extremist. Yet history shows that even small, seemingly fringe groups can eventually reshape society in harmful, authoritarian, or destructive ways. When Adolf Hitler joined the German Workers’ Party, which later became the Nazi Party, he was the 55th member. We do not have to wait for a movement to gain a large following to see the potential trajectory it may take.

Ignoring the manosphere is a luxury we cannot afford. Combating it requires a proactive, multi-sectoral effort that must include a robust tech policy framework.

Tech companies must move beyond reactive content moderation and proactively redesign their algorithms to de-amplify hateful content, rather than recommending it. They must consistently enforce their own terms of service against organized hate and misogyny, subjecting their enforcement to independent audits for transparency. Furthermore, they should invest in redirect initiatives that algorithmically offer resources for mental health and positive mentorship to users searching for harmful keywords.

Governments must move beyond a hands-off approach. They should legislate mandatory safety-by-design standards for platforms, compelling them to conduct and publish risk assessments on how their services might facilitate radicalization. Legislators must also strengthen laws that hold platforms accountable for knowingly profiting from the algorithmic amplification of extremist content.

This must be coupled with integrating digital literacy into education to build resilience against manipulation and, most importantly, offering young men positive models of manhood built on respect rather than hatred and blame. Our collective safety and the future of our democratic values may just depend on it.

Kevin Liverpool is a Public Voices Fellow on Prevention of Child Sexual Abuse with The OpEd Project. He works as a partnerships specialist with No Means No Worldwide, an international nonprofit on a mission to end sexual violence against women and children globally.


