
How artificial intelligence can be used to reduce polarization

Opinion

Rozado is an associate professor of computer science at Te Pūkenga - The New Zealand Institute of Skills and Technology. He is also a faculty fellow at Heterodox Academy's Center for Academic Pluralism. McIntosh is the author of “Developmental Politics” (Paragon House 2020) and coauthor of “Conscious Leadership” (Penguin 2020). He is cofounder and lead philosopher at the Institute for Cultural Evolution.

Amid countless reports of how social media is exacerbating political polarization, many commentators worry that artificial intelligence will have a similarly corrosive effect on American culture. In response to these concerns, we have developed an innovative tool that leverages AI to reduce polarization: meet DepolarizingGPT, a political chatbot designed to tackle polarization head-on.

Unlike other AI models, DepolarizingGPT focuses specifically on political issues. It provides three responses to every prompt: one from a left-wing perspective, one from a right-wing perspective, and a third from a depolarizing or “integrating” viewpoint. We created this three-answer model to ameliorate political and cultural polarization by demonstrating a developmental approach to politics, one that synthesizes responsible perspectives from across the political spectrum.


The idea is to combine three models — LeftwingGPT, RightwingGPT and DepolarizingGPT — into a single system. Users are exposed to three perspectives simultaneously, moving beyond the echo chambers that often reinforce entrenched biases. Existing AIs, such as OpenAI's ChatGPT, claim to be unbiased, but this claim has been shown to be false. Rather than denying its bias, which always exists, DepolarizingGPT's three-answer model offers responsible perspectives from left, right and integrated positions. The goal is to foster a more diverse, nuanced understanding of differing political views, and to reduce the tendency to vilify the other side.

DepolarizingGPT's left-wing responses have been fine-tuned (within the fair-use provisions of copyright law) by using content from left-leaning publications such as The Atlantic, The New Yorker and The New Republic, and from numerous left-wing writers such as Bill McKibben and Joseph Stiglitz. The model's right-wing responses have been fine-tuned with content from publications such as National Review, The American Conservative and City Journal, as well as from numerous right-leaning writers such as Roger Scruton and Thomas Sowell. And the model's depolarizing responses have been fine-tuned with content from the inclusive political philosophy of the Institute for Cultural Evolution.

The model's depolarizing answers attempt to transcend centrism rather than simply split the difference between left and right. At their best, these responses demonstrate a kind of "higher ground" beyond the familiar left-right political spectrum. Admittedly, some of the model's depolarizing responses inevitably fall short of this goal.

This project stems from David Rozado’s academic research, which revealed the inherent left-wing political bias of ChatGPT. To address this issue, Rozado created an experimental AI model with an opposite kind of right-wing bias. His work attracted attention from The New York Times, Wired and Fox News. The intent in demonstrating the political bias of supposedly neutral AIs was to help prevent artificial intelligence from becoming just another front in the culture war.

After reading about Rozado's work, Steve McIntosh proposed that the two team up to create an AI model that could actually help reduce political polarization. Since cofounding the Institute for Cultural Evolution in 2013, McIntosh has been working to overcome hyperpolarization by showing how America can grow into a better version of itself. His institute offers a platform of "win-win-win" policy proposals, which integrate the values of all three major American worldviews: progressive, modernist and traditional. And this same method of integrating values used to build the institute's policy platform is now programmed into DepolarizingGPT's three-answer political chatbot.

Within conventional politics, people are often faced with win-lose propositions. But by focusing on the bedrock values that most people already share, it becomes possible to discover something closer to a win-win-win solution, even if such a solution does not completely satisfy all parties. This win-win-win strategy aims to accommodate the concerns of all sides, not just to get its way, but to make authentic progress through cultural evolution.

By synthesizing values from across the political spectrum, artificial intelligence promises to help American society grow out of its currently dysfunctional political condition.

Read More

The Manosphere Is Bad for Boys and Worse for Democracy

Fifteen-year-old Owen Cooper made history by becoming the youngest male actor to win an Emmy Award. In the Netflix series Adolescence, Owen plays a 13-year-old schoolboy who is arrested after the murder of a girl at his school. As we follow the events leading up to the crime, the award-winning series forces us to confront legitimate insecurities that many teenage boys face, from lack of physical prowess to emotional disconnection from their fathers. It also exposes how easily young men, seeking comfort in their computers, can be pulled into online spaces that normalize misogyny and rage, a pipeline enabled by a failure of tech policy.

At the center of this danger lies the manosphere: a global network of influencers whose words can radicalize young men and channel their frustrations into violence. But this is more than a social crisis affecting some young men. It is a growing threat to the democratic values of equality and tolerance that keep us all safe.

Your Data Isn’t Yours: How Social Media Platforms Profit From Your Digital Identity

Discover how your personal data is tracked, sold, and used to control your online experience—and how to reclaim your digital rights.

Social media users and digital consumers willingly leave a detailed trail of personal data in the pursuit of searching, watching, and engaging on as many platforms as possible. Signing up and signing on is made as easy as possible. Most people know on some level that they are giving up more data than they should, but they hope it won’t be used surreptitiously by scammers, and certainly not for surveillance of any sort.

However, in his book "Means of Control," Byron Tau shockingly reveals how much of our digital data is tracked, packaged, and sold—not by scammers but by the brands and organizations we know and trust. As technology has deeply permeated our lives, we have willingly handed over our entire digital identity. With every app we download, every document we create, and every social media site we join come terms and conditions that none of us ever bothers to read.

That means our behaviors, content, and assets are handed over to corporations that profit from them in more ways than the average person realizes. That data, and its reuse, now shape our lives, our freedom, and our well-being.

Let’s think about all this in the context of a social media site. It is a place where you interact with friends, post family photos, and showcase your art and videos. You may even share a perspective on current events. These platforms don’t just own your content. They can use your behavior and your content to target you. They also sell your data to others, profiting massively from you, their customer.


Often, AI policy debates focus on speculative risks rather than real-world impacts. Kevin Frazier argues that lawmakers and academics must shift their focus from sci-fi scenarios to practical challenges.


Why Academic Debates About AI Mislead Lawmakers—and the Public

Picture this: A congressional hearing on “AI policy” makes the evening news. A senator gravely asks whether artificial intelligence might one day “wake up” and take over the world. Cameras flash. Headlines declare: “Lawmakers Confront the Coming Robot Threat.” Meanwhile, outside the Beltway on main streets across the country, everyday Americans worry about whether AI tools will replace them on factory floors, in call centers, or even in classrooms. Those bread-and-butter concerns—job displacement, worker retraining, and community instability—deserve placement at the top of the agenda for policymakers. Yet legislatures too often get distracted, following academic debates that may intrigue scholars but fail to address the challenges that most directly affect people’s lives.

That misalignment is no coincidence. Academic discourse does not merely fill journals; it actively shapes the policy agenda and popular conceptions of AI. Too many scholars dwell on speculative, even trivial, hypotheticals. They debate whether large language models should be treated as co-authors on scientific papers or whether AI could ever develop consciousness. These conversations filter into the media, morph into lawmaker talking points, and eventually dominate legislative hearings. The result is a political environment where sci-fi scenarios crowd out the issues most relevant to ordinary people—like how to safeguard workers, encourage innovation, and ensure fairness in critical industries. When lawmakers turn to scholars for guidance, they often encounter lofty speculation rather than clear-eyed analysis of how AI is already reshaping specific sectors.

A different take on social media and democracy

Outrage Over Accuracy: What the Los Angeles Protests Teach About Democracy Online

In Los Angeles this summer, immigration raids sparked days of street protests and a heavy government response — including curfews and the deployment of National Guard troops. But alongside the demonstrations came another, quieter battle: the fight over truth. Old protest videos resurfaced online as if they were new, AI-generated clips blurred the line between fact and fiction, and conspiracy theories about “paid actors” flooded social media feeds.

What played out in Los Angeles was not unique. It is the same dynamic Maria Ressa warned about when she accepted the Nobel Peace Prize in 2021. She described disinformation as an “invisible atomic bomb” — a destabilizing force that, like the bomb of 1945, demands new rules and institutions to contain its damage. After Hiroshima and Nagasaki, the world created the United Nations and a framework of international treaties to prevent nuclear catastrophe. Ressa argues that democracy faces a similar moment now: just as we built global safeguards for atomic power, we must now create a digital rule of law to safeguard the information systems that shape civic life.
