How artificial intelligence can be used to reduce polarization

Rozado is an associate professor of computer science at Te Pūkenga - The New Zealand Institute of Skills and Technology. He is also a faculty fellow at Heterodox Academy's Center for Academic Pluralism. McIntosh is the author of “Developmental Politics” (Paragon House 2020) and coauthor of “Conscious Leadership” (Penguin 2020). He is cofounder and lead philosopher at the Institute for Cultural Evolution.

Amid countless reports of how social media is exacerbating political polarization, many commentators worry that artificial intelligence will have a similarly corrosive effect on American culture. In response to these concerns, we have developed a new tool that uses AI to reduce polarization: DepolarizingGPT, a political chatbot designed to tackle polarization head-on.

Unlike other AI models, DepolarizingGPT focuses specifically on political issues. It provides three responses to every prompt: one from a left-wing perspective, one from a right-wing perspective, and one from a depolarizing or “integrating” viewpoint. We created this three-answer model to ameliorate political and cultural polarization by demonstrating a developmental approach to politics, one that synthesizes responsible perspectives from across the political spectrum.


The idea is to combine three models — LeftwingGPT, RightwingGPT and DepolarizingGPT — into a single system. Users are exposed to three perspectives simultaneously, moving beyond the echo chambers that often reinforce entrenched biases. Existing AIs, such as OpenAI's ChatGPT, claim to be unbiased, but this claim has been shown to be false. Rather than denying that bias exists, DepolarizingGPT's three-answer model offers responsible perspectives from left, right and integrated positions. The goal is to foster a more diverse, nuanced understanding of differing political views, and to reduce the tendency to vilify the other side.
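
As a rough illustration of this three-answer design, the sketch below shows how a single prompt might be dispatched to three separately fine-tuned models and returned as one labeled bundle. The model identifiers and client setup are hypothetical placeholders, not the project's actual implementation.

```python
# Minimal sketch of the three-answer idea: one prompt, three separately
# fine-tuned models, three labeled responses. The model identifiers below
# are hypothetical placeholders, not the project's actual models.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

PERSPECTIVE_MODELS = {
    "left": "ft:example-leftwing-model",              # placeholder id
    "right": "ft:example-rightwing-model",            # placeholder id
    "depolarizing": "ft:example-depolarizing-model",  # placeholder id
}


def three_answer_response(prompt: str) -> dict:
    """Return one answer per perspective for the same user prompt."""
    answers = {}
    for label, model in PERSPECTIVE_MODELS.items():
        completion = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        answers[label] = completion.choices[0].message.content
    return answers


# Example usage:
# for label, text in three_answer_response("How should schools be funded?").items():
#     print(f"[{label}] {text}\n")
```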

DepolarizingGPT's left-wing responses have been fine-tuned (within the fair-use provisions of copyright law) by using content from left-leaning publications such as The Atlantic, The New Yorker and The New Republic, and from numerous left-wing writers such as Bill McKibben and Joseph Stiglitz. The model's right-wing responses have been fine-tuned with content from publications such as National Review, The American Conservative and City Journal, as well as from numerous right-leaning writers such as Roger Scruton and Thomas Sowell. And the model's depolarizing responses have been fine-tuned with content from the inclusive political philosophy of the Institute for Cultural Evolution.
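
For readers curious what “fine-tuned with content” means mechanically, the snippet below sketches the chat-style JSONL record format used by common fine-tuning APIs such as OpenAI's. The persona and text are invented for illustration and are not drawn from the project's actual training data.

```python
# Illustrative chat-format fine-tuning record. Training data is typically a
# JSONL file with one record like this per line; the content shown here is
# invented for illustration only.
import json

example_record = {
    "messages": [
        {"role": "system",
         "content": "Respond from a depolarizing, integrative perspective."},
        {"role": "user",
         "content": "What should be done about political polarization?"},
        {"role": "assistant",
         "content": "Start by naming the legitimate values each side is defending..."},
    ]
}

# Each training example becomes one line of the JSONL file.
print(json.dumps(example_record))
```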

The model's depolarizing answers attempt to transcend centrism and avoid simply splitting the difference between left and right. At their best, these depolarizing responses demonstrate a kind of "higher ground" that goes beyond the familiar left-right political spectrum. Admittedly, some of the model's depolarizing responses inevitably fall short of this goal.

This project stems from David Rozado’s academic research, which revealed the inherent left-wing political bias of ChatGPT. To address this issue, Rozado created an experimental AI model with an opposite kind of right-wing bias. His work attracted attention from The New York Times, Wired and Fox News. The intent in demonstrating the political bias of supposedly neutral AIs was to help prevent artificial intelligence from becoming just another front in the culture war.

After reading about Rozado's work, Steve McIntosh proposed that the two team up to create an AI model that could actually help reduce political polarization. Since cofounding the Institute for Cultural Evolution in 2013, McIntosh has been working to overcome hyperpolarization by showing how America can grow into a better version of itself. His institute offers a platform of "win-win-win" policy proposals, which integrate the values of all three major American worldviews: progressive, modernist and traditional. The same method of integrating values used to build the institute's policy platform is now built into DepolarizingGPT's three-answer design.

Within conventional politics, people are often faced with win-lose propositions. But by focusing on the bedrock values that most people already share, it becomes possible to discover something closer to a win-win-win solution, even if such a solution does not completely satisfy all parties. This win-win-win strategy aims to accommodate the concerns of all sides, not just to get its way, but to make authentic progress through cultural evolution.

By synthesizing values from across the political spectrum, artificial intelligence promises to help American society grow out of its currently dysfunctional political condition.

Read More

When Good Intentions Kill Cures: A Warning on AI Regulation

Kevin Frazier warns that one-size-fits-all AI laws risk stifling innovation. Learn the 7 “sins” policymakers must avoid to protect progress.

Imagine it is 2028. A start-up in St. Louis trains an AI model that can spot pancreatic cancer six months earlier than the best radiologists, buying patients precious time that medicine has never been able to give them. But the model never leaves the lab. Why? Because a well-intentioned, technology-neutral state statute drafted in 2025 forces every “automated decision system” to undergo a one-size-fits-all bias audit, to be repeated annually, and to be performed only by outside experts who—three years in—still do not exist in sufficient numbers. While regulators scramble, the company’s venture funding dries up, the founders decamp to Singapore, and thousands of Americans are deprived of an innovation that would have saved their lives.

That grim vignette is fictional—so far. But it is the predictable destination of the seven “deadly sins” that already haunt our AI policy debates. Reactive politicians are at risk of passing laws that fly in the face of what qualifies as good policy for emerging technologies.

Why Journalists Must Stand Firm in the Face of Threats to Democracy

The United States is living through a moment of profound democratic vulnerability. I believe the Trump administration has worked in ways that weaken trust in our institutions, including one of democracy’s most essential pillars: a free and independent press. In my view, these are not abstract risks but deliberate attempts to discredit truth-telling. That is why, now more than ever, I think journalists must recommit themselves to their core duty of telling the truth, holding power to account, and giving voice to the people.

As journalists, I believe we do not exist to serve those in office. Our loyalty should be to the public, to the people who trust us with their stories, not to officials who often seek to mold the press to favor their agenda. To me, abandoning that principle would be to betray not just our profession but democracy itself.

Fighting the Liar’s Dividend: A Toolkit for Truth in the Digital Age

In 2023, the RAND Corporation released a study on a phenomenon known as "Truth Decay," where facts become blurred with opinion and spin. But now, people are beginning to doubt everything, including authentic material.

The Stakes: When Nothing Can Be Trusted

Days before the January 2024 New Hampshire primary, a fake robocall mimicking President Biden's voice urged voters to skip the vote. According to AP News, it was an instance of AI-enabled election interference. Within hours, thousands had shared it. Each fake like this erodes confidence in the very possibility of knowing what is real.

The RAND Corporation refers to this phenomenon as "Truth Decay," where facts become blurred with opinion and spin. Its 2023 research warns that Truth Decay threatens U.S. national security by weakening military readiness and eroding credibility with allies. But the deeper crisis isn't that people believe every fake—it's that they doubt everything, including authentic material.

From TikTok to Telehealth: 3 Ways Medicine Must Evolve to Reach Gen Z

Ask people how much they expect to change over the next 10 years, and most will say “not much.” Ask them how much they’ve changed in the past decade, and the answer flips. Regardless of age, the past always feels more transformative than the future.

This blind spot has a name: the end-of-history illusion, the persistent sense that life, and the values and behaviors that shape it, will remain largely unchanged.
