New platforms help overcome biased news reporting

News app. Tero Vesalainen/Getty Images

With so many media outlets in the world today, it is becoming increasingly difficult to find unbiased news, let alone distinguish fact from fiction. News consumers’ reliance on social media, algorithms and preferred media outlets often ends up reinforcing existing opinions rather than helping them develop independent, unbiased views.

And while 78 percent of Americans said that “it is never acceptable for a news organization to favor one political party over others when reporting the news,” according to the Pew Research Center, bias has seeped into the majority of news outlets, leaving consumers hard-pressed to find nonpartisan coverage.

But there are a growing number of platforms that want to help people understand media bias and improve their news consumption habits.


Take, for example, Kamy Akhavan, the former CEO of ProCon, who has dedicated much of his career to promoting civil communication and improving civic education. Akhavan formed ProCon to reflect his passions while delivering unbiased information through “beneficial confusion,” a technique that offered opposing viewpoints on a wide range of political, social and policy issues so readers could “engage in evaluative thinking to formulate their own views,” he explained.

“The goal is not to persuade but rather to educate,” he said. “The goal is to let the reader come to their own conclusions and judge for themselves what they want to do with that information.”

He also stressed the importance of education in discerning bias, saying, “The problems associated with a lack of media literacy are really an outgrowth of the fact that we have 24/7 news cycles and thousands of media sources, including citizen journalism, which may be well intentioned, but oftentimes isn't very good.”

AllSides, led by CEO John Gable, has followed a similar approach, hoping to expose individuals to different ideas and information from all sides of the political spectrum to provide a fuller picture. They have also created the AllSides Media Bias Rating to help readers understand a news platform’s slant.

Gable focuses on helping people who want to solve problems, because they are the ones who will bring about change. “Enable them to see the different sides, enable them to understand what's really going on, enable them to share and be open about it so that they aren't overly attached and have to hide their true opinions behind some kind of wall,” he said.

Among the media platforms that earned “center” ratings on the AllSides media bias chart:

  • Axios
  • BBC
  • Christian Science Monitor
  • Reuters
  • Wall Street Journal news (the WSJ opinion section leans right)

(The Fulcrum also earned a “center” rating from AllSides.)

AllSides is also dedicated to reducing polarization. The platform focuses on fighting “filter bubbles,” a phenomenon that occurs when people are exposed only to ideas, news and conversations that align with their own beliefs. By disseminating a broad range of perspectives, AllSides hopes to equip individuals with the knowledge and empathy to engage in productive dialogue.

Gable added that along with information that conforms to a person’s beliefs, “you need the difference, the arguments and the unexpected ideas” to overcome divides and hatred for the opposing side.

He is hopeful for the future of news media—an unusually optimistic outlook given the continuing downward trend regarding Americans’ trust in the media. Gable believes the key to turning around the news industry is overcoming misinformation: “The way to deal with misinformation is not by trying to control it, but by debunking it in a world of open ideas and data.”

Large media corporations like Yahoo News and even Facebook have begun to incorporate technology that discourages misinformation and assesses the credibility of news sources. For example, Yahoo News recently acquired The Factual, a company focused on rating the credibility of news sources through algorithms and other technologies.

Other sources for balanced perspectives and media ratings include:

  • NewsGuard, a browser extension that provides ratings on media platforms.
  • The Flipside, a newsletter that shares perspectives from the left, right and center on some of the biggest issues of the day.

While ProCon, AllSides and others like them are designed to help news consumers, Akhavan would prefer to see solutions that attack the issue earlier, such as developing critical thinking skills in K-12 education. He suggests that children should be taught “to question the accuracy of the information, to question whether there are multiple perspectives or a single perspective in a particular presentation, and to see whether the language is intended to be persuasive or whether the language is intended to be informative.”

These skills, coupled with media literacy proficiency, would better equip Americans to recognize bias and give them the tools to see through deception from a single, possibly misinformed, media source. Rather than relying on one outlet, they would be more likely to seek out different viewpoints, question the credibility of their sources, and form their own opinions.

