More than 100 groups demand social media platforms do more to fight election disinformation

Two days after Elon Musk said he would lift Twitter’s permanent ban on Donald Trump if his acquisition goes through, more than 100 organizations called on social media companies to combat disinformation during the 2022 midterm elections.

In a letter sent to the CEOs of the biggest social media companies on Thursday, the leaders of civil rights and democracy reform groups requested the platforms take a series of steps, including “introducing friction to reduce the spread and amplification of disinformation; consistent enforcement of robust civic integrity policies; and greater transparency into business models that allow disinformation to spread.”

The letter – whose signatories include Common Cause, Leadership Conference on Civil and Human Rights, Campaign Legal Center, the League of Women Voters and the NAACP – praised the companies for instituting plans to combat disinformation while demanding the platforms do more and do it consistently.


Meanwhile, Democratic Sen. Michael Bennet intends to offer a bill Thursday to establish a federal commission to regulate tech companies. According to The Washington Post, the bill would give the government authority to review platforms’ algorithms and to create rules governing content moderation.

“We need an agency with expertise to have a thoughtful approach here,” Bennet told the Post.

But for now, the companies receiving the letter (Facebook, Google, TikTok, Snap, YouTube, Twitter and Instagram) make their own rules. And in the wake of the Jan. 6, 2021, Capitol insurrection fueled by unfounded claims that Joe Biden stole the presidential election from Donald Trump, the groups behind the letter fear further damage to democratic institutions.

“Disinformation related to the 2020 election has not gone away but has only continued to proliferate. In fact, according to recent polls, more than 40 percent of Americans still do not believe President Biden legitimately won the 2020 presidential election. Further, fewer Americans have confidence in elections today than they did in the immediate aftermath of the January 6th insurrection,” they wrote.

The letter lays out eight steps the companies can take to stop the spread of disinformation:

  • Limit the opportunities for users to interact with election disinformation, going beyond the warning labels that have been introduced.
  • Devote more resources to blocking disinformation that targets people who do not speak English.
  • Consistently enforce “civic integrity policies” during election and non-election years.
  • Apply those policies to live content.
  • Prioritize efforts to stop the spread of unfounded voter fraud claims, known as the “Big Lie.”
  • Increase fact-checking of election content, including political advertisements and statements from public officials.
  • Allow outside researchers and watchdogs access to social media data.
  • Increase transparency of internal policies, political ads and algorithms.

“The last presidential election, and the lies that continued to flourish in its wake on social media, demonstrated the dire threat that election disinformation poses to our democracy,” said Yosef Getachew, director of the media and democracy program for Common Cause. “Social media companies must learn from what was unleashed on their platforms in 2020 and helped foster the lies that led a violent, racist mob to storm the Capitol on January 6th. The companies must take concrete steps to prepare their platforms for the coming onslaught of disinformation in the midterm elections. These social media giants must implement meaningful reforms to prevent and reduce the spread of election disinformation while safeguarding our democracy and protecting public safety.”


Read More

New Cybersecurity Rules for Healthcare? Understanding HHS’s HIPAA Proposal

Background

The Health Insurance Portability and Accountability Act (HIPAA) was enacted in 1996 to protect sensitive health information from being disclosed without patients’ consent. Under this act, a patient’s privacy is safeguarded through the enforcement of strict standards on managing, transmitting, and storing health information.

A case for optimism, risk-taking, and policy experimentation in the age of AI—and why pessimism threatens technological progress.

In Defense of AI Optimism

Society needs people to take risks. Entrepreneurs who bet on themselves create new jobs. Institutions that gamble with new processes find out how best to integrate advances into modern life. Regulators who accept potential backlash by launching policy experiments give us a chance to devise laws that are based on evidence, not fear.

The need for risk-taking is all the more important when society is presented with new technologies. When new tech arrives on the scene, defending the status quo is the easier path: individually, institutionally, and societally. We are all predisposed to think that the calamities, ailments, and flaws we experience today, as bad as they may be, are preferable to the unknowns tied to tomorrow.

Trump Signs Defense Bill Prohibiting China-Based Engineers in Pentagon IT Work

President Donald Trump signed into law this month a measure that prohibits anyone based in China and other adversarial countries from accessing the Pentagon’s cloud computing systems.

The ban, which is tucked inside the $900 billion defense policy law, was enacted in response to a ProPublica investigation this year that exposed how Microsoft used China-based engineers to service the Defense Department’s computer systems for nearly a decade — a practice that left some of the country’s most sensitive data vulnerable to hacking from its leading cyber adversary.

AI-powered wellness tools promise care at work, but raise serious questions about consent, surveillance, and employee autonomy.

Why Workplace Wellbeing AI Needs a New Ethics of Consent

Across the U.S. and globally, employers—including corporations, healthcare systems, universities, and nonprofits—are increasing investment in worker well-being. The global corporate wellness market reached $53.5 billion in sales in 2024, with North America leading adoption. Corporate wellness programs now use AI to monitor stress, track burnout risk, or recommend personalized interventions.

Vendors offering AI-enabled well-being platforms, chatbots, and stress-tracking tools are rapidly expanding. Chatbots such as Woebot and Wysa are increasingly integrated into workplace wellness programs.
