More than 100 groups demand social media platforms do more to fight election disinformation

Two days after Elon Musk said he would lift Twitter’s permanent ban on Donald Trump if his acquisition goes through, more than 100 organizations called on social media companies to combat disinformation during the 2022 midterm elections.

In a letter sent to the CEOs of the biggest social media companies on Thursday, the leaders of civil rights and democracy reform groups requested that the platforms take a series of steps, including “introducing friction to reduce the spread and amplification of disinformation; consistent enforcement of robust civic integrity policies; and greater transparency into business models that allow disinformation to spread.”

The letter – whose signatories include Common Cause, the Leadership Conference on Civil and Human Rights, the Campaign Legal Center, the League of Women Voters and the NAACP – praised the companies for instituting plans to combat disinformation while demanding the platforms do more and do it consistently.


Meanwhile, Democratic Sen. Michael Bennet of Colorado plans to introduce a bill Thursday that would establish a federal commission to regulate tech companies. According to The Washington Post, the bill would give the government authority to review platforms’ algorithms and to create rules governing content moderation.

“We need an agency with expertise to have a thoughtful approach here,” Bennet told the Post.

But for now, the companies receiving the letter (Facebook, Google, TikTok, Snap, YouTube, Twitter and Instagram) make their own rules. And in the wake of the Jan. 6, 2021, Capitol insurrection fueled by unfounded claims that Joe Biden stole the presidential election from Donald Trump, the groups behind the letter fear further damage to democratic institutions.

“Disinformation related to the 2020 election has not gone away but has only continued to proliferate. In fact, according to recent polls, more than 40 percent of Americans still do not believe President Biden legitimately won the 2020 presidential election. Further, fewer Americans have confidence in elections today than they did in the immediate aftermath of the January 6th insurrection,” they wrote.

The letter lays out eight steps the companies can take to stop the spread of disinformation:

  • Limit the opportunities for users to interact with election disinformation, going beyond the warning labels that have been introduced.
  • Devote more resources to blocking disinformation that targets people who do not speak English.
  • Consistently enforce “civic integrity policies” during election and non-election years.
  • Apply those policies to live content.
  • Prioritize efforts to stop the spread of unfounded voter fraud claims, known as the “Big Lie.”
  • Increase fact-checking of election content, including political advertisements and statements from public officials.
  • Allow outside researchers and watchdogs access to social media data.
  • Increase transparency of internal policies, political ads and algorithms.

“The last presidential election, and the lies that continued to flourish in its wake on social media, demonstrated the dire threat that election disinformation poses to our democracy,” said Yosef Getachew, director of the media and democracy program for Common Cause. “Social media companies must learn from what was unleashed on their platforms in 2020 and helped foster the lies that led a violent, racist mob to storm the Capitol on January 6th. The companies must take concrete steps to prepare their platforms for the coming onslaught of disinformation in the midterm elections. These social media giants must implement meaningful reforms to prevent and reduce the spread of election disinformation while safeguarding our democracy and protecting public safety.”


Read More

AI has the potential to transform education, mental health, and accessibility—but only if society actively shapes its use. Explore how community-driven norms, better data, and open experimentation can unlock better AI.

Build Better AI

Something I think just about all of us agree on: we want better AI. Regardless of your current perspective on AI, it’s undeniable that, like any other tool, it can unleash human flourishing. There is progress to be made with AI that we should all applaud and work to make happen as soon as possible.

There are kids in rural communities who stand to benefit from AI tutors. There are visually impaired individuals who can navigate the world more easily with AI wearables. There are people struggling with mental health issues who lack access to therapists and need guidance in trying moments. A key barrier to leveraging AI “for good” is our imagination, because in many domains we’ve become accustomed to an unacceptable status quo. That is the real comparison: the alternative to AI isn’t a set of well-functioning systems already operating efficiently and effectively for everyone.

An urgent look at the risks of unregulated artificial intelligence—from job loss and environmental strain to national security threats—and the growing political battle to regulate AI in the United States.

AI Has Put Humanity on the Ballot

AI may not be the only existential threat out there, but it is coming for us the fastest. When I started law school in 2022, AI could barely handle basic math, but by graduation, it could pass the bar exam. Instead of taking the bar myself, I rolled immediately into a Master of Laws in Global Business Law at Columbia, where I took classes like Regulation of the Digital Economy and Applied AI in Legal Practice. By the end of the program, managing partners were comparing using AI to working with a team of associates; the CEO of Anthropic is now warning that it will be more capable than everyone in less than two years.

AI is dangerous in ways we are just beginning to see. Data centers that power AI require vast amounts of water to keep their servers cool, yet two-thirds of them sit in places already facing high water stress. Researchers estimate that data centers’ water needs could grow from 60 billion liters in 2022 to as much as 275 billion liters by 2028, and by then data centers’ share of U.S. electricity consumption could nearly triple.

A lawsuit against xAI over AI-generated deepfakes targeting teenage girls exposes a growing crisis in schools. As laws struggle to keep up, this story explores AI accountability, teen safety, and what educators and parents must do now.

Deepfakes: The New Face of Cyberbullying and Why Parents, Schools, and Lawmakers Must Act

As a former teacher who worked in a high school when Snapchat was born, I witnessed the birth of sexting and its impact on teens. I recall asking a parent whether he was checking his daughter’s phone for inappropriate messages. His response was, “sometimes you just don’t want to know.” But the federal lawsuit filed last week against Elon Musk's xAI has put a national spotlight on AI-generated deepfakes and the teenage girls they target. Parents and teachers can’t ignore the crisis inside our schools.

AI Companies Built the Tool. The Grok Lawsuit Says They Own the Damage.

French prosecutors have theorized that Elon Musk deliberately allowed the sexualized image controversy to grow so that it would drive up activity on the platform and boost the company’s valuation. Whether or not that theory is true, a company that decides to build a tool, knows it can be weaponized, and chooses to release it anyway is making a risk-based bet that it can act without consequence. The Grok lawsuit could make these types of business decisions much more costly.

Amazon’s court loss over Just Walk Out highlights a deeper issue: employers are increasingly collecting workers’ biometric data without meaningful consent. Explore the growing conflict between workplace surveillance, privacy rights, and outdated U.S. laws.

The Quiet Rise of Employee Surveillance

Amazon’s loss in court over its attempt to shield the source code behind its Just Walk Out technology is a small win for shoppers, but the bigger story is how employers are quietly collecting biometric data from their own workers.

From factories to Fortune 500 companies, employers are demanding fingerprints, palmprints, retinal scans, facial scans, and even voice prints. These biometric technologies are eroding the boundary between workplace oversight and employee autonomy, often without consent or meaningful regulation.
