More than 100 groups demand social media platforms do more to fight election disinformation
tommy/Getty Images

Two days after Elon Musk said he would lift Twitter’s permanent ban on Donald Trump if his acquisition goes through, more than 100 organizations called on social media companies to combat disinformation during the 2022 midterm elections.

In a letter sent to the CEOs of the biggest social media companies on Thursday, the leaders of civil rights and democracy reform groups requested the platforms take a series of steps, including “introducing friction to reduce the spread and amplification of disinformation, consistent enforcement of robust civic integrity policies; and greater transparency into business models that allow disinformation to spread.”

The letter – whose signatories include Common Cause, the Leadership Conference on Civil and Human Rights, the Campaign Legal Center, the League of Women Voters and the NAACP – praised the companies for instituting plans to combat disinformation while demanding that the platforms do more and do it consistently.


Meanwhile, Democratic Sen. Michael Bennet intends to introduce a bill Thursday to establish a federal commission to regulate tech companies. According to The Washington Post, the bill would give the government authority to review platforms’ algorithms and to create rules governing content moderation.

“We need an agency with expertise to have a thoughtful approach here,” Bennet told the Post.

But for now, the companies receiving the letter (Facebook, Google, TikTok, Snap, YouTube, Twitter and Instagram) make their own rules. And in the wake of the Jan. 6, 2021, Capitol insurrection fueled by unfounded claims that Joe Biden stole the presidential election from Donald Trump, the groups behind the letter fear further damage to democratic institutions.

“Disinformation related to the 2020 election has not gone away but has only continued to proliferate. In fact, according to recent polls, more than 40 percent of Americans still do not believe President Biden legitimately won the 2020 presidential election. Further, fewer Americans have confidence in elections today than they did in the immediate aftermath of the January 6th insurrection,” they wrote.

The letter lays out eight steps the companies can take to stop the spread of disinformation:

  • Limit the opportunities for users to interact with election disinformation, going beyond the warning labels that have been introduced.
  • Devote more resources to blocking disinformation that targets people who do not speak English.
  • Consistently enforce “civic integrity policies” during election and non-election years.
  • Apply those policies to live content.
  • Prioritize efforts to stop the spread of unfounded voter fraud claims, known as the “Big Lie.”
  • Increase fact-checking of election content, including political advertisements and statements from public officials.
  • Allow outside researchers and watchdogs access to social media data.
  • Increase transparency of internal policies, political ads and algorithms.

“The last presidential election, and the lies that continued to flourish in its wake on social media, demonstrated the dire threat that election disinformation poses to our democracy,” said Yosef Getachew, director of the media and democracy program for Common Cause. “Social media companies must learn from what was unleashed on their platforms in 2020 and helped foster the lies that led a violent, racist mob to storm the Capitol on January 6th. The companies must take concrete steps to prepare their platforms for the coming onslaught of disinformation in the midterm elections. These social media giants must implement meaningful reforms to prevent and reduce the spread of election disinformation while safeguarding our democracy and protecting public safety.”


MQ-9 Predator Drones Hunt Migrants at the Border

FORT HUACHUCA, Ariz. – Inside a windowless and dark shipping container turned into a high-tech surveillance command center, two analysts peered at their own set of six screens showing data from an MQ-9 Predator B drone. Both were looking for two adults and a child who had crossed the U.S.-Mexico border and fled when a Border Patrol agent approached in a truck.

Inside the drone hangar on the other side of the Fort Huachuca base sat another former shipping container, this one occupied by a drone pilot and a camera operator who pivoted the drone's camera to scan nine square miles of shrubs and saguaros for the migrants. Like the command center, the onetime shipping container was dark, lit only by the glow of the computer screens.


As children scroll through endless violence on their screens, experts warn of a mental health crisis fueled by trauma, desensitization, and the erosion of empathy.

Trauma Through Screens: Are We Failing the Children?

I first watched the video of George Floyd’s final moments, as he gasped for air, recorded on a smartphone for the world to witness, in May 2020. It was gut-wrenching to see a man’s life end in such a horrific way, available to anyone with just a click.

That single video, captured by a bystander, spread across over 1.3 billion screens and sent a shockwave throughout the country. It forced people to confront the brutality of racial injustice in a way that could not be ignored, filtered, or explained away.


AI is transforming the workplace faster than ever. Experts warn that automation could reshape jobs, wages, and opportunities for millions of American workers.

Getty Images, d3sign

AI Reshapes the American Workplace—But Where Are the Jobs?

In recent years, American workers have been going through an unprecedented experiment in how we work. During the COVID pandemic and social distancing, U.S. businesses embraced the latest online technologies to vastly expand remote work. That, in turn, ushered in the slow creep of artificial intelligence (AI) applications into every crack and seam of society, including in the workplace.

If 2023 was the year of increasing AI adoption coming out of the pandemic, experts say 2025-26 will be when companies implement deeper changes in the workplace based on ever more pervasive AI.


AI is changing childhood. Kevin Frazier explains why it's critical for parents and mentors to start having the “AI talk” and teach kids safe, responsible AI use.

Getty Images, Elva Etienne

The New Talk: The Need To Discuss AI With Kids

“[I]t is a massively more powerful and scary thing than I knew about.” That’s how Adam Raine’s dad characterized ChatGPT when he reviewed his son’s conversations with the AI tool. Adam tragically died by suicide. His parents are now suing OpenAI and Sam Altman, the company’s CEO, based on allegations that the tool contributed to his death.

This tragic story has rightly prompted a push for tech companies to institute changes and for lawmakers to enact sweeping regulations. While both of those strategies have some merit, computer code and AI-related laws will not address the underlying issue: our kids need guidance from their parents, educators, and mentors about how and when to use AI.
