Report suggests plan for limiting election disinformation

Cartoon of people trying to shut out noise. (Rudzhan Nagiev/Getty Images)

Eight months after Inauguration Day, one-third of Americans told pollsters they still believed Donald Trump had actually won the 2020 election and that Joe Biden had stolen it from him. A new report offers a mix of government and corporate reforms to limit the spread and influence of such election disinformation.

The Common Cause Education Fund, an affiliate of the democracy reform advocacy group Common Cause, issued a report in late October reviewing the state of disinformation campaigns and offering a series of recommendations designed to stem the tide.

"Just as we came together last year, rising up to vote safely and securely in record numbers during a global pandemic, we must now rise up to stop election disinformation efforts in future elections," the researchers wrote.


The report groups its 14 recommendations into three categories: statutory reforms, executive and regulatory agency reforms, and corporate policy reforms for social media businesses.

"There is no single policy solution to the problem of election disinformation," according to the report. "We need strong voting rights laws, strong campaign finance laws, strong communications and privacy laws, strong media literacy laws, and strong corporate civic integrity policies."

While many of the solutions require some mix of legislative activity, increased civic education and media literacy, and grassroots advocacy, others are easier to achieve — particularly self-imposed corporate reforms, said Jesse Littlewood, vice president of campaigns for Common Cause. For example, he suggested it would not be complicated for social media platforms to consistently enforce their own standards.

"They're not taking any action on disinformation in the 2020 campaign," said Littlewood, referring to ongoing claims that the election was stolen, claims that would have been addressed last year. "When you let your enforcement lax, it allows false narratives to grow."

Littlewood also identified the need to spend more time on civic integrity.

"We learned a lot through the Facebook Papers about the disbanding of the civic team right after the 2020 election, and the historic underinvestment in content moderation particularly in civic integrity issues," he said.

The statutory recommendations focus on five areas:

  • Voter intimidation and false election speech, including state and federal legislation prohibiting the spread of election disinformation.
  • Campaign finance reforms, such as passing federal and state disclosure laws to expose "dark money" and strengthening the Federal Election Commission.
  • Passing media literacy legislation at the state level.
  • Enacting state privacy laws that include civil rights protections.
  • Approving federal legislation to curb some online business practices, such as banning discriminatory algorithms, limiting and protecting the data collected online, and supporting local and watchdog journalism.

Some aspects of these proposals already exist in federal legislation that has stalled in Congress.

The regulatory recommendations fall into four buckets:

  • Demonstrating state and federal leadership through executive action to stop the spread of election disinformation.
  • Stepping up enforcement of state and federal laws that ban voter intimidation and other election interference efforts.
  • Empowering the Federal Trade Commission to step up its privacy protection work.
  • Using the FEC and state agencies to update and enforce disclosure requirements and rules against disinformation.

Finally, Common Cause suggests five areas of improvement for social media corporations:

  • Directing users to official state and local sources of information about voting and elections.
  • Maintaining and improving their self-imposed disinformation rules throughout both election and non-election years.
  • Developing technology, such as artificial intelligence and algorithms, that limits the spread of disinformation.
  • Granting journalists and researchers more access to social media data.
  • Increasing investment in efforts to stop non-English disinformation.

Littlewood said access to the data is one of the most important recommendations because it affects the ability to achieve many of the others.

"It's going to be hard to make progress from a regulatory process if there isn't transparency," said Littlewood. "We're trending in the other direction now, which is really problematic.

"Without access to the data, it's very hard to understand what's happening. It's very difficult to come up with recommendations that balance the private interests of the platform and the public interest. That's got to be our starting point."

Read the full report.

