TikTok has become a hotbed of misinformation

Smith Collection/Gado/Getty Images

In the last election cycle, Facebook and Twitter came under heavy criticism because they were used to spread misinformation and disinformation. But as those platforms have matured and others have surged to the forefront, researchers are now examining the negative influence of the newer players. Like TikTok.

The platform, which allows users to create and share short videos, has become tremendously popular, particularly among teens and young adults. It was the second most downloaded app during the first quarter of 2022, according to Forbes, and it has become the second most popular social media platform among teens this year, per the Pew Research Center.

And because TikTok is also capturing a growing share of the searches that once went to Google, particularly among younger users, it has become a significant source of misinformation.

Earlier this month, researchers at NewsGuard sampled TikTok search results on a variety of topics, covering the 2020 presidential election, the midterm elections, Covid-19, abortion and school shootings. They found that nearly 20 percent of the results contained misinformation.

Emphasis theirs:

For example, the first result in a search for the phrase “Was the 2020 election stolen?” was a July 2022 video with the text “The Election Was Stolen!” The narrator stated that the “2020 election was overturned. President Trump should get the next two years and he should also be able to run for the next four years. Since he won the election, he deserves it.” (Election officials in all 50 states have affirmed the integrity of the election, and top officials in the Trump administration have dismissed claims of widespread fraud.)

Of the first 20 videos in the search results, six contained misinformation (if not disinformation), including one that used a QAnon hashtag. The same search on Google did not result in web pages promoting misinformation.

Similarly, a search for “January 6 FBI” on TikTok returned eight videos containing misinformation among the top 20, including the top result. Again, Google did not have any misinformation in the top 20.

While Google will search the entire internet – from government websites to news to videos to recipes – a TikTok search will only return videos uploaded to the platform by its users.

TikTok does have a content moderation system and states in its guidelines that misinformation is not accepted. But users appear to have found ways around the AI system that serves as the first line of defense against misinformation.

“There is endless variety, and efforts to evade content moderation (as indicated in [NewsGuard’s] report) will always stay several steps ahead of the efforts by the platform,” said Cameron Hickey, project director for algorithmic transparency at the National Conference on Citizenship, when asked whether there is anything the platforms can do to prevent misinformation from surfacing in search results. “That doesn’t mean the answer is always no, but it means that concrete investment in both understanding what misinformation is out there, how people talk about it, and effectively judging both the validity and danger are a significant undertaking.”

While advocates encourage social media platforms to step up their anti-misinformation efforts, there are other measures that can be taken on the user end, particularly improving education about identifying falsehoods.

“Users on social media need greater media literacy skills in general, but a key focus should be on understanding why messages stick,” said Hickey.

He pointed to three reasons people latch onto misinformation:

  • Motivated reasoning: People want to find content that aligns with their beliefs and values.
  • Emotional appeals: Media consumers need to pause when they have an emotional response to some information and evaluate the cause of the reaction.
  • Easy answers: Be wary of any information that seems too good to be true.
