TikTok has become a hotbed of misinformation


In the last election cycle, Facebook and Twitter came under heavy criticism because they were used to spread misinformation and disinformation. But as those platforms have matured and others have surged to the forefront, researchers are now examining the negative influence of the newer players, like TikTok.

The platform, which allows users to create and share short videos, has become tremendously popular, particularly among teens and young adults. It was the second most downloaded app during the first quarter of 2022, according to Forbes, and it has become the second most popular social media platform among teens this year, per the Pew Research Center.


And because TikTok is increasingly used as a search engine, eating into a big chunk of Google’s search dominance, it has also become a significant source of misinformation.

Earlier this month, researchers at NewsGuard sampled TikTok search results on a variety of topics, covering the 2020 presidential election, the midterm elections, Covid-19, abortion and school shootings. They found that nearly 20 percent of the results contained misinformation.

Emphasis theirs:

For example, the first result in a search for the phrase “Was the 2020 election stolen?” was a July 2022 video with the text “The Election Was Stolen!” The narrator stated that the “2020 election was overturned. President Trump should get the next two years and he should also be able to run for the next four years. Since he won the election, he deserves it.” (Election officials in all 50 states have affirmed the integrity of the election, and top officials in the Trump administration have dismissed claims of widespread fraud.)

Of the first 20 videos in the search results, six contained misinformation (if not disinformation), including one that used a QAnon hashtag. The same search on Google did not result in web pages promoting misinformation.

Similarly, a search for “January 6 FBI” on TikTok returned eight videos containing misinformation among the top 20, including the top result. Again, Google did not have any misinformation in the top 20.

While Google will search the entire internet – from government websites to news to videos to recipes – a TikTok search will only return videos uploaded to the platform by its users.

TikTok does have a content moderation system and states in its guidelines that misinformation is not accepted. But users appear to have found ways around the AI system that serves as the first line of defense against misinformation.

“There is endless variety, and efforts to evade content moderation (as indicated in [NewsGuard’s] report) will always stay several steps ahead of the efforts by the platform,” said Cameron Hickey, project director for algorithmic transparency at the National Conference on Citizenship, when asked whether there is anything the platforms can do to prevent misinformation from surfacing in search results. “That doesn’t mean the answer is always no, but it means that concrete investment in both understanding what misinformation is out there, how people talk about it, and effectively judging both the validity and danger are a significant undertaking.”

While advocates encourage social media platforms to step up their anti-misinformation efforts, there are other steps that can be taken at the user end, particularly by stepping up education about identifying falsehoods.

“Users on social media need greater media literacy skills in general, but a key focus should be on understanding why messages stick,” said Hickey.

He pointed to three reasons people latch onto misinformation:

  • Motivated reasoning: People want to find content that aligns with their beliefs and values.
  • Emotional appeals: Media consumers need to pause when they have an emotional response to some information and evaluate the cause of the reaction.
  • Easy answers: Be wary of any information that seems too good to be true.


Trump Signs Defense Bill Prohibiting China-Based Engineers in Pentagon IT Work


President Donald Trump signed into law this month a measure that prohibits anyone based in China and other adversarial countries from accessing the Pentagon’s cloud computing systems.

The ban, which is tucked inside the $900 billion defense policy law, was enacted in response to a ProPublica investigation this year that exposed how Microsoft used China-based engineers to service the Defense Department’s computer systems for nearly a decade — a practice that left some of the country’s most sensitive data vulnerable to hacking from its leading cyber adversary.


AI-powered wellness tools promise care at work, but raise serious questions about consent, surveillance, and employee autonomy.


Why Workplace Wellbeing AI Needs a New Ethics of Consent

Across the U.S. and globally, employers—including corporations, healthcare systems, universities, and nonprofits—are increasing investment in worker well-being. The global corporate wellness market reached $53.5 billion in sales in 2024, with North America leading adoption. Corporate wellness programs now use AI to monitor stress, track burnout risk, or recommend personalized interventions.

Vendors offering AI-enabled well-being platforms, chatbots, and stress-tracking tools are rapidly expanding. Chatbots such as Woebot and Wysa are increasingly integrated into workplace wellness programs.

Meta Undermining Trust but Verify through Paid Links

Facebook is testing limits on shared external links, which would become a paid feature through their Meta Verified program, which costs $14.99 per month.

This change solidifies that verification badges are now meaningless signifiers. Yet it wasn’t always so; the verified internet was built to support participation and trust. Beginning with Twitter’s verification program launched in 2009, a checkmark next to a username indicated that an account had been verified to represent a notable person or official account for a business. We could believe that an elected official or a brand name was who they said they were online. When Twitter Blue, and later X Premium, began to support paid blue checkmarks in November of 2022, the visual identification of verification became deceptive. Think Fake Eli Lilly accounts posting about free insulin and impersonation accounts for Elon Musk himself.

This week’s move by Meta echoes changes at Twitter/X, despite significant evidence that those changes left information quality and the user experience worse than before. Despite what Facebook says, all a badge tells anyone now is that you paid.


Rather than blame AI for young Americans struggling to find work, we need to build: build new educational institutions, new retraining and upskilling programs, and, most importantly, new firms.


Blame AI or Build With AI? Only One Approach Creates Jobs

We’re failing young Americans. Many of them are struggling to find work. Unemployment among 16- to 24-year-olds topped 10.5% in August. Even among those who do find a job, many of them are settling for lower-paying roles. More than 50% of college grads are underemployed. To make matters worse, the path forward to a more stable, lucrative career is seemingly up in the air. High school grads in their twenties find jobs at nearly the same rate as those with four-year degrees.

We have two options: blame or build. The first involves blaming AI, as if this new technology were solely responsible for the current economic malaise facing Gen Z. This course of action involves slowing or even stopping AI adoption. For example, there are so-called robot taxes. The thinking goes that by placing financial penalties on firms that lean into AI, more roles will be left for Gen Z and workers in general. Then there’s the idea of banning or limiting the use of AI in hiring and firing decisions. Applicants who have struggled to find work suggest that increased use of AI may be partially at fault. Others have called for giving workers a greater say in whether and to what extent their firm uses AI. This may help firms integrate AI in a way that augments workers rather than replacing them.
