TikTok has become a hotbed of misinformation

TikTok (Smith Collection/Gado/Getty Images)

In the last election cycle, Facebook and Twitter came under heavy criticism because they were used to spread misinformation and disinformation. But as those platforms have matured and others have surged to the forefront, researchers are now examining the negative influence of newer players like TikTok.

The platform, which allows users to create and share short videos, has become tremendously popular, particularly among teens and young adults. It was the second most downloaded app during the first quarter of 2022, according to Forbes, and it has become the second most popular social media platform among teens this year, per the Pew Research Center.


And because a growing number of users now turn to TikTok as a search engine, eating into a big chunk of Google’s search dominance, it has also become a significant source of misinformation.

Earlier this month, researchers at NewsGuard sampled TikTok search results on a variety of topics, including the 2020 presidential election, the midterm elections, Covid-19, abortion and school shootings. They found that nearly 20 percent of the results contained misinformation.

Emphasis theirs:

For example, the first result in a search for the phrase “Was the 2020 election stolen?” was a July 2022 video with the text “The Election Was Stolen!” The narrator stated that the “2020 election was overturned. President Trump should get the next two years and he should also be able to run for the next four years. Since he won the election, he deserves it.” (Election officials in all 50 states have affirmed the integrity of the election, and top officials in the Trump administration have dismissed claims of widespread fraud.)

Of the first 20 videos in the search results, six contained misinformation (if not disinformation), including one that used a QAnon hashtag. The same search on Google did not result in web pages promoting misinformation.

Similarly, a search for “January 6 FBI” on TikTok returned eight videos containing misinformation among the top 20, including the top result. Again, Google did not have any misinformation in the top 20.

While Google will search the entire internet – from government websites to news to videos to recipes – a TikTok search will only return videos uploaded to the platform by its users.

TikTok does have a content moderation system and states in its guidelines that misinformation is not accepted. But users appear to have found ways around the AI system that serves as the first line of defense against misinformation.

“There is endless variety, and efforts to evade content moderation (as indicated in [NewsGuard’s] report) will always stay several steps ahead of the efforts by the platform,” said Cameron Hickey, project director for algorithmic transparency at the National Conference on Citizenship, when asked whether there is anything the platforms can do to prevent misinformation from surfacing in search results. “That doesn’t mean the answer is always no, but it means that concrete investment in both understanding what misinformation is out there, how people talk about it, and effectively judging both the validity and danger are a significant undertaking.”

While advocates encourage social media platforms to step up their anti-misinformation efforts, there are also steps that can be taken on the user end, particularly better education about how to identify falsehoods.

“Users on social media need greater media literacy skills in general, but a key focus should be on understanding why messages stick,” said Hickey.

He pointed to three reasons people latch onto misinformation:

  • Motivated reasoning: People want to find content that aligns with their beliefs and values.
  • Emotional appeals: Media consumers need to pause when they have an emotional response to some information and evaluate the cause of the reaction.
  • Easy answers: Be wary of any information that seems too good to be true.

Read More

Entertainment Can Improve How Democrats and Republicans See Each Other

Since the development of American mass media culture in the mid-20th century, numerous examples of entertainment media have tried to improve attitudes towards those who have traditionally held little power.

Getty Images, skynesher

Entertainment has been used for decades to improve attitudes toward other groups, both in the U.S. and abroad. One can think of movies like Guess Who's Coming to Dinner, which helped change attitudes toward Black Americans, or TV shows like Roseanne, which helped humanize the White working class. Efforts internationally show that media can sometimes improve attitudes toward two groups concurrently.

Substantial research shows that Americans now hold overly negative views of those across the political spectrum. We can learn from decades of experience using entertainment to improve attitudes toward other groups, and also from counter-examples that reinforced stereotypes and whose techniques should generally be avoided, in order to improve attitudes toward fellow Americans across politics. This entertainment can allow Americans across the political spectrum to form more accurate views of each other while realizing that successful cross-ideological friendships and collaborations are possible.

Congress Must Not Undermine State Efforts To Regulate AI Harms to Children

Getty Images, Dmytro Betsenko

A cornerstone of conservative philosophy is that policy decisions should generally be left to the states. Apparently, this does not apply when the topic is artificial intelligence (AI).

In the name of promoting innovation, and at the urging of the tech industry, Congress quietly included in a 1,000-page bill a single sentence that has the power to undermine efforts to protect against the dangers of unfettered AI development. The sentence imposes a ten-year ban on state regulation of AI, including prohibiting the enforcement of laws already on the books. This brazen approach crossed the line even for conservative U.S. Representative Marjorie Taylor Greene, who remarked, “We have no idea what AI will be capable of in the next 10 years, and giving it free rein and tying states' hands is potentially dangerous.” She’s right. And it is especially dangerous for children.


Many people inside and outside of the podcasting world are working to use the medium as a way to promote democracy and civic engagement.

Getty Images, Sergey Mironov

Ben Rhodes on How Podcasts Can Strengthen Democracy

After the 2024 election was deemed the “podcast election,” many people inside and outside of the podcasting world were left wondering how to capitalize on the medium as a way to promote democracy and civic engagement to audiences who are either burned out by or distrustful of traditional or mainstream news sources.

The Democracy Group podcast network has been working through this question since its founding in 2020—long before presidential candidates appeared on some of the most popular podcasts to appeal to specific demographics. Our members recently met in Washington, D.C., for our first convening to learn from each other and from high-profile podcasters like Jessica Tarlov, host of Raging Moderates, and Ben Rhodes, host of Pod Save the World.

True Confessions of an AI Flip Flopper
Getty Images - stock photo

A few years ago, I would have agreed with the argument that the most important AI regulatory issue is mitigating the low probability of catastrophic risks. Today, I’d think nearly the opposite. My primary concern is that we will fail to realize the already feasible and significant benefits of AI. What changed and why do I think my own evolution matters?

Discussion of my personal path from a more “safety” oriented perspective to one that some would label an “accelerationist” view isn’t important because I, Kevin Frazier, have altered my views. Walking through my pivot is valuable because it may help those unsure of how to think about these critical issues navigate a complex and increasingly heated debate. By sharing my own change in thought, I hope others will feel welcome to do two things: first, reject unproductive, static labels that are misaligned with a dynamic technology; and second, adjust their own views in light of the wide variety of shifting variables at play when it comes to AI regulation. More generally, I believe that calling myself out for a so-called “flip-flop” may give others more leeway to do the same without feeling like they’ve committed some wrong.
