Downey is an intern for The Fulcrum and a graduate student at Northwestern's Medill School of Journalism.
Social media platforms’ algorithms are designed to promote content that excites. It doesn’t matter whether a viral video of a politician yelling at a constituent was taken out of context or even artificially generated. If it evokes an emotional response, it is more likely to show up on other users’ feeds, meaning more views, more likes, more comments and more shares.
Half of 18- to 29-year-olds in the United States said they had “some or a lot of trust in the information they get from social media sites,” according to a 2022 Pew Research Center study. But if the information they see on these platforms is inaccurate or entirely fabricated, there is a risk that young people, the biggest consumers of social media content, will fall victim to false information.
“The algorithm really does not care about what’s true or what’s helpful or what’s civically engaged. It cares about keeping you entertained and participating,” said Ethan Zuckerman, director of the Initiative for Digital Public Infrastructure at the University of Massachusetts at Amherst.
The responsibility to combat the rise of misinformation on these platforms does not fall to users, social media companies or policymakers alone, according to Michael Best, a professor of international affairs and interactive computing at Georgia Tech. It has to be a group effort.
“At this point it’s an all-hands-on-deck kind of challenge because it’s so significant and pervasive. So I would not say that one piece of the equation can fully respond to the challenge,” he said.
Users are the first piece of the equation, according to Best. They have a personal responsibility to develop what he called their “media consumption literacy.”
“Don’t just trust ‘randos’ because they get a lot of likes or attention,” he said.
Social platforms’ business models are focused on engagement. So those “randos” who have a ton of views are often the result of the algorithm “tipping the scales toward the most exciting content,” Best added. “And exciting, again, often privileges content of concern.”
He and other experts suggest that the best way to counteract the threat of spreading or consuming misinformation is to fact-check content from unknown users, and even what’s shared by people you follow. One of the easiest ways to do this, experts said, is to compare social media content with coverage from more traditional news sources that have proven their legitimacy over time.
“In general, it’s always good to get multiple points of view. And I think that’s true for news as it is for anything else, so I would be worried about anyone who’s just getting their news on TikTok,” Zuckerman said.
Social media users once found more mainstream news on certain platforms.
Facebook’s referral traffic used to drive many users to news outlets’ websites, but in recent years Facebook and Meta’s other platforms, like Instagram, have moved away from news and politics. In a post on Threads last summer, Instagram head Adam Mosseri wrote that the negativity associated with news and politics “is not at all worth the scrutiny.” Just last month, Facebook removed its News tab in the United States, signaling a massive shift away from news and political content.
X, formerly known as Twitter, also changed how it shares news after Elon Musk took ownership in October 2022. Last year, he announced that X would stop displaying headlines on links to news articles because he believed “it will greatly improve the esthetics” of tweets. After complaints from users, headlines were restored earlier this year, although much smaller in size.
While news consumption on other social media sites has declined or stagnated, the share of U.S. TikTok users who get news on the platform has doubled since 2020. Nearly half of users said “they regularly get news there,” a Pew Research Center study released last month found.
Like consumers of any content, especially algorithmic content, users should “consider diversifying your diet so that you’re getting a wide variety of stuff,” Zuckerman said. But supplementing social media with more traditional media is only one piece of the puzzle. Social media companies themselves bear a large responsibility to ensure their platforms are not spreading dis-, mis- and malinformation.
Social media companies have adopted some measures to combat misinformation. Meta partners with third-party fact-checking organizations on Facebook, Instagram and most recently Threads to review the accuracy of posts and stories. Content identified as false is labeled as misinformation and distribution is reduced. This type of fact-checking became prevalent after COVID-19 misinformation spread across the platform during the pandemic.
Like Meta, TikTok adopted a global fact-checking program to assess the accuracy of content posted to the platform. If content is flagged as harmful misinformation, TikTok will remove the video or restrict its distribution. X rebranded its old fact-checking platform, Birdwatch, as Community Notes after Musk took over in 2022. Community Notes allows users to submit helpful context to posts that could be misleading.
Social media feeds are not designed for facts and news; they “are optimized to keep you engaged,” Zuckerman said. “They’re optimized to keep you participating, keep you clicking.” The hold social media companies have on users is strong, especially when algorithms are built to confirm a user’s biases and beliefs. There are ways, though, that platforms can better serve young users who rely heavily on social media for their news.
“You could build social media feeds that aim for diversity,” Zuckerman said. For example, he described an algorithm that would suggest a few Republicans for the user to follow if it noticed the user followed Democrats exclusively. If the algorithm saw a user following a lot of Americans, it would recommend accounts that give the person a more global view.
“People might not enjoy it as much as they enjoy their current confirmations, but you could imagine it being civically useful, you could imagine it giving you a wider view of the world. We just haven't seen much of it,” Zuckerman added.
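To make Zuckerman’s idea concrete, here is a minimal sketch, in Python, of a feed feature that recommends accounts from groups a user rarely follows. The function name, the group labels and the tie-breaking behavior are hypothetical illustrations, not any platform’s actual recommendation code.

```python
from collections import Counter

# Hypothetical sketch of a diversity-aiming recommender, in the spirit of
# Zuckerman's example. Group labels and logic are illustrative assumptions.

def suggest_diverse_follows(followed, candidates, max_suggestions=3):
    """Suggest accounts from groups under-represented in a user's follows.

    followed:   list of (handle, group) pairs the user already follows.
    candidates: list of (handle, group) pairs available to recommend.
    """
    counts = Counter(group for _, group in followed)
    already = {handle for handle, _ in followed}

    # Rank groups from least-followed to most-followed.
    groups = sorted({g for _, g in candidates}, key=lambda g: counts.get(g, 0))

    suggestions = []
    for group in groups:
        for handle, g in candidates:
            if g == group and handle not in already:
                suggestions.append(handle)
                if len(suggestions) == max_suggestions:
                    return suggestions
    return suggestions

# A user who follows only Democratic accounts gets Republican and
# international suggestions first (tie order between those groups may vary).
follows = [("@dem_a", "Democrat"), ("@dem_b", "Democrat")]
pool = [("@gop_a", "Republican"), ("@mp_a", "International"),
        ("@dem_c", "Democrat")]
print(suggest_diverse_follows(follows, pool))
```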
De-ranking is another step social media companies can take against content they consider offensive, harmful or extreme. “That’s not censoring it, it’s just making it less prominent,” Best said. De-ranking moves content seen as harmful or false out of the top search results and further down in users’ feeds so fewer people view it. It is a step shy of the more extreme option of deplatforming.
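As a rough illustration of how de-ranking differs from removal, here is a minimal sketch, assuming a feed ordered by an engagement score; the penalty factor and the post data are made-up values, not a real platform’s parameters.

```python
# Illustrative de-ranking sketch: flagged posts stay on the platform but
# sink in the feed. The 0.1 penalty is an arbitrary assumption.

def rank_feed(posts, flagged_ids, penalty=0.1):
    """Order posts by engagement score, demoting (not removing) flagged ones.

    posts:       list of dicts with "id" and "score" keys.
    flagged_ids: ids that a fact-checking process marked as likely false.
    """
    def effective_score(post):
        if post["id"] in flagged_ids:
            return post["score"] * penalty  # demoted, still visible
        return post["score"]

    return sorted(posts, key=effective_score, reverse=True)

feed = [
    {"id": "viral_clip", "score": 980.0},  # flagged as misleading
    {"id": "local_news", "score": 120.0},
    {"id": "explainer", "score": 75.0},
]
print(rank_feed(feed, flagged_ids={"viral_clip"}))
# viral_clip's effective score falls to 98.0: below local_news, above explainer.
```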
“Deplatforming is sort of the ultimate method that generally platforms would have against content they’re concerned with. That means removing those offending users,” Best said. De-ranking and deplatforming come with the risk of impinging on users’ First Amendment rights, though the courts have reached competing conclusions so far.
The U.S. Supreme Court heard two cases on the issue earlier this year. A Texas law prevented social media companies from “censoring, banning, demonetizing or otherwise restricting” content strictly because it expresses a user’s opinion; a federal appeals court upheld the law, determining that corporations do not have a First Amendment right to censor what people say. A Florida law imposed daily penalties on social media companies that deplatformed political candidates or “journalistic enterprises”; in that case, a different appeals court ruled in favor of the social media companies, which as private entities are entitled to moderate content. A Supreme Court ruling, expected by the end of June, will shape both what users can post and what social media companies can moderate in the future.
Representatives for Meta, TikTok and X did not respond to requests for interviews.
Policymakers are the last piece of the equation, Best said. In the past year, they have started to hold social media companies accountable: Meta CEO Mark Zuckerberg testified before Congress earlier this year, and Congress passed a bill last month that bans TikTok if it is not sold by its Chinese owner. Best suggested policymakers have the ability to hold social media companies accountable for their business models and possibly push them away from algorithms that reward content of concern.
Doing away with algorithms entirely may not change things though, Zuckerman said. “If you get rid of the algorithm altogether, you’re probably going to end up with more and more people isolating themselves because that’s what tends to happen when people have choice,” he said.
Even with all its flaws, social media at its best serves a purpose for young people and for democracy, Zuckerman argued.
“Democracy requires media,” Zuckerman said. “We have to have the capability of talking to each other and making up our minds about who we want to represent us. So you have to be able to have some space in society where people can have those conversations.”