
How can social media better inform young users?

Social media apps on a phone
Jonathan Raa/NurPhoto via Getty Images

Downey is an intern for The Fulcrum and a graduate student at Northwestern's Medill School of Journalism.

Social media platforms’ algorithms are tailored to promote content that excites. It doesn’t matter if a viral video of a politician yelling at a constituent was taken out of context or even artificially generated: if it evokes an emotional response, it is more likely to show up on other users’ feeds, meaning more views, more likes, more comments and more shares.

Half of 18- to 29-year-olds in the United States said they had “some or a lot of trust in the information they get from social media sites,” according to a 2022 Pew Research Center study. But if the information they see on these platforms is inaccurate or entirely fabricated, young people — the biggest consumers of social media content — risk falling victim to false information.


“The algorithm really does not care about what’s true or what’s helpful or what’s civically engaged. It cares about keeping you entertained and participating,” said Ethan Zuckerman, director of the Initiative for Digital Public Infrastructure at the University of Massachusetts at Amherst.

The responsibility to combat the rise of misinformation on these platforms does not fall solely to users, social media companies or policymakers, according to Michael Best, a professor of international affairs and interactive computing at the Georgia Institute of Technology — it has to be a group effort.


“At this point it’s an all-hands-on-deck kind of challenge because it’s so significant and pervasive. So I would not say that one piece of the equation can fully respond to the challenge,” he said.

The first piece of the equation is the users, according to Best. They have a personal responsibility to develop what he called their “media consumption literacy.”

“Don’t just trust ‘randos’ because they get a lot of likes or attention,” he said.

Social platforms’ business models are focused on engagement. So those “randos” who have a ton of views are often the result of the algorithm “tipping the scales toward the most exciting content,” Best added. “And exciting, again, often privileges content of concern.”

He and other experts suggest that the best way to counteract the threat of spreading or consuming misinformation is to fact-check content from unknown users or even what’s shared by the people you follow. Comparing social media content with coverage from more traditional news sources that have proven their legitimacy over time is one of the easiest ways to do this, experts said.

“In general, it’s always good to get multiple points of view. And I think that’s true for news as it is for anything else, so I would be worried about anyone who’s just getting their news on TikTok,” Zuckerman said.

Social media consumers used to find more mainstream news on certain social media platforms.

Facebook’s referral traffic once drove many users to news outlets’ websites, but in recent years Facebook and Meta’s other platforms, like Instagram, have moved away from news and politics. In a Threads post last summer, Instagram head Adam Mosseri said the negativity associated with news and politics “is not at all worth the scrutiny.” Just last month, Facebook removed its Facebook News tab in the United States, signaling a massive shift away from news and political content.

X, formerly known as Twitter, also changed how it shares news once Elon Musk took ownership in October 2022. Last year, he announced that X would stop displaying the headlines on links to news articles because he believed “it will greatly improve the esthetics” of tweets, but after complaints from users, headlines were restored earlier this year (although much smaller in size).

While news consumption on other social media sites has declined or stayed stagnant, the share of U.S. TikTok users who get their news on the platform has doubled since 2020. Nearly half of users said “they regularly get news there,” a Pew Research Center study released last month found.

Like consumers of any content, especially algorithmic content, users should “consider diversifying your diet so that you’re getting a wide variety of stuff,” Zuckerman said. But supplementing social media with more traditional media is only one piece of the puzzle. Social media companies themselves have a large responsibility to ensure that the information on their platforms is not spreading dis-, mis- and malinformation.

Social media companies have adopted some measures to combat misinformation. Meta partners with third-party fact-checking organizations on Facebook, Instagram and most recently Threads to review the accuracy of posts and stories. Content identified as false is labeled as misinformation and distribution is reduced. This type of fact-checking became prevalent after COVID-19 misinformation spread across the platform during the pandemic.

Like Meta, TikTok adopted a global fact-checking program to assess the accuracy of content posted to the platform. If content is flagged as harmful misinformation, TikTok will remove the video or restrict its distribution. X rebranded its old fact-checking platform, Birdwatch, to Community Notes once Musk took over in 2022. Community Notes allows users to submit helpful context to posts that could be misleading.

Social media feeds are not designed for facts and news; they “are optimized to keep you engaged,” Zuckerman said. “They’re optimized to keep you participating, keep you clicking.” The hold social media companies have on users is strong, especially when algorithms are built to confirm a user’s biases and beliefs. There are a few ways platforms can better serve young users who rely heavily on social media for their news, though.

“You could build social media feeds that aim for diversity,” Zuckerman said. For example, he described an algorithm that would suggest a few Republicans for the user to follow if it noticed a user followed Democrats exclusively. If the algorithm saw a user following a lot of Americans, it would recommend users and accounts that provide the person with a more global view.

“People might not enjoy it as much as they enjoy their current confirmations, but you could imagine it being civically useful, you could imagine it giving you a wider view of the world. We just haven't seen much of it,” Zuckerman added.
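Zuckerman’s idea could be sketched, very loosely, in code. The snippet below is a hypothetical illustration only — the account names, group labels and threshold are all invented, and no platform’s actual recommender works this simply. It captures just the core notion: when one group dominates a user’s follows, surface accounts from underrepresented groups.

```python
# Toy sketch of a "diversity-aiming" follow recommender, in the spirit of
# Zuckerman's example. All names, groups and thresholds are invented.
from collections import Counter

def diversify_suggestions(followed_accounts, candidate_pool, max_suggestions=3):
    """Suggest accounts from groups that are underrepresented
    (below 20% of the user's current follows)."""
    counts = Counter(acct["group"] for acct in followed_accounts)
    total = sum(counts.values())
    suggestions = []
    for candidate in candidate_pool:
        # Prefer groups that make up little or none of the current feed.
        if counts.get(candidate["group"], 0) / total < 0.2:
            suggestions.append(candidate["name"])
        if len(suggestions) == max_suggestions:
            break
    return suggestions

# A user following 9 Democrats and 1 Republican would be nudged toward
# Republican and international accounts rather than more of the same.
follows = [{"name": "a", "group": "democrat"}] * 9 + [{"name": "b", "group": "republican"}]
pool = [
    {"name": "rep_account", "group": "republican"},
    {"name": "dem_account", "group": "democrat"},
    {"name": "global_account", "group": "international"},
]
print(diversify_suggestions(follows, pool))  # → ['rep_account', 'global_account']
```

The same scaffolding covers Zuckerman’s second example: swap the partisan group labels for national ones, and an all-American follow list yields international suggestions.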

De-ranking is another method social media companies can use against content they consider offensive, harmful or extreme. “That’s not censoring it, it’s just making it less prominent,” Best said. De-ranking moves content seen as harmful or false out of the top search results and further down in the algorithm so fewer people view it. This is a step shy of the more extreme option of deplatforming.
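Conceptually, de-ranking amounts to applying a score penalty rather than deleting a post. The sketch below is a made-up illustration of that distinction — the field names, scores and penalty factor are all hypothetical, not any platform’s real ranking system.

```python
# Toy sketch of de-ranking: flagged content stays on the platform but is
# pushed down in the feed rather than removed. All values are invented.

def rank_feed(posts, demote_factor=0.1):
    """Sort posts by engagement, multiplying flagged posts' scores
    by a penalty so they rank lower instead of disappearing."""
    def effective_score(post):
        score = post["engagement"]
        if post.get("flagged_misinfo"):
            score *= demote_factor  # de-rank, don't delete
        return score
    return sorted(posts, key=effective_score, reverse=True)

feed = [
    {"id": 1, "engagement": 900, "flagged_misinfo": True},
    {"id": 2, "engagement": 400},
    {"id": 3, "engagement": 100},
]
# The flagged post's effective score drops from 900 to 90, so it falls
# from first place to last — demoted, but still present.
print([p["id"] for p in rank_feed(feed)])  # → [2, 3, 1]
```

Deplatforming, by contrast, would correspond to filtering the offending user’s posts out of the feed entirely.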

“Deplatforming is sort of the ultimate method that generally platforms would have against content they’re concerned with. That means removing those offending users,” Best said. De-ranking and deplatforming risk infringing on users’ First Amendment rights, though, and the courts have come to competing conclusions so far.

The U.S. Supreme Court heard two cases on the issue earlier this year. A Texas law prevented social media companies from “censoring, banning, demonetizing or otherwise restricting” content strictly because it is a user’s opinion; an appeals court upheld the law, determining corporations do not have a First Amendment right to censor what people say. A Florida law imposed daily penalties on social media companies that deplatformed political candidates or “journalistic enterprises.” In that case, an appeals court ruled in favor of the social media companies, which as private entities are entitled to moderate content. The Supreme Court’s ruling, expected by the end of June, will shape both what content users can post and what social media companies can moderate in the future.

Representatives for Meta, TikTok and X did not respond to requests for interviews.

Policymakers are the last piece of the equation, Best said. In the last year, policymakers have started to hold social media companies accountable, with Meta CEO Mark Zuckerberg testifying before Congress earlier this year and Congress passing a bill last month that bans TikTok if it’s not sold by its Chinese owner. Best suggested that policymakers have the ability to hold social media companies accountable for their business models and possibly influence them to move away from algorithms that reward content of concern.

Doing away with algorithms entirely may not change things though, Zuckerman said. “If you get rid of the algorithm altogether, you’re probably going to end up with more and more people isolating themselves because that’s what tends to happen when people have choice,” he said.

Even with all of its flaws, at its best social media serves a purpose for young people and for democracy, Zuckerman argued.

“Democracy requires media,” Zuckerman said. “We have to have the capability of talking to each other and making up our minds about who we want to represent us. So you have to be able to have some space in society where people can have those conversations.”
