How can social media better inform young users?

Social media apps on a phone (Jonathan Raa/NurPhoto via Getty Images)

Downey is an intern for The Fulcrum and a graduate student at Northwestern's Medill School of Journalism.

Social media platforms’ algorithms are tailored to promote content that excites. It doesn’t matter if the viral video of a politician yelling at a constituent was taken out of context or even artificially generated: if it evokes an emotional response, it is more likely to show up on other users’ feeds, meaning more views, more likes, more comments and more shares.

Half of 18- to 29-year-olds in the United States said they had “some or a lot of trust in the information they get from social media sites,” according to a 2022 Pew Research Center study. But if the information they see on these platforms is inaccurate or entirely fabricated, there is a risk that young people, the biggest consumers of social media content, will fall victim to false information.


“The algorithm really does not care about what’s true or what’s helpful or what’s civically engaged. It cares about keeping you entertained and participating,” said Ethan Zuckerman, director of the Initiative for Digital Public Infrastructure at the University of Massachusetts at Amherst.

The responsibility to combat the rise of misinformation on these platforms does not fall solely to users, social media companies or policymakers, according to Michael Best, a professor of international affairs and interactive computing at Georgia Tech; it has to be a group effort.


“At this point it’s an all-hands-on-deck kind of challenge because it’s so significant and pervasive. So I would not say that one piece of the equation can fully respond to the challenge,” he said.

The first piece of the equation is the users, according to Best. They have a personal responsibility to develop what he called their “media consumption literacy.”

“Don’t just trust ‘randos’ because they get a lot of likes or attention,” he said.

Social platforms’ business models are focused on engagement. So those “randos” who have a ton of views are often the result of the algorithm “tipping the scales toward the most exciting content,” Best added. “And exciting, again, often privileges content of concern.”
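The “tipping the scales” that Best describes can be illustrated with a toy sketch. The scoring weights below are invented for illustration; no platform publishes its actual formula, and accuracy plays no role in the ranking, which is exactly the problem the experts describe.

```python
# Toy engagement-based feed ranking: posts that provoke more reactions
# rank higher, regardless of whether they are accurate.
def engagement_score(post):
    # Hypothetical weights; real platforms tune these constantly.
    return post["likes"] + 2 * post["comments"] + 3 * post["shares"]

def rank_feed(posts):
    # Most "exciting" content first; truthfulness is never consulted.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "calm_explainer", "likes": 120, "comments": 10, "shares": 5},
    {"id": "out_of_context_clip", "likes": 90, "comments": 80, "shares": 60},
]
feed = rank_feed(posts)
```

In this sketch the out-of-context clip outranks the calmer, better-sourced post purely because it generates more comments and shares.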

He and other experts suggest that the best way to counteract the threat of spreading or consuming misinformation is to fact-check content from unknown users, and even what’s shared by the people you follow. Comparing social media content with coverage from more traditional news sources that have proven their legitimacy over time is one of the easiest ways to do this, experts said.

“In general, it’s always good to get multiple points of view. And I think that’s true for news as it is for anything else, so I would be worried about anyone who’s just getting their news on TikTok,” Zuckerman said.

Social media consumers used to find more mainstream news on certain social media platforms.

Facebook’s referral traffic once drove many users to news outlets’ websites, but in recent years Facebook and Meta’s other platforms, like Instagram, have moved away from news and politics. In a Threads post last summer, Instagram head Adam Mosseri shared that the negativity associated with news and politics “is not at all worth the scrutiny.” Just last month, Facebook removed its Facebook News tab in the United States, signaling a massive shift away from news and political content.

X, formerly known as Twitter, also changed how it shares news once Elon Musk took ownership in October 2022. Last year, he announced that X would stop displaying the headlines on links to news articles because he believed “it will greatly improve the esthetics” of tweets, but after complaints from users, headlines were restored earlier this year (although much smaller in size).

While news consumption on other social media sites has declined or stayed stagnant, the share of U.S. TikTok users who get their news on the platform has doubled since 2020. Nearly half of users said “they regularly get news there,” a Pew Research Center study released last month found.

Like consumers of any content, especially algorithmic content, users should “consider diversifying your diet so that you’re getting a wide variety of stuff,” Zuckerman said. But supplementing social media with more traditional media is only one piece of the puzzle. Social media companies themselves have a large responsibility to ensure that the information on their platforms is not spreading dis-, mis- and malinformation.

Social media companies have adopted some measures to combat misinformation. Meta partners with third-party fact-checking organizations on Facebook, Instagram and most recently Threads to review the accuracy of posts and stories. Content identified as false is labeled as misinformation and distribution is reduced. This type of fact-checking became prevalent after COVID-19 misinformation spread across the platform during the pandemic.

Like Meta, TikTok adopted a global fact-checking program to assess the accuracy of content posted to the platform. If content is flagged as harmful misinformation, TikTok will remove the video or restrict its distribution. X rebranded its old fact-checking platform, Birdwatch, to Community Notes once Musk took over in 2022. Community Notes allows users to submit helpful context to posts that could be misleading.

Social media feeds are not designed for facts and news; they “are optimized to keep you engaged,” Zuckerman said. “They’re optimized to keep you participating, keep you clicking.” The hold social media companies have on users is strong, especially when algorithms are built to confirm a user’s biases and beliefs. There are, though, a few ways platforms can better serve young users who rely heavily on social media for their news.

“You could build social media feeds that aim for diversity,” Zuckerman said. For example, he described an algorithm that would suggest a few Republicans for the user to follow if it noticed a user followed Democrats exclusively. If the algorithm saw a user following a lot of Americans, it would recommend users and accounts that provide the person with a more global view.

“People might not enjoy it as much as they enjoy their current confirmations, but you could imagine it being civically useful, you could imagine it giving you a wider view of the world. We just haven't seen much of it,” Zuckerman added.
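The diversity-aiming feed Zuckerman describes could be sketched as follows. This is a simplified illustration under stated assumptions: the party labels, the account directory, and the one-group trigger are inventions for the example, not any platform’s real logic.

```python
# Toy diversity recommender: if every account a user follows belongs to
# one group, suggest a few accounts from outside that group.
def diversity_suggestions(followed, directory, n=3):
    groups = {directory[account] for account in followed}
    if len(groups) > 1:
        return []  # feed is already mixed; nothing to rebalance
    only_group = groups.pop()
    outside = [acct for acct, group in directory.items()
               if group != only_group and acct not in followed]
    return outside[:n]

# Hypothetical directory mapping accounts to affiliations.
directory = {
    "dem_rep_a": "Democrat", "dem_rep_b": "Democrat",
    "gop_rep_a": "Republican", "gop_rep_b": "Republican",
}
suggested = diversity_suggestions(["dem_rep_a", "dem_rep_b"], directory)
```

A user following only Democrats would be nudged toward a few Republican accounts, and vice versa; the same pattern could recommend non-U.S. accounts to someone following only Americans.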

De-ranking is another step social media companies can take against content they consider offensive, harmful or extreme. “That’s not censoring it, it’s just making it less prominent,” Best said. De-ranking moves content seen as harmful or false out of the top search results and further down in the feed so fewer people view it. It is a step shy of the more extreme option of deplatforming.
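De-ranking as Best describes it amounts to scaling down a flagged post’s ranking score rather than deleting the post. A minimal sketch, where the penalty factor is an assumption chosen for illustration:

```python
# Toy de-ranking: flagged posts keep circulating but their score is
# scaled down, pushing them below most other content without removal.
def deranked_order(posts, flagged, penalty=0.1):
    def score(post):
        base = post["engagement"]
        return base * penalty if post["id"] in flagged else base
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "credible_story", "engagement": 300},
    {"id": "flagged_rumor", "engagement": 900},
]
order = deranked_order(posts, flagged={"flagged_rumor"})
```

Even though the rumor has triple the raw engagement, the penalty drops it below the credible story; nothing is taken down, it is simply shown to fewer people.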

“Deplatforming is sort of the ultimate method that generally platforms would have against content they’re concerned with. That means removing those offending users,” Best said. De-ranking and deplatforming raise First Amendment questions, though, and the courts have reached competing conclusions so far.

The U.S. Supreme Court heard two cases on the issue earlier this year. A Texas law prevented social media companies from “censoring, banning, demonetizing or otherwise restricting” content strictly because it is a user’s opinion; an appeals court upheld the law, determining corporations do not have a First Amendment right to censor what people say. A Florida law imposed daily penalties on social media companies that deplatformed political candidates or “journalistic enterprises”; in that case, an appeals court ruled in favor of the social media companies, which as private entities are entitled to moderate content. The Supreme Court’s ruling, expected by the end of June, will affect both what users can post and what social media companies can moderate.

Representatives for Meta, TikTok and X did not respond to requests for interviews.

Policymakers are the last piece of the equation, Best said. In the last year, policymakers have started to hold social media companies accountable, with Meta CEO Mark Zuckerberg testifying before Congress earlier this year and Congress passing a bill last month that bans TikTok if it’s not sold by its Chinese owner. Best suggested that policymakers have the ability to hold social media companies accountable for their business models and possibly influence them to move away from algorithms that reward content of concern.

Doing away with algorithms entirely may not change things though, Zuckerman said. “If you get rid of the algorithm altogether, you’re probably going to end up with more and more people isolating themselves because that’s what tends to happen when people have choice,” he said.

Even with all of its flaws, at its best social media serves a purpose for young people and for democracy, Zuckerman argued.

“Democracy requires media,” Zuckerman said. “We have to have the capability of talking to each other and making up our minds about who we want to represent us. So you have to be able to have some space in society where people can have those conversations.”
