How can social media better inform young users?

Social media apps on a phone
Jonathan Raa/NurPhoto via Getty Images

Downey is an intern for The Fulcrum and a graduate student at Northwestern's Medill School of Journalism.

Social media platforms’ algorithms are tailored to promote content that excites. It doesn’t matter if a viral video of a politician yelling at a constituent was taken out of context or even artificially generated: if it evokes an emotional response, it is more likely to show up on other users’ feeds, meaning more views, more likes, more comments and more shares.
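The engagement logic described above can be made concrete with a toy example. This is a hypothetical sketch, not any platform’s actual ranking code: the posts, scores and weights are invented, and the point is only that accuracy never enters the sort key.

```python
# Hypothetical sketch of engagement-optimized feed ranking.
# Posts, weights, and scores are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    comments: int
    shares: int
    accurate: bool  # whether the claim checks out -- never used below

def engagement_score(post: Post) -> float:
    # Comments and shares signal stronger reactions than likes,
    # so weight them more heavily (weights are arbitrary here).
    return post.likes + 3 * post.comments + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Note: the `accurate` field plays no role in the ordering.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Measured policy explainer", likes=120, comments=10, shares=5, accurate=True),
    Post("Out-of-context outrage clip", likes=90, comments=80, shares=60, accurate=False),
])
print(feed[0].title)  # the outrage clip ranks first despite being false
```

The sketch only shows the incentive structure: a post that provokes comments and shares outranks a more accurate but calmer one.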

Half of 18- to 29-year-olds in the United States said they had “some or a lot of trust in the information they get from social media sites,” according to a 2022 Pew Research Center study. But if the information they see on these platforms is inaccurate or entirely fabricated, there is a risk that young people — the biggest consumers of social media content — will fall victim to false information.


“The algorithm really does not care about what’s true or what’s helpful or what’s civically engaged. It cares about keeping you entertained and participating,” said Ethan Zuckerman, director of the Initiative for Digital Public Infrastructure at the University of Massachusetts at Amherst.

The responsibility to combat the rise of misinformation on these platforms does not fall solely to users, social media companies or policymakers, according to Michael Best, a professor of international affairs and interactive computing at the Georgia Institute of Technology — it has to be a group effort.

“At this point it’s an all-hands-on-deck kind of challenge because it’s so significant and pervasive. So I would not say that one piece of the equation can fully respond to the challenge,” he said.

The first piece of the equation is the users, according to Best. They have a personal responsibility to develop what he called their “media consumption literacy.”

“Don’t just trust ‘randos’ because they get a lot of likes or attention,” he said.

Social platforms’ business models are focused on engagement. So those “randos” who have a ton of views are often the result of the algorithm “tipping the scales toward the most exciting content,” Best added. “And exciting, again, often privileges content of concern.”

He and other experts suggest that the best way to counteract the threat of spreading or consuming misinformation is to fact-check content from unknown users, or even what’s shared by the people you follow. Comparing social media content to coverage from more traditional news sources that have proven their legitimacy over time is one of the easiest ways to do this, experts said.

“In general, it’s always good to get multiple points of view. And I think that’s true for news as it is for anything else, so I would be worried about anyone who’s just getting their news on TikTok,” Zuckerman said.

Social media consumers used to find more mainstream news on certain social media platforms.

Facebook’s referral traffic once drove many users to news outlets’ websites, but in recent years Facebook and Meta’s other platforms, like Instagram, have moved away from news and politics. In a Threads post last summer, Instagram head Adam Mosseri shared that the negativity associated with news and politics “is not at all worth the scrutiny.” Just last month, Facebook removed its Facebook News tab in the United States, signaling a major shift away from news and political content.

X, formerly known as Twitter, also changed how it shares news after Elon Musk took ownership in October 2022. Last year, he announced that X would stop displaying headlines on links to news articles because he believed “it will greatly improve the esthetics” of tweets. After complaints from users, headlines were restored earlier this year, although in much smaller type.

While news consumption on other social media sites has declined or stagnated, the share of U.S. TikTok users who get their news on the platform has doubled since 2020. Nearly half of users said “they regularly get news there,” a Pew Research Center study released last month found.

Like consumers of any content, especially algorithmic content, users should “consider diversifying your diet so that you’re getting a wide variety of stuff,” Zuckerman said. But supplementing social media with more traditional media is only one piece of the puzzle. Social media companies themselves have a large responsibility to ensure that the information on their platforms is not spreading dis-, mis- and malinformation.

Social media companies have adopted some measures to combat misinformation. Meta partners with third-party fact-checking organizations on Facebook, Instagram and most recently Threads to review the accuracy of posts and stories. Content identified as false is labeled as misinformation and distribution is reduced. This type of fact-checking became prevalent after COVID-19 misinformation spread across the platform during the pandemic.

Like Meta, TikTok adopted a global fact-checking program to assess the accuracy of content posted to the platform. If content is flagged as harmful misinformation, TikTok will remove the video or restrict its distribution. X rebranded its old fact-checking platform, Birdwatch, to Community Notes once Musk took over in 2022. Community Notes allows users to submit helpful context to posts that could be misleading.

Social media feeds are not designed for facts and news; they “are optimized to keep you engaged,” Zuckerman said. “They’re optimized to keep you participating, keep you clicking.” The hold social media companies have on users is strong, especially when algorithms are built to confirm a user’s biases and beliefs. Still, there are a few ways platforms can better serve young users who rely heavily on social media for their news.

“You could build social media feeds that aim for diversity,” Zuckerman said. For example, he described an algorithm that would suggest a few Republicans for a user to follow if it noticed the user followed Democrats exclusively. If the algorithm saw a user following mostly Americans, it would recommend accounts that offer a more global view.

“People might not enjoy it as much as they enjoy their current confirmations, but you could imagine it being civically useful, you could imagine it giving you a wider view of the world. We just haven't seen much of it,” Zuckerman added.
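The diversity-aiming feed Zuckerman describes can be sketched in miniature. Everything here is hypothetical: the account handles and party labels are invented, and a real recommender would work from far richer signals than a flat list of follows.

```python
# Hypothetical sketch of a diversity-aiming recommender: if a user's
# follows are politically one-sided, suggest a few accounts from the
# other side. All handles and labels are invented for illustration.
from collections import Counter

CANDIDATES = {
    "Democrat": ["@dem_rep_a", "@dem_rep_b"],
    "Republican": ["@gop_rep_a", "@gop_rep_b"],
}

def suggest_for_balance(followed_parties: list[str], k: int = 2) -> list[str]:
    counts = Counter(followed_parties)
    # One-sided follow graph: recommend accounts from the other party.
    if len(counts) == 1:
        (party,) = counts
        other = "Republican" if party == "Democrat" else "Democrat"
        return CANDIDATES[other][:k]
    return []  # already mixed: no corrective suggestions needed

print(suggest_for_balance(["Democrat", "Democrat", "Democrat"]))
```

The same shape of rule could swap “party” for “country” to produce the more global view Zuckerman mentions.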

De-ranking is another measure social media companies can take against content they consider offensive, harmful or extreme. “That’s not censoring it, it’s just making it less prominent,” Best said. De-ranking pushes content seen as harmful or false out of the top search results and further down the feed so fewer people view it. It is a step shy of the more extreme option of deplatforming.
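De-ranking as Best describes it amounts to applying a penalty to a ranking score rather than removing the item. The items, scores and penalty factor below are invented for illustration; no platform’s actual weights are known here.

```python
# Hypothetical sketch of de-ranking: flagged content is not deleted,
# its relevance score is just multiplied by a penalty so it falls
# lower in the results. Scores and the penalty are invented.
def deranked_order(results: list[tuple[str, float, bool]],
                   penalty: float = 0.1) -> list[str]:
    # Each result is (item, relevance_score, flagged_as_harmful).
    adjusted = [(item, score * (penalty if flagged else 1.0))
                for item, score, flagged in results]
    return [item for item, _ in
            sorted(adjusted, key=lambda r: r[1], reverse=True)]

order = deranked_order([
    ("debunked claim", 0.9, True),    # would top the list unpenalized
    ("fact-checked report", 0.6, False),
])
print(order)  # the flagged item stays available, just lower
```

Note that both items remain in the output, which is the distinction Best draws between de-ranking and censorship.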

“Deplatforming is sort of the ultimate method that generally platforms would have against content they’re concerned with. That means removing those offending users,” Best said. De-ranking and deplatforming come with the risk of infringing on users’ First Amendment rights, though the courts have so far reached competing conclusions.

The U.S. Supreme Court heard two cases on the issue earlier this year. A Texas law prevented social media companies from “censoring, banning, demonetizing or otherwise restricting” content strictly because it expresses a user’s opinion; a federal appeals court ruled in favor of the state, determining corporations do not have a First Amendment right to censor what people say. A Florida law imposed daily penalties on social media companies that deplatformed political candidates or a “journalistic enterprise”; there, a federal appeals court ruled in favor of the social media companies, finding that as private entities they are entitled to moderate content. The Supreme Court’s ruling, expected by the end of June, will shape both the content users can post and what social media companies can moderate.

Representatives for Meta, TikTok and X did not respond to requests for interviews.

Policymakers are the last piece of the equation, Best said. In the last year, policymakers have started to hold social media companies accountable, with Meta CEO Mark Zuckerberg testifying before Congress earlier this year and Congress passing a bill last month that bans TikTok if it’s not sold by its Chinese owner. Best suggested that policymakers have the ability to hold social media companies accountable for their business models and possibly influence them to move away from algorithms that reward content of concern.

Doing away with algorithms entirely may not change things though, Zuckerman said. “If you get rid of the algorithm altogether, you’re probably going to end up with more and more people isolating themselves because that’s what tends to happen when people have choice,” he said.

Even with all of its flaws, at its best social media serves a purpose for young people and for democracy, Zuckerman argued.

“Democracy requires media,” Zuckerman said. “We have to have the capability of talking to each other and making up our minds about who we want to represent us. So you have to be able to have some space in society where people can have those conversations.”


Read More

Meta Undermining Trust but Verify through Paid Links

Facebook is testing limits on shared external links, which would become a paid feature through its Meta Verified program at $14.99 per month.

This change confirms that verification badges are now meaningless signifiers. Yet it wasn’t always so; the verified internet was built to support participation and trust. Beginning with Twitter’s verification program, launched in 2009, a checkmark next to a username indicated that an account had been verified to represent a notable person or the official account of a business. We could believe that an elected official or a brand was who they said they were online. When Twitter Blue, and later X Premium, began to support paid blue checkmarks in November 2022, the visual identification of verification became deceptive. Think fake Eli Lilly accounts posting about free insulin and impersonation accounts for Elon Musk himself.

This week’s move by Meta echoes those changes at Twitter/X, despite significant evidence that they leave information quality and user experience worse than before. Whatever Facebook says, all a paid link tells anyone is that you paid.


Rather than blame AI for young Americans struggling to find work, we need to build: build new educational institutions, new retraining and upskilling programs, and, most importantly, new firms.

Surasak Suwanmake/Getty Images

Blame AI or Build With AI? Only One Approach Creates Jobs

We’re failing young Americans. Many of them are struggling to find work: unemployment among 16- to 24-year-olds topped 10.5% in August. Even those who do find a job often settle for lower-paying roles; more than 50% of college grads are underemployed. To make matters worse, the path to a more stable, lucrative career is up in the air. High school grads in their twenties find jobs at nearly the same rate as those with four-year degrees.

We have two options: blame or build. The first involves blaming AI, as if this new technology alone explains the economic malaise facing Gen Z, and slowing or even stopping AI adoption. For example, there are so-called robot taxes: the thinking goes that placing financial penalties on firms that lean into AI will leave more roles for Gen Z and workers in general. Then there’s the idea of banning or limiting the use of AI in hiring and firing decisions; applicants who have struggled to find work suggest that increased use of AI may be partially at fault. Others have called for giving workers a greater say in whether and to what extent their firm uses AI, which may help firms integrate AI in a way that augments workers rather than replacing them.


A visual representation of deep fake and disinformation concepts, featuring various related keywords in green on a dark background, symbolizing the spread of false information and the impact of artificial intelligence.

Getty Images

Parv Mehta Is Leading the Fight Against AI Misinformation

At a moment when the country is grappling with the civic consequences of rapidly advancing technology, Parv Mehta stands out as one of the most forward‑thinking young leaders of his generation. Recognized as one of the 500 Gen Zers named to the 2025 Carnegie Young Leaders for Civic Preparedness cohort, Mehta represents the kind of grounded, community‑rooted innovator the program was designed to elevate.

A high school student from Washington state, Parv has emerged as a leading youth voice on the dangers of artificial intelligence and deepfakes. He recognized early that his generation would inherit a world where misinformation spreads faster than truth—and where young people are often the most vulnerable targets. Motivated by years of computer science classes and a growing awareness of AI’s risks, he launched a project to educate students across Washington about deepfake technology, media literacy, and digital safety.


As Australia bans social media for kids under 16, U.S. parents face a harder truth: online safety isn’t an individual choice; it’s a collective responsibility.

Getty Images/Keiko Iwabuchi

Parents Must Quit Infighting to Keep Kids Safe Online

Last week, Australia’s social media ban for children under age 16 officially took effect. It remains to be seen how this law will shape families' behavior; however, it’s at least a stand against the tech takeover of childhood. Here in the U.S., however, we're in a different boat — a consensus on what's best for kids feels much harder to come by among both lawmakers and parents.

In order to make true progress on this issue, we must resist the fallacy of parental individualism – that what you choose for your own child is up to you alone. That it’s a personal, or family, decision to allow smartphones, or certain apps, or social media. But it’s not a personal decision. The choice you make for your family and your kids affects them and their friends, their friends' siblings, their classmates, and so on. If there is no general consensus around parenting decisions when it comes to tech, all kids are affected.
