The Anti-Defamation League has a long history of countering hate, bias, and harmful misinformation, which is to say the organization has some strong opinions about what happens on social media. Continuing our exploration of digital disinformation and free speech, Civic Genius chatted with Lauren Krapf, Technology Policy & Advocacy Counsel at the Anti-Defamation League, about how the ADL thinks tech companies can do better and what misinformation looks like in the real world – both online and off.
Athens, GA., bookstore battles bans by stocking shelves
Aug 15, 2024
News Ambassadors is working to narrow the partisan divide through a collaborative journalism project to help American communities that hold different political views better understand each other, while giving student reporters a valuable learning experience in the creation of solutions reporting.
A program of the Bridge Alliance Education Fund, News Ambassadors is directed by Shia Levitt, a longtime public radio journalist who has reported for NPR, Marketplace and other outlets. Levitt has also taught radio reporting and audio storytelling at Brooklyn College in New York and at Mills College in Oakland, Calif., as well as for WNYC’s Radio Rookies program and other organizations.
TODAY'S FEATURED STUDENT REPORT
Radio station partner WUGA recently worked with students at the Grady College of Journalism and Mass Communication and the Solutions Journalism Hub of the South to cover efforts advancing around the country to remove books from the shelves of libraries and schools; the state of Georgia is no exception. Chloe Beaver, a Grady College senior majoring in journalism and business administration, covered the community of Athens, Ga., as part of Charlotte Varnum's podcasting class.
In her radio piece, she explores how one Athens bookstore is responding to what critics call book censorship.
Enjoy Beaver's audio story.
Sign up for The Fulcrum newsletter
******************************************
HOW NEWS AMBASSADORS WORKS:
News Ambassadors enlists journalism students to help local newsrooms fill gaps in coverage of underreported communities by fostering collaborations between journalism schools and local radio stations to create community-responsive reporting that reflects local concerns and priorities. In the spring, News Ambassadors introduced students to two key reporting strategies:
Solutions journalism: rigorous, evidence-based reporting on how people have responded to existing problems.
Complicating the narratives, a conflict-mediation–informed framework designed to help journalists improve their reporting on polarizing issues.
Student participants report stories informed by one or both of these approaches as well as by local community priorities surfaced during preparatory community engagement work. The best stories are shared with local newsrooms for vetting and potential broadcast.
This collaborative project helps young reporters better understand the perspectives of people outside the bubbles where they live and helps American communities that hold different political views better understand each other. Throughout the process, the project links journalism students to counterparts in politically or demographically dissimilar areas to collaborate on stories exploring solutions to contentious issues.
Readers trust journalists less when they debunk rather than confirm claims
Aug 15, 2024
Stein is an associate professor of marketing at California State Polytechnic University, Pomona. Meyersohn is pursuing an Ed.S. in school psychology at California State University, Long Beach.
Pointing out that someone else is wrong is a part of life. And journalists need to do this all the time – their job includes helping sort what’s true from what’s not. But what if people just don’t like hearing corrections?
Our new research, published in the journal Communication Research, suggests that’s the case. In two studies, we found that people generally trust journalists when they confirm claims to be true but are more distrusting when journalists correct false claims.
Some linguistics and social science theories suggest that people intuitively understand social expectations not to be negative. Being disagreeable, like when pointing out someone else’s lie or error, carries with it a risk of backlash.
We reasoned that it follows that corrections are held to a different, more critical standard than confirmations. Attempts to debunk can trigger doubts about journalists’ honesty and motives. In other words, if you’re providing a correction, you’re being a bit of a spoilsport, and that could negatively affect how you are viewed.
How we did our work
Using real articles, we investigated how people feel about journalists who provide “fact checks.”
In our first study, participants read a detailed fact check that either corrected or confirmed some claim related to politics or economics. For instance, one focused on the statement, “Congressional salaries have gone up 231% in the past 30 years,” which is false. We then asked participants about how they were evaluating the fact check and the journalist who wrote it.
Although people were fairly trusting of the journalists in general, more people expressed suspicions toward journalists providing corrections than those providing confirmations. People were less likely to be skeptical of confirmatory fact checks than they were of debunking articles, with the percentage of respondents expressing strong distrust doubling from about 10% to about 22%.
People also said they needed more information to know whether journalists debunking statements were telling the truth, compared with their assessment of journalists who were confirming claims.
In a second study, we presented marketing claims that ultimately proved to be true or false. For example, some participants read an article about a brand that said its cooking hacks would save time, but they didn't actually work. Others read an article about a brand providing cooking hacks that turned out to be genuine.
Again, across several types of products, people thought they needed more evidence in order to believe articles pointing out falsehoods, and they reported distrusting correcting journalists more.
Why it matters
Correcting misinformation is notoriously difficult, as researchers and journalists have found out. The United States is also experiencing a decadeslong decline of trust in journalism. Fact-checking tries to help combat misinformation and disinformation, but our research suggests that there are limits to how much it helps. Providing a debunking might make journalists seem like they’re just being negative.
Our second study also explains a slice of pop culture: the backlash against someone who reveals the misdeeds of another. For example, if you read an article pointing out that a band lied about their origin story, you might notice it seems to create a sub-controversy in the comments, with people angry that anyone was called out at all, even correctly. This scenario is exactly what we'd expect if corrections are automatically scrutinized and distrusted by some people.
What’s next
Future work can explore how journalists can be transparent without undermining trust. It’s reasonable to assume that people will trust a journalist more if they explain how they came to a particular conclusion. However, according to our results, that’s not quite the case. Rather, trust is contingent on what the conclusion is.
People in our studies were quite trusting of journalists when they provided confirmations. And, certainly, people are sometimes fine with corrections, as when outlandish misinformation they already disbelieve is debunked. The challenge for journalists may be figuring out how to provide debunkings without seeming like a debunker.
The Research Brief is a short take on interesting academic work.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Project 2025: Another look at the Federal Communications Commission
Aug 14, 2024
Biffle is a podcast host and contributor at BillTrack50.
This is part of a series offering a nonpartisan counter to Project 2025, a conservative guideline for reforming government and policymaking during the first 180 days of a second Trump administration. The Fulcrum's cross-partisan analysis of Project 2025 relies on unbiased critical thinking, reexamines outdated assumptions, and uses reason, scientific evidence, and data in analyzing and critiquing Project 2025.
Project 2025, the Heritage Foundation’s policy and personnel proposals for a second Trump administration, has four main goals when it comes to the Federal Communications Commission: reining in Big Tech, promoting national security, unleashing economic prosperity, and ensuring FCC accountability and good governance. Today, we’ll focus on the first of those agenda items.
But first, what is the FCC?
The Federal Communications Commission regulates U.S. communications, promoting free speech, economic growth and equitable access to advanced connectivity. Its goals include supporting diverse viewpoints, job creation, secure networks, updated infrastructure, prudent use of taxpayer money and “ensuring that every American has a fair shot at next-generation connectivity.” The FCC is an independent agency led by five president-appointed commissioners (including a chair who sets the overall agenda) serving five-year terms, with typically three aligning with the president's party.
A significant portion of the FCC's budget ($390.2 million requested in 2023) is self-funded, coming from regulatory fees and spectrum auction revenue. The agency's specialized bureaus focus on 5G transitions, net neutrality and FCC-licensed entity mergers. It also manages the Universal Service Fund, which supports rural broadband, low-income programs, and connectivity for schools and health care facilities.
The FCC plays a pivotal role in regulating Big Tech companies like Meta, Google and X, which significantly influence public discourse and market dynamics. These companies are often criticized for using their market dominance, which many feel is enabled by favorable regulations, to suppress diverse political viewpoints and for not paying a fair share towards programs that benefit them.
Project 2025 has several proposed initiatives aiming to address these issues:
Reform of how Section 230 is interpreted: Section 230 of the Communications Decency Act provides websites, including social media platforms, with immunity from liability for content posted by users. Project 2025 proposes the FCC clarify this immunity, suggesting that it does not apply universally to all content decisions, and thus guidelines to delineate when these protections are appropriate should be considered.
Implement new transparency rules: The report recommends the FCC impose transparency requirements on Big Tech, similar to those for broadband providers, and require mandatory disclosures about content moderation policies and practices. In addition, it calls on the agency to create transparent appeals processes for content removal decisions.
Legislative changes: Project 2025 wants the FCC to work with Congress to ensure "Internet companies no longer have carte blanche to censor protected speech while maintaining their Section 230 protections." Solutions could include introducing anti-discrimination provisions to prevent bias or censorship of political viewpoints.
The report calls for passage of several bills related to Section 230:
- The Break Up Big Tech Act of 2020 aims to limit the immunity granted to interactive computer services under Section 230.
- The Protect Speech Act aims to ensure that immunity under Section 230 incentivizes online platforms to responsibly address illegal content.
- The CASE-IT Act seeks to hold Big Tech companies accountable for their content moderation practices.
- The Protecting Americans from Dangerous Algorithms Act would prevent interactive computer services from claiming immunity for claims involving their use of algorithms to rank, promote, recommend, or alter the delivery or display of information.
- The PACT Act would require transparency, accountability, and protections for consumers online.
Two states have already passed related legislation:
- Texas prohibits companies from removing content based on an author’s viewpoint.
- Florida bars social media companies from removing politicians from their site.
Further empower consumers: Project 2025 wants the FCC and Congress to prioritize "user control" as an express policy goal. Section 230 does encourage platforms to provide tools for users to moderate content themselves, including choosing content filters and fact-checkers. The report also advocates for stricter age verification measures.
Require fair contribution to the Universal Service Fund: Finally, Project 2025 wants the FCC to establish regulations requiring Big Tech companies to pay their "fair share" into the USF. Currently, the USF is funded by charges on traditional telecommunications services, an outdated model as internet usage shifts to broadband. Big Tech is not currently required to contribute to this fund.
Is Project 2025 justified in seeking these changes?
On the surface, Project 2025's proposal to hold Big Tech accountable and "protect free speech" appears justified. There's a broad consensus that Big Tech should not have total immunity and should bear some responsibility for platforms' impact on users and content promotion. However, the implications of these changes could potentially cause more harm than good.
For example, requiring platforms to host all content under anti-discrimination laws could lead to the spread of harmful speech. Broad applications of these rules might limit effective moderation and allow harmful content to spread unchecked, posing risks to public health and increasing abuse and discrimination.
Additionally, the debate over whether internet platforms should be held responsible for the content they host continues across the political spectrum. The courts and Congress must weigh in on how to balance the risks of over-moderation. Without careful analysis, unnecessary removal of content driven by fear of litigation could have the unintended consequence of allowing illegal or harmful content to thrive.
More articles about Project 2025
- A cross-partisan approach
- An Introduction
- Rumors of Project 2025’s Demise are Greatly Exaggerated
- Department of Education
- Managing the bureaucracy
- Department of Defense
- Department of Energy
- The Environmental Protection Agency
- Education Savings Accounts
- Department of Veterans Affairs
- The Department of Homeland Security
- U.S. Agency for International Development
- Affirmative action
- A federal Parents' Bill of Rights
- Department of Labor
- Intelligence community
- Department of State
- Department of the Interior
- Federal Communications Commission
- A perspective from Europe
- Department of Health and Human Services
- Voting Rights Act
- Another look at the Federal Communications Commission
I researched the dark side of social media − and heard the same themes in ‘The Tortured Poets Department’
Aug 14, 2024
Scheinbaum is an associate professor of marketing at Clemson University.
As an expert in consumer behavior, I recently edited a book about how social media affects mental health.
I’m also a big fan of Taylor Swift.
So when I listened to Swift’s latest album, “The Tortured Poets Department,” I couldn’t help but notice parallels to the research that I’ve been studying for the past decade.
It might seem like an outlandish comparison. What can the bestselling album of 2024 have to do with research into the dark side of social media?
But bear with me: Taylor Swift lives in the same social media-saturated universe as the rest of us. That may be why the melancholic themes of her album resonate with so many people.
With young people out of school for the summer and spending free time on social media, now is a good time to put on some tunes and think about mental health and what scholars in the transformative consumer research field call "consumer well-being."
Here are three Taylor-made takeaways that shed light on some of the themes in my latest edited book, “The Darker Side of Social Media: Consumer Psychology and Mental Health.”
Lesson 1: Modern life through the social media lens can get you down
If you’ve been feeling out of sorts lately, you’re hardly alone: Anxiety and depression can be exacerbated by overuse of social media, research summarized in Chapter 1 shows. And social media use is on the rise.
The average American teenager spends nearly five hours every day scrolling TikTok, Instagram and the like, polling shows, while adults clock more than two hours a day on social media. Such heavy use can shade into compulsive social media use and overall overuse.
Digital life can resemble addiction and sometimes manifest as a distinct form of anxiety called "disconnection anxiety," researchers Line Lervik-Olsen, Bob Fennis and Tor Wallin Andreassen note in their book chapter on compulsive social media use. This can breed feelings of depression – a mood that recurs throughout "The Tortured Poets Department."
Oftentimes, depression goes hand in hand with feelings of loneliness. Social media has, in some ways, made people feel even lonelier – nearly 4 in 5 Americans say that social media has made social divisions worse, according to Pew Research. In our book chapter, my graduate student Betül Dayan and I consider the prevalence of loneliness in the digital world.
The pandemic showed the world that social media relationships can’t replace physical company. Even celebrities with hundreds of millions of followers simply want someone to be with. In the song “The Prophecy,” Swift sings of loneliness and wanting someone who simply enjoys her presence:
Don’t want money/ Just someone who wants my company ( “The Prophecy”)
Lesson 2: Comparisons will make you miserable
Social media is a breeding ground for comparisons. And since people tend to portray idealized versions of themselves on social media – rather than their authentic selves – these comparisons are often false or skewed. Research has shown that people on social media tend to make “upward comparisons,” judging themselves relative to people they find inspiring. Social media can breed false comparisons, as what someone is aspiring to may not be authentic.
This can lead to what researchers call a “negative self-discrepancy” – a sense of disappointment with one’s failure to meet a personal ideal. As researchers Ashesh Mukherjee and Arani Roy note in their book chapter, social media makes people more dissatisfied with their own sense of control, intelligence and power. This, in turn, can worsen stress and anxiety.
The theme of comparisons comes through loud and clear in the song "The Tortured Poets Department," in which Swift castigates a partner with literary pretensions – and herself for dating him. Swift may be the richest, most famous and most successful pop star on the planet, but comparing yourself with even more heroic figures is sure to make anyone feel worse:
You’re not Dylan Thomas, I’m not Patti Smith. This ain’t the Chelsea Hotel, we’re modern idiots. (“The Tortured Poets Department”)
Lesson 3: Bullying isn’t a minor problem
In today’s social media-focused world, bullying has transitioned to online platforms. And arguably, platforms breed bullying: People are more likely to engage in cruel behavior online than they would face to face.
Policymakers increasingly recognize bullying as an important political concern. In their book chapter, researchers Madison Brown, Kate Pounders and Gary Wilcox have examined laws intended to fight bullying.
One such effort, the Kids Online Safety Act, which among other things would require online platforms to take steps to address cyberbullying, recently passed the U.S. Senate.
Lawmakers aren’t the only ones taking bullying seriously. In her latest album, Swift refers to bullies in her own life as vipers who “disgrace her good name” and who say insults that stick with her for a long time. Themes of reputation and bullying have run throughout Swift’s entire body of work – hardly surprising for someone who has lived such a public life, both online and off.
I’ll tell you something ’bout my good name. It’s mine alone to disgrace. I don’t cater to all these vipers dressed in empath’s clothing. (“But Daddy I Love Him”)
It is not known whether social media use or overuse alone causes these outcomes, but our research does demonstrate that in many ways there's a darker side to social media when it comes to consumer well-being – even for celebrities. So if you're going to see the Eras Tour in Europe this summer, you might want to leave your phone back at the hotel.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The end of privacy?
Aug 09, 2024
Frazier is an assistant professor at the Crump College of Law at St. Thomas University and a Tarbell fellow.
Americans have become accustomed to leaving bread crumbs of personal information scattered across the internet. Our scrolls are tracked. Our website histories are logged. Our searches are analyzed. For a long time, the practice of ignoring this data collection seemed sensible. Who would bother to pick up and reconfigure those crumbs?
In the off chance someone did manage to hoover up some important information about you, the costs seemed manageable. Haven’t we all been notified that our password is insecure or our email has been leaked? The sky didn’t fall for most of us, so we persisted with admittedly lazy but defensible internet behavior.
Artificial intelligence has made what was once defensible a threat to our personal autonomy. Our indifference to data collection now exposes us to long-lasting and significant harms. We now live in the "inference economy," according to professor Alicia Solow-Niederman. Information that used to be swept up in the tumult of the Internet can now be scraped, aggregated and exploited to decipher sensitive information about you. As Solow-Niederman explains, "seemingly innocuous or irrelevant data can generate machine learning insights, making it impossible for an individual to anticipate what kinds of data warrant protection."
Our legal system does not seem ready to protect us. Privacy laws enacted in the early years of the internet reflect a bygone era. They protect bits and pieces of sensitive information but they do not create the sort of broad shield that’s required in an inference economy.
The shortcomings of our current system don’t end there. AI allows a broader set of bad actors to engage in fraudulent and deceptive practices. The fault in this case isn’t the substance of the law — such practices have long been illegal — but rather enforcement of those laws. As more actors learn how to exploit AI, it will become harder and harder for law enforcement to keep pace.
Privacy has been a regulatory weak point for the United States. A federal data privacy law has been discussed for decades and kicked down the road for just as long. This trend must come to an end.
The speed, scale and severity of privacy risks posed by AI require a significant update to our privacy laws and enforcement agencies. Rather than attempt to outline each of those updates, I’ll focus on two key actions.
First, enact a data minimization requirement. In other words, mandate that companies collect and retain only essential information to whatever service they provide to a consumer. Relatedly, companies should delete that information once the service has been rendered. This straightforward provision would reduce the total number of bread crumbs and, consequently, reduce the odds of a bad actor gathering personal and important information about you.
Second, invest in the Office of Technology at the Federal Trade Commission. The FTC plays a key role in identifying emerging unfair and deceptive practices. Whether the agency can perform that important role turns on its expertise and resources. Chair Lina Khan recognized as much when she initially created the office. Congress is now debating how much funding to provide to this essential part of privacy regulation and enforcement. Lawmakers should follow the guidance of a bipartisan group of FTC commissioners and ensure that office can recruit and retain leading experts as well as obtain new technological resources.
It took decades after the introduction of the automobile for the American public to support seat belt requirements. Only after folks like Ralph Nader thoroughly documented that we were unsafe at any speed did popular support squarely come to the side of additional protections. Let’s not wait for decades of privacy catastrophes to realize that we’re currently unsafe upon any scroll. Now’s the time for robust and sustained action to further consumer privacy.