Project 2025: Another look at the Federal Communications Commission

FCC seal on a smartphone. (Pavlo Gonchar/SOPA Images/LightRocket via Getty Images)

Biffle is a podcast host and contributor at BillTrack50.

This is part of a series offering a nonpartisan counter to Project 2025, a conservative guideline for reforming government and policymaking during the first 180 days of a second Trump administration. The Fulcrum's cross-partisan analysis of Project 2025 relies on unbiased critical thinking, reexamines outdated assumptions, and uses reason, scientific evidence, and data to analyze and critique Project 2025.

Project 2025, the Heritage Foundation’s policy and personnel proposals for a second Trump administration, has four main goals when it comes to the Federal Communications Commission: reining in Big Tech, promoting national security, unleashing economic prosperity, and ensuring FCC accountability and good governance. Today, we’ll focus on the first of those agenda items.


But first, what is the FCC?

The Federal Communications Commission regulates U.S. communications, promoting free speech, economic growth and equitable access to advanced connectivity. Its goals include supporting diverse viewpoints, job creation, secure networks, updated infrastructure, prudent use of taxpayer money and “ensuring that every American has a fair shot at next-generation connectivity.” The FCC is an independent agency led by five commissioners, appointed by the president and confirmed by the Senate, who serve five-year terms. One commissioner serves as chair and sets the overall agenda, and because no more than three commissioners may belong to the same political party, a majority typically aligns with the president's party.

A significant portion of the FCC's budget ($390.2 million requested in 2023) is self-funded through regulatory fees and spectrum auction revenue. The agency's specialized bureaus focus on issues such as the transition to 5G, net neutrality and mergers involving FCC-licensed entities. It also manages the Universal Service Fund, which supports rural broadband, low-income programs, and connectivity for schools and health care facilities.

The FCC plays a pivotal role in regulating Big Tech companies like Meta, Google and X, which significantly influence public discourse and market dynamics. These companies are often criticized for using their market dominance, which many feel is enabled by favorable regulations, to suppress diverse political viewpoints, and for not paying a fair share toward programs that benefit them.

Project 2025 has several proposed initiatives aiming to address these issues:

Reform of how Section 230 is interpreted: Section 230 of the Communications Decency Act provides websites, including social media platforms, with immunity from liability for content posted by users. Project 2025 proposes the FCC clarify this immunity, suggesting that it does not apply universally to all content decisions, and thus guidelines to delineate when these protections are appropriate should be considered.

Implement new transparency rules: The report recommends the FCC impose transparency requirements on Big Tech, similar to those for broadband providers, and require mandatory disclosures about content moderation policies and practices. In addition, it calls on the agency to create transparent appeals processes for content removal decisions.

Legislative changes: Project 2025 wants the FCC to work with Congress to ensure "Internet companies no longer have carte blanche to censor protected speech while maintaining their Section 230 protections." Solutions could include introducing anti-discrimination provisions to prevent bias against or censorship of political viewpoints.

The report calls for passage of several bills related to Section 230 that would strengthen protections for consumers online.

Two states have already passed related legislation:

  • Texas prohibits companies from removing content based on an author’s viewpoint.
  • Florida bars social media companies from removing politicians from their platforms.

Further empower consumers: Project 2025 wants the FCC and Congress to prioritize "user control" as an express policy goal. Section 230 already encourages platforms to provide tools that let users moderate content themselves, such as choosing their own content filters and fact-checkers. Project 2025 also advocates for stricter age verification measures.

Require fair contribution to the Universal Service Fund: Finally, Project 2025 wants the FCC to establish regulations requiring Big Tech companies to pay their “fair share” into the USF. Currently, the USF is funded by charges on traditional telecommunications services, an increasingly outdated model as usage shifts to broadband internet. Big Tech companies are not currently required to contribute to the fund.

Is Project 2025 justified in seeking these changes?

On the surface, Project 2025's proposal to hold Big Tech accountable and "protect free speech" appears justified. There's a broad consensus that Big Tech should not have total immunity and should bear some responsibility for platforms' impact on users and content promotion. However, the implications of these changes could potentially cause more harm than good.

For example, requiring platforms to host all content under anti-discrimination laws could lead to the spread of harmful speech. Broad applications of these rules might limit effective moderation and allow harmful content to spread unchecked, posing risks to public health and increasing abuse and discrimination.

Additionally, the debate over whether internet platforms should be held responsible for the content they host continues across the political spectrum. The courts and Congress must weigh the risks of over-moderation against those of under-moderation. Without careful analysis, new rules could push platforms to remove lawful content out of fear of litigation, or conversely allow illegal and harmful content to thrive unchecked.
