Project 2025: Another look at the Federal Communications Commission

FCC seal on a smartphone (Pavlo Gonchar/SOPA Images/LightRocket via Getty Images)

Biffle is a podcast host and contributor at BillTrack50.

This is part of a series offering a nonpartisan counter to Project 2025, a conservative guideline to reforming government and policymaking during the first 180 days of a second Trump administration. The Fulcrum's cross-partisan analysis of Project 2025 relies on unbiased critical thinking, reexamines outdated assumptions, and uses reason, scientific evidence, and data in analyzing and critiquing Project 2025.

Project 2025, the Heritage Foundation’s policy and personnel proposals for a second Trump administration, has four main goals when it comes to the Federal Communications Commission: reining in Big Tech, promoting national security, unleashing economic prosperity, and ensuring FCC accountability and good governance. Today, we’ll focus on the first of those agenda items.


But first, what is the FCC?

The Federal Communications Commission regulates U.S. communications, promoting free speech, economic growth and equitable access to advanced connectivity. Its goals include supporting diverse viewpoints, job creation, secure networks, updated infrastructure, prudent use of taxpayer money and “ensuring that every American has a fair shot at next-generation connectivity.” The FCC is an independent agency led by five commissioners appointed by the president (including a chair who sets the overall agenda) serving five-year terms; no more than three commissioners may belong to the same political party, so the majority typically aligns with the president's party.

A significant portion of the FCC's budget ($390.2 million requested in 2023) is self-funded, coming from regulatory fees and spectrum auction revenue. The agency's specialized bureaus focus on 5G transitions, net neutrality and FCC-licensed entity mergers. It also manages the Universal Service Fund, which supports rural broadband, low-income programs, and connectivity for schools and health care facilities.

The FCC plays a pivotal role in regulating Big Tech companies like Meta, Google and X, which significantly influence public discourse and market dynamics. These companies are often criticized for using their market dominance, which many feel is enabled by favorable regulations, to suppress diverse political viewpoints and for not paying a fair share towards programs that benefit them.

Project 2025 has several proposed initiatives aiming to address these issues:

Reform of how Section 230 is interpreted: Section 230 of the Communications Decency Act provides websites, including social media platforms, with immunity from liability for content posted by users. Project 2025 proposes the FCC clarify this immunity, suggesting that it does not apply universally to all content decisions, and thus guidelines to delineate when these protections are appropriate should be considered.

Implement new transparency rules: The report recommends the FCC impose transparency requirements on Big Tech, similar to those for broadband providers, and require mandatory disclosures about content moderation policies and practices. In addition, it calls on the agency to create transparent appeals processes for content removal decisions.

Legislative changes: Project 2025 wants the FCC to work with Congress to ensure "Internet companies no longer have carte blanche to censor protected speech while maintaining their Section 230 protections." Solutions could include introducing anti-discrimination provisions to prevent bias against, or censorship of, political viewpoints.

The report also calls for passage of several bills related to Section 230 that would strengthen protections for consumers online.

Two states have already passed related legislation:

  • Texas prohibits companies from removing content based on an author’s viewpoint.
  • Florida bars social media companies from deplatforming political candidates.

Further empower consumers: Project 2025 wants the FCC and Congress to prioritize "user control" as an express policy goal. Section 230 does encourage platforms to provide tools for users to moderate content themselves, including choosing content filters and fact-checkers. The report also advocates for stricter age verification measures.

Require fair contribution to the Universal Service Fund: Finally, Project 2025 wants the FCC to establish regulations requiring Big Tech companies to pay their “fair share” into the USF. Currently, the USF is funded by charges on traditional telecommunications services, an outdated model as internet usage shifts to broadband. Big Tech is not currently required to contribute to this fund.

Is Project 2025 justified in seeking these changes?

On the surface, Project 2025's proposal to hold Big Tech accountable and "protect free speech" appears justified. There's a broad consensus that Big Tech should not have total immunity and should bear some responsibility for platforms' impact on users and content promotion. However, the implications of these changes could potentially cause more harm than good.

For example, requiring platforms to host all content under anti-discrimination laws could lead to the spread of harmful speech. Broad applications of these rules might limit effective moderation and allow harmful content to spread unchecked, posing risks to public health and increasing abuse and discrimination.

Additionally, the debate over whether internet platforms should be held responsible for the content they host continues across the political spectrum. The courts and Congress must weigh in on how to balance the risks of over- and under-moderation. Without careful analysis, fear of litigation could push platforms either to remove lawful content unnecessarily or to let illegal and harmful content thrive.
