Project 2025: Another look at the Federal Communications Commission

FCC seal on a smart phone
Pavlo Gonchar/SOPA Images/LightRocket via Getty Images

Biffle is a podcast host and contributor at BillTrack50.

This is part of a series offering a nonpartisan counter to Project 2025, a conservative blueprint for reforming government and policymaking during the first 180 days of a second Trump administration. The Fulcrum's cross-partisan analysis of Project 2025 relies on unbiased critical thinking, reexamines outdated assumptions, and uses reason, scientific evidence, and data in analyzing and critiquing Project 2025.

Project 2025, the Heritage Foundation’s policy and personnel proposals for a second Trump administration, has four main goals when it comes to the Federal Communications Commission: reining in Big Tech, promoting national security, unleashing economic prosperity, and ensuring FCC accountability and good governance. Today, we’ll focus on the first of those agenda items.


But first, what is the FCC?

The Federal Communications Commission regulates U.S. communications, promoting free speech, economic growth and equitable access to advanced connectivity. Its goals include supporting diverse viewpoints, job creation, secure networks, updated infrastructure, prudent use of taxpayer money and “ensuring that every American has a fair shot at next-generation connectivity.” The FCC is an independent agency led by five commissioners appointed by the president (including a chair who sets the overall agenda) and serving five-year terms; no more than three commissioners may belong to the same political party, so three typically align with the president's party.

A significant portion of the FCC's budget ($390.2 million requested in 2023) is self-funded, coming from regulatory fees and spectrum auction revenue. The agency's specialized bureaus focus on 5G transitions, net neutrality and FCC-licensed entity mergers. It also manages the Universal Service Fund, which supports rural broadband, low-income programs, and connectivity for schools and health care facilities.

The FCC plays a pivotal role in regulating Big Tech companies like Meta, Google and X, which significantly influence public discourse and market dynamics. These companies are often criticized for using their market dominance, which many feel is enabled by favorable regulations, to suppress diverse political viewpoints, and for not paying a fair share toward programs that benefit them.

Project 2025 has several proposed initiatives aiming to address these issues:

Reform of how Section 230 is interpreted: Section 230 of the Communications Decency Act provides websites, including social media platforms, with immunity from liability for content posted by users. Project 2025 proposes the FCC clarify this immunity, suggesting that it does not apply universally to all content decisions, and thus guidelines to delineate when these protections are appropriate should be considered.

Implement new transparency rules: The report recommends the FCC impose transparency requirements on Big Tech, similar to those for broadband providers, and require mandatory disclosures about content moderation policies and practices. In addition, it calls on the agency to create transparent appeals processes for content removal decisions.

Legislative changes: Project 2025 wants the FCC to work with Congress to ensure "Internet companies no longer have carte blanche to censor protected speech while maintaining their Section 230 protections." Solutions could include introducing anti-discrimination provisions to prevent bias or censorship of political viewpoints.

The report calls for passage of several bills related to Section 230 that would strengthen protections for consumers online.

Two states have already passed related legislation:

  • Texas prohibits companies from removing content based on an author’s viewpoint.
  • Florida bars social media companies from removing politicians from their sites.

Further empower consumers: Project 2025 wants the FCC and Congress to prioritize "user control" as an express policy goal. Section 230 already encourages platforms to provide tools for users to moderate content themselves, including choosing content filters and fact-checkers. Project 2025 also advocates for stricter age verification measures.

Require fair contribution to the Universal Service Fund: Finally, Project 2025 wants the FCC to establish regulations requiring Big Tech companies to pay their “fair share” into the USF. Currently, the USF is funded by charges on traditional telecommunications services, an outdated model as internet usage shifts to broadband. Big Tech is not currently required to contribute to this fund.

Is Project 2025 justified in seeking these changes?

On the surface, Project 2025's proposal to hold Big Tech accountable and "protect free speech" appears justified. There's a broad consensus that Big Tech should not have total immunity and should bear some responsibility for platforms' impact on users and content promotion. However, these changes could cause more harm than good.

For example, requiring platforms to host all content under anti-discrimination laws could lead to the spread of harmful speech. Broad applications of these rules might limit effective moderation and allow harmful content to spread unchecked, posing risks to public health and increasing abuse and discrimination.

Additionally, the debate over whether internet platforms should be held responsible for the content they host continues across the political spectrum. The courts and Congress must weigh in on how to balance the risks of over- and under-moderation. Without careful analysis, fear of litigation could push platforms to remove lawful content unnecessarily, while blanket immunity could allow illegal or harmful content to thrive.
