Project 2025: Another look at the Federal Communications Commission

FCC seal on a smartphone. (Pavlo Gonchar/SOPA Images/LightRocket via Getty Images)

Biffle is a podcast host and contributor at BillTrack50.

This is part of a series offering a nonpartisan counter to Project 2025, a conservative blueprint for reforming government and policymaking during the first 180 days of a second Trump administration. The Fulcrum's cross-partisan analysis of Project 2025 relies on unbiased critical thinking, reexamines outdated assumptions, and uses reason, scientific evidence, and data in its analysis and critique.

Project 2025, the Heritage Foundation’s policy and personnel proposals for a second Trump administration, has four main goals when it comes to the Federal Communications Commission: reining in Big Tech, promoting national security, unleashing economic prosperity, and ensuring FCC accountability and good governance. Today, we’ll focus on the first of those agenda items.


But first, what is the FCC?

The Federal Communications Commission regulates U.S. communications, promoting free speech, economic growth and equitable access to advanced connectivity. Its goals include supporting diverse viewpoints, job creation, secure networks, updated infrastructure, prudent use of taxpayer money and “ensuring that every American has a fair shot at next-generation connectivity.” The FCC is an independent agency led by five commissioners appointed by the president (including a chair who sets the overall agenda) to five-year terms; no more than three commissioners may belong to the same political party, so typically three align with the president's party.

A significant portion of the FCC's budget ($390.2 million requested in 2023) is self-funded, coming from regulatory fees and spectrum auction revenue. The agency's specialized bureaus focus on 5G transitions, net neutrality and FCC-licensed entity mergers. It also manages the Universal Service Fund, which supports rural broadband, low-income programs, and connectivity for schools and health care facilities.

The FCC plays a pivotal role in regulating Big Tech companies like Meta, Google and X, which significantly influence public discourse and market dynamics. These companies are often criticized for using their market dominance, which many feel is enabled by favorable regulations, to suppress diverse political viewpoints and for not paying a fair share towards programs that benefit them.

Project 2025 has several proposed initiatives aiming to address these issues:

Reform of how Section 230 is interpreted: Section 230 of the Communications Decency Act provides websites, including social media platforms, with immunity from liability for content posted by users. Project 2025 proposes the FCC clarify this immunity, suggesting that it does not apply universally to all content decisions, and thus guidelines to delineate when these protections are appropriate should be considered.

Implement new transparency rules: The report recommends the FCC impose transparency requirements on Big Tech, similar to those for broadband providers, and require mandatory disclosures about content moderation policies and practices. In addition, it calls on the agency to create transparent appeals processes for content removal decisions.

Legislative changes: Project 2025 wants the FCC to work with Congress to ensure "Internet companies no longer have carte blanche to censor protected speech while maintaining their Section 230 protections." Solutions could include introducing anti-discrimination provisions to prevent bias against, or censorship of, political viewpoints.

The report also calls for passage of several bills related to Section 230 and to protections for consumers online.

Two states have already passed related legislation:

  • Texas prohibits companies from removing content based on an author’s viewpoint.
  • Florida bars social media companies from deplatforming political candidates.

Further empower consumers: Project 2025 wants the FCC and Congress to prioritize "user control" as an express policy goal. Section 230 does encourage platforms to provide tools for users to moderate content themselves, including choosing content filters and fact-checkers. Project 2025 also advocates for stricter age verification measures.

Require fair contribution to the Universal Service Fund: Finally, Project 2025 wants the FCC to establish regulations requiring Big Tech companies to pay their “fair share” into the USF. Currently, the USF is funded by charges on traditional telecommunications services, an outdated model as internet usage shifts to broadband. Big Tech is not currently required to contribute to this fund.

Is Project 2025 justified in seeking these changes?

On the surface, Project 2025's proposal to hold Big Tech accountable and "protect free speech" appears justified. There's a broad consensus that Big Tech should not have total immunity and should bear some responsibility for platforms' impact on users and content promotion. However, the implications of these changes could potentially cause more harm than good.

For example, requiring platforms to host all content under anti-discrimination laws could lead to the spread of harmful speech. Broad applications of these rules might limit effective moderation and allow harmful content to spread unchecked, posing risks to public health and increasing abuse and discrimination.

Additionally, the debate over whether internet platforms should be held responsible for the content they host continues across the political spectrum. Courts and Congress must weigh the risks on both sides: fear of litigation could push platforms to remove legitimate content unnecessarily, while weakened moderation could allow illegal or harmful content to thrive.
