Opinion

The NFL Playoffs Are Prime Time for Digital Piracy

Patrick Mahomes #15 of the Kansas City Chiefs celebrates during the first half of the AFC Divisional playoff game against the Houston Texans at GEHA Field at Arrowhead Stadium on January 18, 2025 in Kansas City, Missouri. (Photo by Aaron M. Sprecher/Getty Images)

The NFL playoffs are an exciting time for football fans to watch the chase for the Super Bowl. Once a uniquely American obsession, the sport has increasingly captured the attention of live sports fans worldwide.

It’s also prime time for live sports piracy, and American lawmakers must enact measures to protect these live broadcasts.


Professional and amateur sports are among the most popular live-streamed content—watched by 61% of viewers who subscribe to streaming services. Yet a study of 6,000 sports fans across 10 countries also found that 51% of the group pirated live sports monthly, despite 89% having at least one streaming subscription.

A 2023 Harvard Business Review study found that 35% of NFL fans surveyed watch football games on pirated streaming services.

Digital piracy costs streaming companies approximately $30 billion in annual revenue, a figure expected to rise to $113 billion by 2027. Live sports streaming piracy alone generates an estimated $28 billion in annual losses.

Many consumers may not sympathize with the streaming companies themselves, but the impact goes much deeper than their boardrooms and stockholders. Athletic staff and trainers, creatives who produce advertisements, support staff, and thousands of other workers lose out when their work is pirated. Piracy costs jobs and threatens the future of creative content, especially when creators conclude that their work won't be protected by copyright.

Many countries have taken regulatory action to protect content creators and streaming services. More than fifty countries, including Canada, Italy, and the U.K., allow their courts to order Internet Service Providers (ISPs) to block websites hosting pirated content, effectively shutting them down.

Last year, India further strengthened its anti-piracy protections by criminalizing film piracy and adding significant financial penalties. The European Union recently proposed strengthening its Digital Services Act to fight digital piracy of sporting events and other live entertainment.

The U.S. must adopt a more aggressive approach to identifying, stopping, and prosecuting digital piracy, especially since so much of the pirated content is produced in the U.S. by American content creators.

The current Digital Millennium Copyright Act empowers streaming services and copyright owners to send notices to websites identifying pirated content and demanding its removal. Still, the law is powerless against foreign-owned domains. Since so much digital piracy is driven by foreign actors, more must be done to block pirated content overseas.

In 2011, Congress considered building on the Digital Millennium Copyright Act with the Stop Online Piracy Act (SOPA), which would have allowed U.S. courts to block websites, as courts in more than 50 other countries can. However, critics were concerned that SOPA infringed on First Amendment rights and would create legal trouble for websites like Wikipedia, Google, and YouTube. Critics also worried that such legislation would impinge on the established fair use doctrine, which allows copyrighted material to be used in limited circumstances, further chilling free speech.

The blatant theft of publishers', athletes', and creators' work shouldn't be a free speech issue, and aiming legal measures squarely at websites and streaming services dedicated to pirating content should strike an appropriate free speech balance. Congress must reconsider legislation that allows the blocking of pirated content while establishing a process that respects the rights of legitimate domains, including dominant providers like YouTube and Google.

The Motion Picture Association (MPA) recently outlined a process that a federal judge would supervise: copyright holders could request a court order to block specific websites, and ISPs and the public would have the opportunity to respond if they disagree. The copyright holder would bear the burden of demonstrating that the site primarily engages in piracy, and the process is expected to take months rather than years. If a block order is issued, ISPs would decide how best to block consumer access to the site.

This approach would allow streaming companies, publishers, and content creators to protect their work from piracy, preserving jobs and sustaining income while ensuring consumers continue to receive, and pay for, the content they've come to expect and enjoy.

Max Eisendrath is the CEO of Redflag AI.

