The Quiet Rise of Employee Surveillance

Opinion

Amazon’s court loss over Just Walk Out highlights a deeper issue: employers are increasingly collecting workers’ biometric data without meaningful consent, exposing a growing conflict between workplace surveillance, privacy rights, and outdated U.S. laws.

Amazon’s loss in court over its attempt to shield the source code behind its Just Walk Out technology is a small win for shoppers, but the bigger story is how employers are quietly collecting biometric data from their own workers.

From factory floors to Fortune 500 offices, employers are demanding fingerprints, palm prints, retinal scans, facial scans, and even voiceprints. These biometric technologies are eroding the boundary between workplace oversight and employee autonomy, often without consent or meaningful regulation.


Everyone weighs data privacy trade-offs. Delete your social media accounts and lose touch with friends and family, or keep them and surrender your data? Submit to a retina scan at the airport, or be the uptight person who slows down the security line?

But the questions are becoming far more existential, particularly as they invade the workplace. Workers now face a different question entirely: forfeit data or forfeit income?

Because no federal employment law gives workers the right to refuse biometric collection and use, employers can require employees to submit to scanning systems and other biometric applications.

This legal gap persists because, of the 20 states with privacy laws regulating private data collection, some still exclude data collected in employment contexts. As the workers’ rights firm Outten & Golden notes, biometric data protection therefore depends largely on where employees live and work.

This patchwork of legal protections is worsened by minimal regulation of corporate data collectors. Today, companies’ chief obligation is simply to give notice about their collection and use of personal information, a regime known as “notice and choice.” In this paradigm, people are presented with reams of privacy terms whose density and legal jargon leave them bewildered.

And notice alone does not amount to consent. As Samuel Levine, former director of the Federal Trade Commission’s Bureau of Consumer Protection, stated back in 2019: “Even if we read the policies and understood them, we can hardly exercise choice given how much we rely on digital services, and the lack of competition in many markets.”

A 2023 Pew Research Center study bears Levine out: 67% of Americans say they understand little to nothing about what companies do with their data, and 73% believe they have little to no control over it. In other words, most Americans are making uninformed decisions about the data they give up just to earn a living.

Now combine that with no option to consent at all, and workers are being strong-armed into funneling their biometric data into a black box. Faced with the risk of being fired or staying unemployed, the decision is a no-brainer. Yet the ease of that decision is no measure of how much people actually value their personal data.

In a 2025 Ipsos poll, biometric data ranked fourth (32%) among the types of data respondents considered most important to keep private. Only financial, health, and credit card usage data ranked higher.

Given this, employers should give workers a genuine way to act on those privacy values. Instead, the only two exceptions that let a worker avoid surrendering biometrics are religion and disability. That these are the only “outs” suggests legislators are either unaware of, or indifferent to, the privacy preferences of everyday people.

Employers’ stated reasons for mandating biometrics include building security, tracking employee time and attendance, activating machinery, and authenticating users. Accordingly, privacy statutes carry carve-out defenses tied to security, fraud, and crime prevention.

Ironically, corporations’ interest in security tramples employees’ ability to secure their own data. As the Wyoming Law Review noted in 2024, current case law ignores how a breach of employee biometric data exposes people to nearly limitless invasions of privacy in their personal lives.

This should not be the case. States and the federal government should enact laws barring employers from making biometric data a condition of employment. And given today’s dubious consent practices, a new norm should take hold: a genuine choice to opt in or opt out.


Faith Wilson is a Public Voices Fellow on Technology in the Public Interest with the OpEd Project.

