AI Wearables and the Rising Risk of Recording Police

Opinion


Apple’s upcoming AI-powered wearables highlight growing privacy risks as the right to record police faces increasing threats. The death of Alex Pretti raises urgent questions about surveillance, civil liberties, and accountability in the digital age.


Last month, Apple announced the development of three wearable smart devices, all equipped with built-in cameras. The company has its sights set on 2027 for the release of its new smart glasses, AI pendant, and camera-equipped AirPods, all of which will offer AI functionality to users. As the market for wearables with smart-recording capabilities expands, so does the risk that comes with how people choose to use the technology.

In Minneapolis in January, Alex Pretti was killed during an encounter with federal agents while filming them with his phone. He was not a suspect in any crime. He was not interfering; he was doing what millions of Americans now instinctively do when they see state power in motion: witnessing.


Pretti’s death is still under investigation, and the legal facts will be contested. But the moral and political question it raises is already clear. In an era when nearly every citizen carries a camera, the act of observing government force has become both easier and more dangerous. Technology has democratized documentation, but it has also transformed the witness into a perceived threat. The result is a troubling pattern: the very tools that were supposed to make power more accountable are increasingly met with intimidation, targeting, and, in the most extreme cases, lethal force.

This is not an isolated dynamic. The bystanders who filmed George Floyd were initially threatened with arrest. Journalists covering the Atlanta “Cop City” protests have been detained and charged under expansive domestic terrorism statutes. In Portland during the 2020 protests, federal officers repeatedly seized and questioned individuals whose primary “offense” was recording. Abroad, reporters in Gaza and the West Bank have been shot while clearly marked as press. The specifics differ, but the logic is consistent. When cameras proliferate, the state begins to treat the act of seeing as subversive.

Pretti’s case brings that logic home. Federal agents operating in an American city confronted a civilian whose only apparent act was observation. Whether through panic, misjudgment, or institutional culture, the presence of a recording device was treated not as a protected exercise of constitutional liberty, but as a provocation. This is the quiet inversion taking place. The First Amendment once stood as a shield for those who spoke and those who watched. Today, in moments of tension, it is increasingly treated as an obstacle.

Technology has changed the architecture of accountability. In the twentieth century, oversight flowed primarily through institutions: courts, legislatures, and professional media. In the twenty-first century, it flows through networks. A single video can expose misconduct, contradict official statements, and mobilize public scrutiny within hours. For communities that have long experienced disproportionate policing and surveillance, the smartphone has become a tool of self-defense in the civic sense. It is how power is checked when formal channels fail or move too slowly.

But this shift has also created a perverse incentive. When documentation becomes ubiquitous, those who wield force know that every action may be dissected and judged. In that environment, the witness is no longer neutral but becomes a liability. The danger is not only the tragic loss of life in cases like Pretti’s, but the chilling effect that follows. If observing police activity can get you detained, pepper-sprayed, or worse, rational citizens will think twice before lifting their phones. The public square grows quieter, and misconduct becomes easier to hide. This is how a democracy drifts, not quite through the abolition of rights on paper, but through the normalization of fear around exercising them.

If the act of witnessing is now central to how constitutional accountability functions, then the law must evolve to protect it explicitly.

First, Congress should enact a clear federal “Right to Witness” statute. Courts have recognized a First Amendment right to record police, but doctrine alone is insufficient when agents on the ground operate under stress and ambiguity. A statute should make plain that recording or observing law enforcement, including federal agents, is presumptively lawful, and that detention, seizure of devices, or use of force solely on that basis is prohibited. Retaliation against witnesses should carry enhanced civil and criminal penalties, and evidence obtained after unlawful interference with recording should be subject to automatic suppression, much as statements taken in violation of Miranda are.

Second, qualified immunity should not shield officers who use force against individuals engaged in clearly lawful observation. The doctrine was designed to protect reasonable mistakes in fast-moving situations, not to insulate retaliation against constitutional oversight. When the conduct at issue is the suppression of a core First Amendment activity, the legal system should err on the side of accountability.

Third, states and cities should assert their role as constitutional backstops when federal operations occur within their borders. “Sanctuary for witnesses” laws could limit cooperation with federal agencies in cases where force is used against civilians engaged in protected recording, and require automatic review by state attorneys general whenever such incidents occur. Federalism should not mean abdication when fundamental liberties are at stake.

Fourth, the legal system should treat bystander footage with the same seriousness as official body-camera recordings. Preservation requirements, chain-of-custody rules, and penalties for destruction or suppression should apply equally. The public’s camera is now part of the evidentiary infrastructure of justice. It deserves institutional protection.

Finally, there should be a clear civil cause of action for obstruction of lawful civic observation. When individuals are targeted, injured, or killed because they were documenting state conduct, they and their families should not have to rely solely on discretionary prosecutions or protracted constitutional litigation. The law should recognize interference with witnessing itself as a distinct and grave harm.

Some state laws already recognize parts of what a federal “Right to Witness” statute would codify. For example, New York’s Civil Rights Law § 79-p explicitly protects the right to document police activity and allows those whose rights are violated to seek damages, a civil remedy that goes beyond mere constitutional claim-making. Several states, such as Colorado, Hawai‘i, and Illinois, also independently protect the right to record police in public in their statutes or constitutions. Yet even in these jurisdictions, officers sometimes detain witnesses or seize their recording devices, and courts are left to sort out the violations later. This patchwork shows that while recording rights are increasingly recognized, they lack the clear, uniform statutory safeguards against interference, force, and impunity that a federal law would provide.

Beyond statutes and doctrine, a cultural shift must occur. Filming police is often framed as antagonistic, as if the camera were an insult rather than a safeguard. In reality, it is an expression of the same civic impulse that underlies jury service, public trials, and a free press. It is how ordinary people participate in the maintenance of lawful government.

Alex Pretti did not set out to be a symbol. He was a citizen with a phone, recording in a community that promises freedom of speech and freedom of the press. That those freedoms can now place someone in mortal danger should trouble anyone who cares about constitutional democracy. The question his death forces is not only what happened in one encounter, but what kind of political order we are becoming when seeing is treated as a threat.


David M. Hatami is an Offensive Security Project Manager and a Public Voices Fellow on Technology in the Public Interest with The OpEd Project. He previously managed cybersecurity and penetration testing operations at Amazon Web Services and was a fellow with Youth for Privacy.

Editor's Note: This story was updated on 3/30 to accurately reflect the timing of past events.

