The end of privacy?

Bill Hinton/Getty Images

Frazier is an assistant professor at the Crump College of Law at St. Thomas University and a Tarbell fellow.

Americans have become accustomed to leaving breadcrumbs of personal information scattered across the internet. Our scrolls are tracked. Our website histories are logged. Our searches are analyzed. For a long time, the practice of ignoring this data collection seemed sensible. Who would bother to pick up and reconfigure those crumbs?

On the off chance someone did manage to hoover up some important information about you, the costs seemed manageable. Haven’t we all been notified that our password is insecure or our email has been leaked? The sky didn’t fall for most of us, so we persisted with admittedly lazy but defensible internet behavior.


Artificial intelligence has made what was once defensible a threat to our personal autonomy. Our indifference to data collection now exposes us to long-lasting and significant harms. We now live in the “inference economy,” according to professor Alicia Solow-Niederman. Information that used to be swept up in the tumult of the internet can now be scraped, aggregated and exploited to decipher sensitive information about you. As Solow-Niederman explains, “seemingly innocuous or irrelevant data can generate machine learning insights, making it impossible for an individual to anticipate what kinds of data warrant protection.”
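To make that dynamic concrete, consider a minimal sketch in Python. Everything below is invented for illustration: the browsing signals, the synthetic data and the “sensitive attribute” are hypothetical stand-ins, not a real dataset or any company’s actual method. The point is only that signals that look harmless in isolation can, in aggregate, predict something sensitive far better than chance.

```python
# Illustrative only: synthetic data showing how mundane signals combine
# into a sensitive inference. Feature names are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row is a user; each column is a "breadcrumb" no one thinks to protect:
# [late-night visits per week, average scroll depth, pharmacy-page visits]
X = rng.normal(size=(1000, 3))

# A hidden sensitive attribute (say, a health condition) that correlates
# only weakly with each individual signal on its own.
y = (0.8 * X[:, 0] + 0.7 * X[:, 1] + 0.9 * X[:, 2]
     + rng.normal(scale=1.0, size=1000)) > 0

# A simple off-the-shelf model recovers the attribute from the aggregate.
model = LogisticRegression().fit(X, y)
print(f"inference accuracy: {model.score(X, y):.0%}")  # well above the 50% baseline
```

Each column alone is weak evidence; stacked together, they come close to a diagnosis. That asymmetry is what the law has not caught up with.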

Our legal system does not seem ready to protect us. Privacy laws enacted in the early years of the internet reflect a bygone era. They protect bits and pieces of sensitive information, but they do not create the sort of broad shield that’s required in an inference economy.

The shortcomings of our current system don’t end there. AI allows a broader set of bad actors to engage in fraudulent and deceptive practices. The fault in this case isn’t the substance of the law — such practices have long been illegal — but rather enforcement of those laws. As more actors learn how to exploit AI, it will become harder and harder for law enforcement to keep pace.

Privacy has been a regulatory weak point for the United States. A federal data privacy law has been discussed for decades and kicked down the road for just as long. This trend must come to an end.

The speed, scale and severity of privacy risks posed by AI require a significant update to our privacy laws and enforcement agencies. Rather than attempt to outline each of those updates, I’ll focus on two key actions.

First, enact a data minimization requirement. In other words, mandate that companies collect and retain only the information essential to whatever service they provide to a consumer. Relatedly, companies should delete that information once the service has been rendered. This straightforward provision would reduce the total number of breadcrumbs and, consequently, the odds of a bad actor gathering personal and important information about you.
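For illustration, here is a minimal sketch of what such a rule could look like in practice. The field names and the 30-day deletion window are hypothetical assumptions, not language from any statute or bill.

```python
# Hypothetical sketch of data minimization. The essential fields and the
# 30-day retention window are illustrative assumptions, not legal text.
from datetime import datetime, timedelta, timezone

# Collect only what the service strictly needs; everything else is
# dropped before it is ever stored.
ESSENTIAL_FIELDS = {"email", "shipping_address"}

def minimize(raw_record: dict) -> dict:
    """Keep only the fields essential to rendering the service."""
    return {k: v for k, v in raw_record.items() if k in ESSENTIAL_FIELDS}

def purge_rendered(records: list[dict],
                   grace: timedelta = timedelta(days=30)) -> list[dict]:
    """Delete records once the service has been rendered and a short
    grace period has passed (records carry a 'rendered_at' timestamp)."""
    cutoff = datetime.now(timezone.utc) - grace
    return [r for r in records
            if r.get("rendered_at") is None or r["rendered_at"] >= cutoff]

# Example: a checkout form that also captured a phone number and birthday
# stores only the two essential fields.
stored = minimize({"email": "a@b.com", "shipping_address": "123 Main St",
                   "phone": "555-0100", "birthday": "1990-01-01"})
```

Fewer fields collected and a hard deletion clock mean fewer breadcrumbs for anyone to aggregate in the first place.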

Second, invest in the Office of Technology at the Federal Trade Commission. The FTC plays a key role in identifying emerging unfair and deceptive practices. Whether the agency can perform that important role turns on its expertise and resources. Chair Lina Khan recognized as much when she initially created the office. Congress is now debating how much funding to provide to this essential part of privacy regulation and enforcement. Lawmakers should follow the guidance of a bipartisan group of FTC commissioners and ensure that the office can recruit and retain leading experts and obtain new technological resources.

It took decades after the introduction of the automobile for the American public to support seat belt requirements. Only after folks like Ralph Nader thoroughly documented that we were unsafe at any speed did popular support swing squarely behind additional protections. Let’s not wait for decades of privacy catastrophes to realize that we’re currently unsafe upon any scroll. Now’s the time for robust and sustained action to further consumer privacy.


Read More

An urgent look at the risks of unregulated artificial intelligence—from job loss and environmental strain to national security threats—and the growing political battle to regulate AI in the United States.

Getty Images, Douglas Rissing

AI Has Put Humanity on the Ballot

AI may not be the only existential threat out there, but it is coming for us the fastest. When I started law school in 2022, AI could barely handle basic math, but by graduation, it could pass the bar exam. Instead of taking the bar myself, I rolled immediately into a Master of Laws in Global Business Law at Columbia, where I took classes like Regulation of the Digital Economy and Applied AI in Legal Practice. By the end of the program, managing partners were comparing using AI to working with a team of associates; the CEO of Anthropic is now warning that it will be more capable than everyone in less than two years.

AI is dangerous in ways we are just beginning to see. Data centers that power AI require vast amounts of water to keep their servers cool, yet two-thirds of them sit in places already facing high water stress, with researchers estimating that water needs could grow from 60 billion liters in 2022 to as high as 275 billion liters by 2028. By then, data centers’ share of U.S. electricity consumption could nearly triple.

A lawsuit against xAI over AI-generated deepfakes targeting teenage girls exposes a growing crisis in schools. As laws struggle to keep up, this story explores AI accountability, teen safety, and what educators and parents must do now.

Getty Images, Andrew Harnik

Deepfakes: The New Face of Cyberbullying and Why Parents, Schools, and Lawmakers Must Act

As a former teacher who worked in a high school when Snapchat was born, I witnessed the birth of sexting and its impact on teens. I recall asking a parent whether he was checking his daughter’s phone for inappropriate messages. His response was, “Sometimes you just don’t want to know.” But the federal lawsuit filed last week against Elon Musk’s xAI has put a national spotlight on AI-generated deepfakes and the teenage girls they target. Parents and teachers can’t ignore the crisis inside our schools.

AI Companies Built the Tool. The Grok Lawsuit Says They Own the Damage.

French prosecutors have theorized that Elon Musk deliberately allowed the sexualized image controversy to grow so that it would drive up activity on the platform and boost the company’s valuation. Whether or not that theory is true, when a company decides to build a tool, knows that it can be weaponized and chooses to release it anyway, it is making a risk-based decision in the belief that it can act without consequence. The Grok lawsuit could make these types of business decisions much more costly.

Amazon’s court loss over Just Walk Out highlights a deeper issue: employers are increasingly collecting workers’ biometric data without meaningful consent. Explore the growing conflict between workplace surveillance, privacy rights, and outdated U.S. laws.

Getty Images, Deagreez

The Quiet Rise of Employee Surveillance

Amazon’s loss in court over its attempt to shield the source code behind its Just Walk Out technology is a small win for shoppers, but the bigger story is how employers are quietly collecting biometric data from their own workers.

From factories to Fortune 500 companies, employers are demanding fingerprints, palmprints, retinal scans, facial scans, or even voice prints. These biometric technologies are eroding the boundary between workplace oversight and employee autonomy, often without consent or meaningful regulation.

Apple’s upcoming AI-powered wearables highlight growing privacy risks as the right to record police faces increasing threats. The death of Alex Pretti raises urgent questions about surveillance, civil liberties, and accountability in the digital age.

Getty Images, aislan13

AI Wearables and the Rising Risk of Recording Police

Last month, Apple announced the development of three wearable smart devices, all equipped with built-in cameras. The company has its sights set on 2027 for the release of its new smart glasses, AI pendant, and AirPods with a built-in camera, all of which will offer AI features to users. As the market for wearables with smart-recording capabilities expands, so does the risk that comes with how users choose to deploy the technology.

In Minneapolis in January, Alex Pretti was killed after an encounter with federal agents while filming them with his phone. He was not a suspect in a crime. He was not interfering, but was doing what millions of Americans now instinctively do when they see state power in motion: witnessing.
