The end of privacy?


Frazier is an assistant professor at the Crump College of Law at St. Thomas University and a Tarbell fellow.

Americans have become accustomed to leaving bread crumbs of personal information scattered across the internet. Our scrolls are tracked. Our browsing histories are logged. Our searches are analyzed. For a long time, ignoring this data collection seemed sensible. Who would bother to pick up and reconfigure those crumbs?

On the off chance someone did manage to hoover up some important information about you, the costs seemed manageable. Haven’t we all been notified that our password is insecure or our email has been leaked? The sky didn’t fall for most of us, so we persisted with admittedly lazy but defensible internet behavior.


Artificial intelligence has made what was once defensible a threat to our personal autonomy. Our indifference to data collection now exposes us to long-lasting and significant harms. We now live in the “inference economy,” according to Professor Alicia Solow-Niederman. Information that used to be swept up in the tumult of the internet can now be scraped, aggregated and exploited to decipher sensitive information about you. As Solow-Niederman explains, “seemingly innocuous or irrelevant data can generate machine learning insights, making it impossible for an individual to anticipate what kinds of data warrant protection.”

Our legal system does not seem ready to protect us. Privacy laws enacted in the early years of the internet reflect a bygone era. They protect bits and pieces of sensitive information, but they do not create the sort of broad shield required in an inference economy.

The shortcomings of our current system don’t end there. AI allows a broader set of bad actors to engage in fraudulent and deceptive practices. The fault in this case isn’t the substance of the law (such practices have long been illegal) but the enforcement of those laws. As more actors learn how to exploit AI, it will become harder and harder for law enforcement to keep pace.

Privacy has been a regulatory weak point for the United States. A federal data privacy law has been discussed for decades and kicked down the road for just as long. This trend must come to an end.

The speed, scale and severity of privacy risks posed by AI require a significant update to our privacy laws and enforcement agencies. Rather than attempt to outline each of those updates, I’ll focus on two key actions.

First, enact a data minimization requirement. In other words, mandate that companies collect and retain only the information essential to the service they provide to a consumer. Relatedly, companies should delete that information once the service has been rendered. This straightforward provision would reduce the total number of bread crumbs and, consequently, the odds of a bad actor gathering personal and important information about you.
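To make that obligation concrete, the sketch below shows one way a data minimization rule might look inside a company’s systems. It is a minimal illustration in Python; the field names, record structure, and grace period are assumptions made for the example, not anything drawn from an existing law or product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of a data minimization policy. The field names,
# record shape, and 30-day grace period are illustrative assumptions,
# not requirements drawn from any actual statute.

ESSENTIAL_FIELDS = {"email", "shipping_address"}  # assumed minimum needed to fulfill an order


@dataclass
class CustomerRecord:
    data: dict
    completed_at: datetime | None = None  # set (in UTC) when the service has been rendered


def collect(raw_form: dict) -> CustomerRecord:
    """Retain only the fields essential to the service; discard everything else."""
    return CustomerRecord(
        data={k: v for k, v in raw_form.items() if k in ESSENTIAL_FIELDS}
    )


def purge_completed(records: list[CustomerRecord],
                    grace: timedelta = timedelta(days=30)) -> list[CustomerRecord]:
    """Delete records once the service is complete and a grace period has passed."""
    now = datetime.now(timezone.utc)
    return [
        r for r in records
        if r.completed_at is None or now - r.completed_at < grace
    ]
```

The particulars would differ from service to service; the point is that the default shifts from keeping everything indefinitely to collecting less and deleting sooner.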

Second, invest in the Office of Technology at the Federal Trade Commission. The FTC plays a key role in identifying emerging unfair and deceptive practices. Whether the agency can perform that important role turns on its expertise and resources. Chair Lina Khan recognized as much when she created the office. Congress is now debating how much funding to provide to this essential part of privacy regulation and enforcement. Lawmakers should follow the guidance of a bipartisan group of FTC commissioners and ensure the office can recruit and retain leading experts and acquire new technological resources.

It took decades after the introduction of the automobile for the American public to support seat belt requirements. Only after folks like Ralph Nader thoroughly documented that we were unsafe at any speed did popular support swing squarely behind additional protections. Let’s not wait for decades of privacy catastrophes to realize that we’re currently unsafe upon any scroll. Now’s the time for robust and sustained action to further consumer privacy.


Read More

New Cybersecurity Rules for Healthcare? Understanding HHS’s HIPAA Proposal

Background

The Health Insurance Portability and Accountability Act (HIPAA) was enacted in 1996 to protect sensitive health information from being disclosed without patients’ consent. Under this act, a patient’s privacy is safeguarded through the enforcement of strict standards on managing, transmitting, and storing health information.

In Defense of AI Optimism

A case for optimism, risk-taking, and policy experimentation in the age of AI, and why pessimism threatens technological progress.

Society needs people to take risks. Entrepreneurs who bet on themselves create new jobs. Institutions that gamble with new processes find out how best to integrate advances into modern life. Regulators who accept potential backlash by launching policy experiments give us a chance to devise laws that are based on evidence, not fear.

The need for risk-taking is all the more important when society is presented with new technologies. When new tech arrives on the scene, defense of the status quo is the easier path: individually, institutionally, and societally. We are all predisposed to think that the calamities, ailments, and flaws we experience today, as bad as they may be, are preferable to the unknowns tied to tomorrow.

Trump Signs Defense Bill Prohibiting China-Based Engineers in Pentagon IT Work

President Donald Trump signed into law this month a measure that prohibits anyone based in China and other adversarial countries from accessing the Pentagon’s cloud computing systems.

The ban, which is tucked inside the $900 billion defense policy law, was enacted in response to a ProPublica investigation this year that exposed how Microsoft used China-based engineers to service the Defense Department’s computer systems for nearly a decade — a practice that left some of the country’s most sensitive data vulnerable to hacking from its leading cyber adversary.

Why Workplace Wellbeing AI Needs a New Ethics of Consent

AI-powered wellness tools promise care at work but raise serious questions about consent, surveillance, and employee autonomy.

Across the U.S. and globally, employers—including corporations, healthcare systems, universities, and nonprofits—are increasing investment in worker well-being. The global corporate wellness market reached $53.5 billion in sales in 2024, with North America leading adoption. Corporate wellness programs now use AI to monitor stress, track burnout risk, or recommend personalized interventions.

Vendors offering AI-enabled well-being platforms, chatbots, and stress-tracking tools are rapidly expanding. Chatbots such as Woebot and Wysa are increasingly integrated into workplace wellness programs.
