The end of privacy?

Photo: a person hacking a website (Bill Hinton/Getty Images)

Frazier is an assistant professor at the Crump College of Law at St. Thomas University and a Tarbell fellow.

Americans have become accustomed to leaving bread crumbs of personal information scattered across the internet. Our scrolls are tracked. Our website histories are logged. Our searches are analyzed. For a long time, the practice of ignoring this data collection seemed sensible. Who would bother to pick up and reconfigure those crumbs?

On the off chance someone did manage to hoover up some important information about you, the costs seemed manageable. Haven’t we all been notified that our password is insecure or our email has been leaked? The sky didn’t fall for most of us, so we persisted with admittedly lazy but defensible internet behavior.


Artificial intelligence has made what was once defensible a threat to our personal autonomy. Our indifference to data collection now exposes us to long-lasting and significant harms. We now live in the “inference economy,” according to Professor Alicia Solow-Niederman. Data that used to be lost in the tumult of the internet can now be scraped, aggregated and exploited to decipher sensitive information about you. As Solow-Niederman explains, “seemingly innocuous or irrelevant data can generate machine learning insights, making it impossible for an individual to anticipate what kinds of data warrant protection.”

Our legal system does not seem ready to protect us. Privacy laws enacted in the early years of the internet reflect a bygone era. They protect bits and pieces of sensitive information, but they do not create the sort of broad shield required in an inference economy.

The shortcomings of our current system don’t end there. AI allows a broader set of bad actors to engage in fraudulent and deceptive practices. The fault in this case isn’t the substance of the law — such practices have long been illegal — but rather enforcement of those laws. As more actors learn how to exploit AI, it will become harder and harder for law enforcement to keep pace.

Privacy has been a regulatory weak point for the United States. A federal data privacy law has been discussed for decades and kicked down the road for just as long. This trend must come to an end.

The speed, scale and severity of privacy risks posed by AI require a significant update to our privacy laws and enforcement agencies. Rather than attempt to outline each of those updates, I’ll focus on two key actions.

First, enact a data minimization requirement. In other words, mandate that companies collect and retain only the information essential to the service they provide to a consumer, and delete that information once the service has been rendered. This straightforward provision would reduce the total number of bread crumbs and, consequently, reduce the odds of a bad actor gathering personal and important information about you.
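To make the idea concrete, here is a minimal, hypothetical sketch of what data minimization could look like inside a company's own systems. The shipping scenario, the field names and the MinimalDataStore class are illustrative assumptions for this sketch, not anything prescribed by existing or proposed law.

```python
# Hypothetical illustration only: data minimization for an imaginary
# shipping service. Collect only what the service needs, then delete it.

ESSENTIAL_FIELDS = {"name", "shipping_address", "order_id"}


class MinimalDataStore:
    """Keep only what the service needs, and delete it when the service ends."""

    def __init__(self):
        self._records = {}

    def collect(self, order_id, submitted_data):
        # Retain only the essential fields; everything else is dropped at intake.
        record = {k: v for k, v in submitted_data.items() if k in ESSENTIAL_FIELDS}
        self._records[order_id] = record
        return record

    def service_rendered(self, order_id):
        # Purge the customer's record once the order has shipped.
        self._records.pop(order_id, None)


store = MinimalDataStore()
store.collect("A123", {
    "name": "Jane Doe",
    "shipping_address": "1 Main St",
    "order_id": "A123",
    "birthdate": "1990-01-01",        # never stored: not needed to ship a package
    "browsing_history": ["..."],      # never stored: not needed to ship a package
})
store.service_rendered("A123")        # record deleted once delivery is complete
```

The point is not the code itself but the default it encodes: nothing unnecessary comes in, and nothing lingers after the job is done.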

Second, invest in the Office of Technology at the Federal Trade Commission. The FTC plays a key role in identifying emerging unfair and deceptive practices. Whether the agency can perform that important role turns on its expertise and resources. Chair Lina Khan recognized as much when she initially created the office. Congress is now debating how much funding to provide to this essential part of privacy regulation and enforcement. Lawmakers should follow the guidance of a bipartisan group of FTC commissioners and ensure that office can recruit and retain leading experts as well as obtain new technological resources.

It took decades after the introduction of the automobile for the American public to support seat belt requirements. Only after folks like Ralph Nader thoroughly documented that we were unsafe at any speed did popular support squarely come to the side of additional protections. Let’s not wait for decades of privacy catastrophes to realize that we’re currently unsafe upon any scroll. Now’s the time for robust and sustained action to further consumer privacy.
