The end of privacy?


Frazier is an assistant professor at the Crump College of Law at St. Thomas University and a Tarbell fellow.

Americans have become accustomed to leaving bread crumbs of personal information scattered across the internet. Our scrolls are tracked. Our website histories are logged. Our searches are analyzed. For a long time, the practice of ignoring this data collection seemed sensible. Who would bother to pick up and reconfigure those crumbs?

On the off chance someone did manage to hoover up some important information about you, the costs seemed manageable. Haven’t we all been notified that our password is insecure or our email has been leaked? The sky didn’t fall for most of us, so we persisted with admittedly lazy but defensible internet behavior.


Artificial intelligence has made what was once defensible a threat to our personal autonomy. Our indifference to data collection now exposes us to long-lasting and significant harms. We now live in the “inference economy,” according to professor Alicia Solow-Niederman. Information that used to be swept up in the tumult of the internet can now be scraped, aggregated and exploited to decipher sensitive information about you. As Solow-Niederman explains, “seemingly innocuous or irrelevant data can generate machine learning insights, making it impossible for an individual to anticipate what kinds of data warrant protection.”

Our legal system does not seem ready to protect us. Privacy laws enacted in the early years of the internet reflect a bygone era. They protect bits and pieces of sensitive information but they do not create the sort of broad shield that’s required in an inference economy.

The shortcomings of our current system don’t end there. AI allows a broader set of bad actors to engage in fraudulent and deceptive practices. The fault in this case isn’t the substance of the law — such practices have long been illegal — but rather enforcement of those laws. As more actors learn how to exploit AI, it will become harder and harder for law enforcement to keep pace.

Privacy has been a regulatory weak point for the United States. A federal data privacy law has been discussed for decades and kicked down the road for just as long. This trend must come to an end.

The speed, scale and severity of privacy risks posed by AI require a significant update to our privacy laws and enforcement agencies. Rather than attempt to outline each of those updates, I’ll focus on two key actions.

First, enact a data minimization requirement. In other words, mandate that companies collect and retain only the information essential to the service they provide to a consumer. Relatedly, companies should delete that information once the service has been rendered. This straightforward provision would reduce the total number of bread crumbs and, consequently, reduce the odds of a bad actor gathering personal and important information about you.

Second, invest in the Office of Technology at the Federal Trade Commission. The FTC plays a key role in identifying emerging unfair and deceptive practices. Whether the agency can perform that important role turns on its expertise and resources. Chair Lina Khan recognized as much when she initially created the office. Congress is now debating how much funding to provide to this essential part of privacy regulation and enforcement. Lawmakers should follow the guidance of a bipartisan group of FTC commissioners and ensure that office can recruit and retain leading experts as well as obtain new technological resources.

It took decades after the introduction of the automobile for the American public to support seat belt requirements. Only after folks like Ralph Nader thoroughly documented that we were unsafe at any speed did popular support squarely come to the side of additional protections. Let’s not wait for decades of privacy catastrophes to realize that we’re currently unsafe upon any scroll. Now’s the time for robust and sustained action to further consumer privacy.

Read More

Rather than blame AI for young Americans struggling to find work, we need to build: build new educational institutions, new retraining and upskilling programs, and, most importantly, new firms.


Blame AI or Build With AI? Only One Approach Creates Jobs

We’re failing young Americans. Many of them are struggling to find work. Unemployment among 16- to 24-year-olds topped 10.5% in August. Even those who do find a job are often settling for lower-paying roles. More than 50% of college grads are underemployed. To make matters worse, the path to a more stable, lucrative career is seemingly up in the air. High school grads in their twenties find jobs at nearly the same rate as those with four-year degrees.

We have two options: blame or build. The first involves blaming AI, as if this new technology were entirely responsible for the current economic malaise facing Gen Z. This course of action involves slowing or even stopping AI adoption. For example, there are so-called robot taxes. The thinking goes that by placing financial penalties on firms that lean into AI, there will be more roles left for Gen Z and workers in general. Then there’s the idea of banning or limiting the use of AI in hiring and firing decisions. Applicants who have struggled to find work suggest that increased use of AI may be partially at fault. Others have called for giving workers a greater say in whether and to what extent their firm uses AI. This may help firms find ways to integrate AI in a way that augments workers rather than replacing them.

Parv Mehta Is Leading the Fight Against AI Misinformation

At a moment when the country is grappling with the civic consequences of rapidly advancing technology, Parv Mehta stands out as one of the most forward‑thinking young leaders of his generation. Recognized as one of the 500 Gen Zers named to the 2025 Carnegie Young Leaders for Civic Preparedness cohort, Mehta represents the kind of grounded, community‑rooted innovator the program was designed to elevate.

A high school student from Washington state, Parv has emerged as a leading youth voice on the dangers of artificial intelligence and deepfakes. He recognized early that his generation would inherit a world where misinformation spreads faster than truth—and where young people are often the most vulnerable targets. Motivated by years of computer science classes and a growing awareness of AI’s risks, he launched a project to educate students across Washington about deepfake technology, media literacy, and digital safety.


As Australia bans social media for kids under 16, U.S. parents face a harder truth: online safety isn’t an individual choice; it’s a collective responsibility.


Parents Must Quit Infighting to Keep Kids Safe Online

Last week, Australia’s social media ban for children under age 16 officially took effect. It remains to be seen how this law will shape families’ behavior, but it’s at least a stand against the tech takeover of childhood. Here in the U.S., we’re in a different boat: a consensus on what’s best for kids feels much harder to come by among both lawmakers and parents.

To make true progress on this issue, we must resist the fallacy of parental individualism: the idea that what you choose for your own child is up to you alone, and that allowing smartphones, certain apps, or social media is a personal or family decision. But it’s not a personal decision. The choice you make for your family and your kids affects them and their friends, their friends’ siblings, their classmates, and so on. If there is no general consensus around parenting decisions when it comes to tech, all kids are affected.


As screens replace toys, childhood is being gamified. What this shift means for parents, play, development, and holiday gift-giving.


The Christmas When Toys Died: The Playtime Paradigm Shift Retailers Failed to See Coming

Something is changing this Christmas, and parents everywhere are feeling it. Bedrooms overflow with toys no one touches, while tablets steal the spotlight, pulling children as young as five into digital worlds that retailers are slow to recognize. The shift is quiet but unmistakable, and many parents are left wondering what toy purchases even make sense anymore.

Research shows that higher screen time correlates with significantly lower engagement in other play activities, mainly traditional, physical, unstructured play, suggesting that screen-based play is displacing classic play with traditional toys. Families are experiencing in real time what experts increasingly describe as the rise of “gamified childhoods.”
