Generative AI and its rapid incorporation into advertising

Getty Images

Madelyn Sanfilippo is an assistant professor in the School of Information Sciences at the University of Illinois Urbana-Champaign and book series editor for Cambridge Studies on Governing Knowledge Commons. She is a Public Voices Fellow of The OpEd Project.

I often see online advertisements recruiting me to participate in research studies for Sanfilippo Syndrome. Fortunately, I don’t have this rare genetic condition; it’s simply my last name. Just as often, I see ads for college degree programs and toys.


As a professor who studies privacy and technology policy, these ads make sense; systems know things about me, even if imperfectly. I’m not looking to go back to school, but I am a parent who buys lots of toys.

While many of us shrug at personalized ads — maybe even buying one of the many products or services tailored to our wants and needs — the truth is that targeted advertising is unnerving and inappropriate.

It is, after all, the result of surveillance capitalism, or very simply: a system that has every financial incentive to capture, save, and use as much data about us as possible at all times. People are data, and data are a commodity. If this is true, we’re not just the users of technology; we’re also the used. Our identities are bought and sold, while we pay for that privilege.

And it’s about to get worse.

As generative artificial intelligence enters this system, we can and should expect to see more — more manipulation, more prediction (which can be both more unnerving and less accurate), and more biased ads.

Imagine ads promoting sales that are targeted to us based on race, or on a vague, stereotype-driven guess at what our race or ethnicity might be. Would it be OK for someone to pay more or less for the same items or services based on their race?

Think of the most manipulative ads you’ve ever seen: ones that create a false sense of urgency or promise incentives while concealing hidden costs. What if every advertisement you ever saw did this? We cannot simply accept this.

Broadly, practices of targeting and prediction in advertising are not new, but the recent evolution of generative AI and its rapid incorporation into advertising — in recent weeks, Google and Meta have rolled out products toward this end — pose new and real challenges.

Generative AI advertisements have the potential to scale even further, target more directly, and adapt in real time to increase clicks or purchases based on our behavior, context, and attributes.

Of course, many will see this as an exciting opportunity or the natural extension of the modern digital economy. However, this speaks to profits at the expense of the average person and is a major issue for consumer protection, or, as perhaps we ought to see it, protection of people.

We cannot wait to see the harm unfold and act in a panicked response to the inevitable discriminatory ads or dark patterns that will emerge, as they have from less sophisticated attempts to capture our attention and wallets.

As individuals, we can monitor our ad preferences on many platforms. Google, YouTube, Facebook, Instagram, X, and TikTok all offer some user control or transparency — but this is not enough.

As a society, we need oversight to address these issues at scale. We need legislation like the bipartisan Digital Consumer Protection Commission Act, introduced by U.S. Sens. Lindsey Graham, R-S.C., and Elizabeth Warren, D-Mass.

Existing consumer protection infrastructure is overcommitted and lacks the technical expertise needed to evaluate concerns about new uses of AI, like advertising. This proposal would create a new federal agency, bringing together experts who can address privacy concerns, deceptive pricing, and bias due to AI and data brokers in advertising — without putting the burden on us to protect ourselves.

People are not merely users. People are not merely consumers. We are citizens. We are parents and children and sisters and brothers. We are friends. Reducing us to users and consumers makes us less real. If we are merely data in this discussion, dehumanized and abstracted, the problem almost becomes one of math — which tends to make people both less interested and more accepting of the supposed objectivity of these practices.

When privacy comes up in popular culture, the focus is usually on social media alone, or perhaps on a tradeoff with security in discussions of government surveillance. It’s rarely about advertising — but it needs to be.

