On Live Facial Recognition in the City: We Are Not Guinea Pigs, and We Are Not Disposable

Opinion

New Orleans fights a facial recognition ordinance as residents warn of privacy risks, mass surveillance, and threats to immigrant communities.

Every day, I ride my bike down my block in Milan, a tight-knit residential neighborhood in central New Orleans. And every day, a surveillance camera follows me down the block.

Despite the rosy rhetoric of pro-surveillance politicians and facial recognition vendors, that camera doesn’t make me safer. In fact, it puts everyone in New Orleans at risk.


On Aug. 21, the New Orleans City Council withdrew a live facial recognition ordinance after months of community organizations fighting back and loudly opposing it. A council member's office confirmed that the ordinance was removed pending edits, suggesting that a new version will be introduced. If this or a similar surveillance ordinance is approved, Louisiana would become the first state in the nation with a city-wide biometric surveillance network capable of tracking hundreds of thousands of residents in real time.

That’s not a step we want to take. Once invasive surveillance technology like that ends up in the hands of the government, there are no guardrails or oversight mechanisms powerful enough to protect our freedom and our privacy from bad actors, corrupt politicians, hackers, and anyone who doesn’t have our best interest at heart.

Expanding real-time facial recognition to all city cameras would mark an unprecedented shift toward mass surveillance for the whole country. It would build the infrastructure for a database recording our facial features, personal characteristics, and whereabouts every time we stepped outside our front doors. All of that data, even if eventually deleted, can be used to train artificial intelligence to get better at recognizing and tracking us over time.

Disturbingly, a collection of cameras positioned across New Orleans is already capable of tracking residents' every move, recording our data, and trying to match our faces against databases of millions of images. These cameras were never approved by the people of New Orleans. As bombshell revelations in the Washington Post made clear, they were set up by Project NOLA, a crime-prevention nonprofit that has been secretly spying on New Orleans residents with live facial recognition cameras for years. The cameras sit at undisclosed locations around the city, and most importantly, police use of this technology has been outlawed since the local community rallied behind a surveillance ban in 2021.

Enough is enough. Time and again, New Orleans has been used as a testing ground for disempowering programs aimed at our Black and brown communities: not only secretive, racist mass surveillance tech but also a racist charter school system that has degraded our youth's education. We have been treated as a sacrifice zone for oil, gas, and plastic plants that destroy our ecosystem and poison our health, giving us the highest rates of cancer in the country. We are not guinea pigs, and we are not disposable.

As an immigrant, I am desperately sounding the alarm about how devastating this surveillance ordinance would be for all New Orleanians, including our migrant communities. All over the country, our people are being snatched off the street and our families are being separated, and in New Orleans, even our U.S. citizen babies with cancer are being deported. If we roll out real-time facial recognition in New Orleans, we have to expect that our facial recognition data will be demanded by ICE, requested by Louisiana police, or even hacked by anti-immigrant groups, empowering Trump's agenda of terrorizing our immigrant communities and violating their fundamental human rights.

Instead of doubling down and investing in costly, racist technology, we should refocus on the root causes of crime and harm. Twenty-six percent of all adults in New Orleans have low literacy levels. At 22.6%, our poverty rate dwarfs the national average of 10%. Dystopian face surveillance doesn't solve those problems, but it does put us all at risk. The good news is we already know how to do better. Just last week, The Advocate editorialized about the many community programs and nonprofit efforts that are successfully reducing crime in Louisiana year after year.

Our elected officials have a duty to their constituents: to protect our freedoms, defend our dignity, and keep us safe. Our problems can’t be solved with more cameras and surveillance; they have deep systemic roots that have to be addressed.


Edith Romero is a Honduran community organizer with Eye On Surveillance, a researcher, a writer, and a Public Voices fellow of The OpEd Project, The National Latina Institute for Reproductive Justice, and the Every Page Foundation.
