Why Workplace Wellbeing AI Needs a New Ethics of Consent

Opinion

AI-powered wellness tools promise care at work, but raise serious questions about consent, surveillance, and employee autonomy.

Across the U.S. and globally, employers—including corporations, healthcare systems, universities, and nonprofits—are increasing investment in worker well-being. The global corporate wellness market reached $53.5 billion in sales in 2024, with North America leading adoption. Corporate wellness programs now use AI to monitor stress, track burnout risk, or recommend personalized interventions.

Vendors offering AI-enabled well-being platforms, chatbots, and stress-tracking tools are rapidly expanding. Chatbots such as Woebot and Wysa are increasingly integrated into workplace wellness programs.


Recently, Indian health platform Tata 1mg partnered with payroll fintech OneBanc to integrate AI-driven corporate healthcare directly into payroll systems, embedding wellness analytics into routine employment infrastructure rather than treating mental-health support as a separate benefit. Similar deployments are emerging across sectors.

While no public data reliably quantify how many workers use AI wellness tools, market growth and vendor proliferation suggest these systems already reach millions of workers. The market for chatbot-based mental-health apps alone is estimated at $2.1 billion in 2025, projected to grow to $7.5 billion by 2034.

Observers report that AI can potentially enhance workplace wellness by analyzing patterns of employee fatigue, scheduling micro-breaks, and flagging early signs of overload. Tools such as Virtuosis AI can analyze voice and speech patterns during meetings to detect worker stress and emotional strain.

On the surface, these technologies promise care, prevention, and support.

Imagine your supervisor asking, “Would you like to try this new AI tool that helps monitor stress and well-being? Completely optional, of course.”

The offer sounds supportive, even generous. But if you are like most employees, you do not truly feel free to decline. Consent offered in the presence of managerial power is never just consent—it is a performance, often a tacit obligation. And as AI well-being tools seep deeper into workplaces, this illusion of choice becomes even more fragile.

The risks are no longer hypothetical: Amazon has faced public criticism over wellness-framed, productivity-linked workplace monitoring, raising concerns about how well-being rhetoric can justify expanding surveillance.

At the center of this tension is the ideal of informed consent, which for decades has been the ethical backbone of data collection. If people are told what data is gathered, how it will be used, and what risks it carries, then their agreement is considered meaningful. But this model fails when applied to AI-driven well-being tools.

First, informed consent assumes a single, static moment of agreement, while AI systems operate continuously. A worker may click "yes" once, but the system collects behavioral and physiological signals throughout the day, none of which were fully foreseeable when the worker agreed. Consent is a one-time act; the data collection continues indefinitely.

Second, the information workers receive at the point of consent is often inadequate or vague. Privacy notices promise that data will be "aggregated," "anonymized," or used to "improve engagement," phrases that obscure the reality that AI systems generate inferences about mood, stress, or disengagement. Even when disclosures are technically correct, they are too complex for workers to meaningfully understand. Workers end up consenting amid power inequities and socio-organizational complexities.

Third, there is consent fatigue. Workers face constant prompts: policy updates, cookie banners, new app permissions. Eventually, one clicks "yes" simply to continue working. Consent becomes a reflex of convenience rather than a choice.

To be sure, workplaces have made meaningful progress in supporting well-being, and AI can genuinely help when implemented thoughtfully.

Many organizations have expanded mental health benefits and adopted flexible or hybrid work models shown to reduce stress and improve work–life balance. Likewise, empirical research suggests AI can indirectly enhance well-being by improving task optimization and workplace safety.

Such advances in workplace AI tools are critical. Yet even with expanded structural support and promising technologies, workplace norms and expectations have not kept pace. That gap shapes how well-being tools are experienced, often leaving workers feeling compelled to say yes even when participation is framed as "optional."

Even perfect consent notices cannot overcome workplace power. Workers know that managers control evaluations, promotions, and workloads. Declining a “voluntary” well-being tool can feel risky, even if the consequences are unspoken. Consent becomes a reflection of workplace politics rather than an expression of personal autonomy.

Drawing from feminist theories of sexual consent, the FRIES model of affirmative consent (Freely given, Reversible, Informed, Enthusiastic, and Specific) provides a sharp lens for evaluating workplace use of AI. It echoes the feminist, sex-positive shift from a "no means no" standard to a "yes means yes" understanding of consent.

Consent is not freely given when declining feels risky. It is not reversible when withdrawing later invites scrutiny. It is not informed when AI inference is opaque or evolving. It is rarely enthusiastic; many workers say yes out of self-protection. And it is almost never specific; opting into a single function often authorizes far more data collection than workers realize.


In our own research on workplace well-being technologies, workers stressed that meaningful consent requires changes not only to the technology but also to the policies and organizational practices around it. Workplace consent, in other words, is a structural problem, one that requires socio-technical solutions, not just better disclosure screens.

If employers want meaningful consent, they must move beyond checkbox compliance and create conditions where affirmative and continuous consent is truly possible. Participation must be genuinely voluntary.

Opting out must have no social or professional penalty—neither explicit nor implicit. Data practices need to be transparent and auditable. Most importantly, well-being must be grounded in organizational culture—not in the hope that an algorithm can fix structural problems or unrealistic expectations.

The real challenge is not perfecting AI that claims to care for workers but building workplaces where care is already embedded—where consent is real, autonomy is respected, and technology supports people.


Dr. Koustuv Saha is an Assistant Professor of Computer Science at the University of Illinois Urbana-Champaign’s (UIUC) Siebel School of Computing and Data Science and is a Public Voices Fellow of The OpEd Project.
