Q&A with Stephanie Bell on the rise of AI and civil society’s response


With the advent of large-scale artificial intelligence systems, the pace of change in our rapidly morphing world is accelerating even more. It is hard to make sense of what AI portends for our lives, our work, our communities, and society writ large. The public discussion of these swirling technological developments focuses on what governments and/or corporations should do (or not) in response to them. But what about civil society? What role should associations and movements operating outside of the constraints facing the public and private sectors play in guiding AI’s path in constructive directions?

I recently caught up with a social sector leader uniquely positioned to help us wrestle with these questions. Stephanie Bell is Senior Research Scientist at the Partnership on AI. The Partnership is a nonprofit whose mission is “bringing diverse voices together across global sectors, disciplines, and demographics so developments in AI advance positive outcomes for people and society.”


I first got to know Stephanie over a decade ago when she was a precociously adept and fun colleague at The Bridgespan Group. She then went on to study at Oxford on a Rhodes Scholarship. Her doctoral research was grounded in ethnographic work with Equal Education, a student movement based in Khayelitsha, a township outside of Cape Town, South Africa, that works to reform the public education system. She focused on the question of “how people combine their everyday expertise with technocratic knowledge to advocate for change in democratic societies.”

Subsequently, Stephanie rose through the ranks at McKinsey & Company. While there–still an anthropologist, albeit in a very different context–she focused on economic development, the future of work, and the challenges people face in preparing for it. Stephanie left McKinsey to join the Partnership on AI in 2020 with the goal of “setting AI on a better path for workers around the world.”

The following conversation has been lightly edited and condensed for clarity.

Daniel: Stephanie, it is so good to be catching up with you on this important topic. If you’ll forgive me, I want to start with a big question: at the highest level, what do you see as the promise of AI–as well as the potential peril it poses for our society?

Stephanie: Let me take it even higher! To me the right question is: how does AI relate to human flourishing, or the destruction thereof? Any new technology should be considered from the perspective of: what does this contribute to humanity, or the planet? If it doesn't contribute, then it's not genuine progress.

There are some visions of the far-off future where we end up with Fully Automated Luxury Communism, in which we all are free to lead our lives in the way we see fit. We have all of the material conditions and support that make that possible. We've solved all our collective governance problems through a combination of artificial intelligence and, I hope, some human ingenuity. As a result, we're thriving. Who could argue with that vision?

But there are an awful lot of hurdles we will have to overcome along the way–both existing models of power and then also the types of power that AI, at least in its current form, is particularly disposed to create. On our current path, the advance of AI has a propensity to become extremely inegalitarian, to exclude the folks who are already somewhat marginalized, to concentrate wealth, and with it power, in the hands of a very small number of companies in a very small number of countries run by a very small number of people. And unless we come up with a different political economy, a different way of thinking about wealth and power, we could end up in a much more serious situation with inequality than we currently find ourselves in.

Daniel: So you are flagging the worsening of inequality as the clear and present danger, not the singularity and Skynet doing us in Terminator-style.

Stephanie: Yes, there are ways for these systems to be horrible that may not involve a malicious superhuman intelligence, or even a superhuman intelligence at all. Some of the sci-fi versions suggest we think about how humanity reacts to ants. That's how a future hyper-intelligent AI system will react to humanity. You just need it to have a careless indifference and things will still work out quite badly for us. Even if the system isn't as smart as we worry it may become, if it's being exploited by bad human actors, then that ends badly too.

Or, even if we put everything in a perfect system where AI can't be exploited, and it does everything the developers want it to do, we could still end up in a place where, as a result of human governance and the failures thereof, AI makes our societies radically more unequal than they currently are, economically and culturally.

Daniel: What do you see as the proper roles for groups outside of government and the market–i.e., in civil society–in responding to AI? How can they help mitigate potential harms and, insofar as it's possible, realize the promise of these new technologies?

Stephanie: Ideally, these technologies end up being created with a much higher degree of diverse participation than they have had so far. That participation could look like any number of things–coming up with ideas for the systems, defining what a future with different features might look like, identifying use cases and implementation types, or surfacing flaws that the relatively homogeneous groups who build AI right now wouldn't necessarily see from their own vantage point.

That is not meant as a critique. There's just a wide variety of experiences of people living on the planet, and each of us is privy to only so many of them at once. To have some blind spots is the nature of all of our existences. Broadening participation so we're able to take more of those ideas and viewpoints into account, to identify and resolve potential conflicts we wouldn't otherwise be able to anticipate, is one way to reduce the likelihood of bad outcomes.

Daniel: I appreciate that you are identifying a near term threat that is much more connected with the problems we already face in the world as AI is coming online. It sounds then like a key role for civil society groups is to increase the perspectives, voices, and participation going into how these systems develop.

Stephanie: Yes, and some groups are particularly well suited to this–community-based organizations representing and speaking on behalf of specific groups. A case in point right now, in a story that's getting a lot of media traction, and rightfully so, is the inclusion of AI considerations in the demands of the Writers Guild of America, the screenwriters striking in Hollywood. That’s an instance where you have a representative body articulating the concerns of a wide group of people, from screenwriters to showrunners to journeymen. They see what AI might mean for their economic livelihood, their passion, their creativity. The WGA is doing exactly what you'd hope any group of workers would do—using their expertise and experience to identify a middle ground acceptable to workers as well as company leaders. Their goal isn't a ban; it's to find a mutually agreeable solution for incorporating this new set of tools into their workflow.

Daniel: Recognizing it's really early, are there places where you see civil society's response to AI as especially encouraging or promising?

Stephanie: Any place where a group knows and represents the needs or concerns of a given community, as well as the challenges these technologies pose–that's a good place to start looking. Some groups are further ahead than others. The problematic use of facial recognition is an area where communities have been actively engaged with the help, for example, of Joy Buolamwini and the Algorithmic Justice League. The ACLU has done some coalition building on this as well.

They have not yet had comprehensive victories. The technologies are still being used. They're still resulting in what I would argue are unjust outcomes. But the mode of engagement and participation has created positive change for those communities.

Ideally, this is the sort of work that would start taking place in a prospective fashion rather than a reactive one. And that, of course, is much harder when people have other demands on their time and may not know that this issue is one that will affect them down the road.

Daniel: Say more about the Partnership on AI and the work you and your colleagues are endeavoring to do. What is the constructive role you are seeking to play?

Stephanie: Tipping my hand a bit, this is why I keep turning to community and multistakeholder solutions. As an organization, we believe that the more people you have around the table who see and know the issues from all different sides, the more likely we are to surface and resolve problems, to find some common ground, and to advance progress in that specific domain.

Let me give you a specific example. I have a colleague who has been working on, and has just launched, a framework for Responsible Practices for Synthetic Media. It focuses on the use of generative AI to create deepfake images and videos–all the stuff we see as concerning for media literacy, critical thinking, democracy; you can name a whole list of concerns here. Those concerns are held not just by civil society advocates like Witness, but by national media organizations like the CBC in Canada or the BBC in the UK, and by major AI companies as well. They are all trying to figure out what responsible use of these technologies looks like. OpenAI was also really active in this effort. So were Adobe, Bumble, and TikTok. That's quite a cast of influential actors in the space to get around the table, to identify what the issues are, and to figure out what solutions look like, at least in a preliminary fashion (because all of these things are changing pretty rapidly), while leaving the door open for further conversation. That's our model.

The people inside these companies recognized this problem themselves. Our organization was founded by AI companies who saw their own limitations in being able to identify and then address ethical or responsible AI issues within their own walls. They were like, let's create a consortium. Let's get a group of people facing the same challenges together. And let's make sure that we get civil society involved because without them, we're still going to be in our own echo chamber and may not be landing on the right solutions.

Daniel: Talk about the work you are spearheading at the Partnership concerning AI and the pathways and pitfalls for achieving shared prosperity through it.

Stephanie: Well, we are just about to release a set of Guidelines for AI and Shared Prosperity—a comprehensive job impact assessment that covers impacts on labor markets and on the quality of jobs, as well as responsible practices for AI-creating companies and AI-using organizations, and suggested uses for labor organizations and policymakers. There's been a lot of speculation about AI and its effects on jobs. This is our attempt to explicitly identify the labor impacts decision-makers should be tracking and working to prevent, and to start moving the field towards action on those fronts.

In keeping with our model of engaging multiple stakeholders in research-based work, we have developed this project under the guidance of a Steering Committee representing labor and worker organizations, technologists, academia, and civil society. It is also deeply informed by research I’ve done to learn from workers already using AI in their jobs.

Daniel: Where are the biggest gaps you see at present in civil society’s response to AI, either in funding, group activity, or imagination?

Stephanie: Let me start with what I know is a deeply idealistic response. We don't have many technologies of this type coming from communities that are currently marginalized within the broader economy or society. There is no powerful, cutting-edge AI lab “for the people and by the people.” This is understandable. It requires a tremendous amount of money to make artificial intelligence—to build the models, train the models, and then run the models. It takes huge amounts of computing power. You need very expensive processors and then a whole lot of server power to make this work, which isn't well suited to small community groups or even large community groups. The ACLU, for example, could not compete with the funding that any given AI lab has.

The US federal government is starting to work on making an AI cloud available to the academic community. I wonder what that could look like from a civil society perspective. What kinds of AI systems would we come up with when we're free from the shareholder capitalism incentives and instead are thinking about our original question: what creates human flourishing? What are the most pressing challenges? How can we tackle them?

But that's a broader comment about power, which AI just happens to sit inside. To take something a little less idealistic, part of changing AI for the better is supporting critics who may take stands that technology companies disagree with. There's a fantastic organization called DAIR, run by Timnit Gebru, that is doing this, for example. The space needs independent organizations like hers taking on watchdog roles.

In addition to funding critics, we also need more of the sort of work we do at the Partnership on AI–getting diverse sets of stakeholders in the room, creating and facilitating space for people to talk about pressing issues, and then driving towards solutions.

A third need involves returning to the question of what community-driven AI looks like. How can we keep it from worsening inequality? What does it take to create a system where technologists are really focused on those sets of concerns?

There are some interesting models like this out there in the gig economy. There's a group called Driver’s Seat that has built a series of tools for Uber, Lyft, DoorDash, and other drivers to maximize their earnings and reduce some of the exploitative practices those companies deploy to squeeze every drop of time and labor out of the drivers who are independent contractors for them.

There's also work that Unite Here, the hospitality union, has been doing to improve the apps that hotel housekeepers use to organize their work. The apps have become very widespread in the industry and brought some benefits that housekeepers have embraced, but they create other problems when it comes to sequencing work in ways that don’t make sense to experienced housekeepers, who have always faced so much chaos in their work. Unite Here said there has to be a better way, and it is collaborating with a professor at Carnegie Mellon to figure it out.

The common ingredient in these last two examples is a community that's organized and able to identify a pressing need and really knows the ins and outs of it. Why does the system work as it does? What does it mean for them as workers within it? What would be better? And then they join forces with publicly minded or civically minded technologists who have the know-how to turn that set of ideas and aspirations into a reality for them.

Daniel: Is there any AI-related issue right now that in your view is being overhyped, where we may need less funding and civil society activity in response to it?

Stephanie: That would be a good problem to have. My perception is there is a widespread lack of funding, especially when it comes to supporting community groups to engage on AI or develop their own solutions. If philanthropy is looking to support groups on the ground, there's no scarcity of those who are looking for resources.

One place where there may be a bit of a gold rush comes from a very particular worldview, which you referred to earlier as the sci-fi doomsday scenario. These are the existential risk considerations: are we building something that's ultimately going to end up killing us, and not that long from now? That issue has a lot of funding. There are big question marks around it now, though, as one of the major funders has been Sam Bankman-Fried.

Daniel: Let’s shift out of the nascent AI field and look out across civil society generally, to the funders and nonprofits working to strengthen democracy, to mitigate climate change, or to alleviate poverty, to take a few examples. What should they be thinking about?

Stephanie: Many of those considerations are going to be affected by whatever AI turns out to be and do. Is AI going to disrupt people's wage-earning ability, the availability of jobs, and the quality of those jobs? If you're an organization focused on poverty alleviation, there are structural trends that AI is very much wrapped up in that will cause real issues down the road. Getting involved sooner rather than later, in a way that anticipates and seeks to address those problems before they start, versus having to be a direct service provider once they’ve played out, would be wise.

The same thing holds with energy usage, environmental considerations, and climate change. Huge amounts of energy are required to run these systems. That's a development that environmental groups should be (and are) anticipating rather than merely reacting to.

But there are opportunities as well. We can use these systems to focus on some of humanity's most pressing needs. There have been awesome advances on that front in healthcare in particular. We are just starting to understand how transformative that could be.

So keeping an eye on the trends would be my biggest piece of advice. To the extent you're starting to hear rumblings about how AI is likely to affect this or that down the road, well, down the road is coming much sooner than you think. And it's worth getting involved in those conversations before the effects actually hit, because that's where you have the opportunity to work with companies to try and prevent those harms before they start.

Daniel: That is a good note to end on. Thank you, Stephanie, for these insights and your critical work. Godspeed on everything you are up to!

This piece originally appeared on The Art of Association.
