Q&A with Stephanie Bell on the rise of AI and civil society’s response


With the advent of large-scale artificial intelligence systems, the pace of living in our rapidly morphing world is accelerating even more. It is hard to make sense of what AI portends for our lives, our work, our communities, and society writ large. The public discussion of these swirling technological developments focuses on what governments and/or corporations should do (or not) in response to them. But what about civil society? What role should associations and movements operating outside of the constraints facing the public and private sectors play in guiding AI’s path in constructive directions?

I recently caught up with a social sector leader uniquely positioned to help us wrestle with these questions. Stephanie Bell is Senior Research Scientist at the Partnership on AI. The Partnership is a nonprofit whose mission is “bringing diverse voices together across global sectors, disciplines, and demographics so developments in AI advance positive outcomes for people and society.”


I first got to know Stephanie over a decade ago when she was a precociously adept and fun colleague at The Bridgespan Group. She then went on to study at Oxford on a Rhodes Scholarship. Her doctoral research was grounded in ethnographic work with Equal Education, a student movement based in Khayelitsha, a township outside of Cape Town, South Africa, working to reform the public education system. She focused on the question of “how people combine their everyday expertise with technocratic knowledge to advocate for change in democratic societies.”


Subsequently, Stephanie rose through the ranks at McKinsey & Company. While there–still an anthropologist, albeit in a very different context–she focused on economic development, the future of work, and the challenges people face in preparing for it. Stephanie left McKinsey to join the Partnership on AI in 2020 with the goal of “setting AI on a better path for workers around the world.”

The following conversation has been lightly edited and condensed for clarity.

Daniel: Stephanie, it is so good to be catching up with you on this important topic. If you’ll forgive me, I want to start with a big question: at the highest level, what do you see as the promise of AI–as well as the potential peril it poses for our society?

Stephanie: Let me take it even higher! To me the right question is: how does AI relate to human flourishing, or the destruction thereof? Any new technology should be considered from the perspective of, what does this contribute to humanity, or the planet? If it doesn't contribute, then it's not genuine progress.

There are some visions of the far-off future where we end up with Fully Automated Luxury Communism, in which we all are free to lead our lives in the way we see fit. We have all of the material conditions and support that make that possible. We've solved all our collective governance problems through a combination of artificial intelligence and, I hope, some human ingenuity. As a result, we're thriving. Who could argue with that vision?

But there are an awful lot of hurdles we will have to overcome along the way–both existing models of power and then also the types of power that AI, at least in its current form, is particularly disposed to create. On our current path, the advance of AI has a propensity to become extremely inegalitarian, to exclude the folks who are already somewhat marginalized, to concentrate wealth, and with it power, in the hands of a very small number of companies in a very small number of countries run by a very small number of people. And unless we come up with a different political economy, a different way of thinking about wealth and power, we could end up in a much more serious situation with inequality than we currently find ourselves in.

Daniel: So you are flagging the worsening of inequality as the clear and present danger, not the singularity and Skynet doing us in Terminator-style.

Stephanie: Yes, there are ways for these systems to be horrible that may not involve a malicious superhuman intelligence, or even a superhuman intelligence at all. Some of the sci-fi versions suggest we think about how humanity reacts to ants: that's how a future hyper-intelligent AI system would react to humanity. It just needs a careless indifference and things will still work out quite badly for us. Even if the system isn't as smart as we worry it may become, if it's being exploited by bad human actors, then that ends badly too.

Or, even if we put everything in a perfect system where AI can't be exploited, and it does everything the developers want it to do, we could still end up in a place where, as a result of human governance and the failures thereof, AI makes our societies radically more unequal than they currently are, economically and culturally.

Daniel: What do you see as the proper roles for groups outside of government and the market–i.e., in civil society–in responding to AI? How can they help mitigate potential harms and, insofar as it's possible, realize the promise of these new technologies?

Stephanie: Ideally, these technologies end up being created with a much higher degree of diverse participation than they have had so far. That participation could look like any number of things–coming up with ideas for the systems, defining what a future with different features might look like, identifying use cases and implementation types, or spotting flaws that the relatively homogeneous groups who build AI right now wouldn't necessarily see from their own vantage point.

That is not meant as a critique. There's just a wide variety of experiences of people living on the planet, and each of us is privy to only so many of them at once. To have some blind spots is the nature of all of our existences. Broadening participation so we're able to take more of those ideas and viewpoints into account, to identify and resolve potential conflicts we wouldn't otherwise be able to anticipate, is one way to reduce the likelihood of bad outcomes.

Daniel: I appreciate that you are identifying a near term threat that is much more connected with the problems we already face in the world as AI is coming online. It sounds then like a key role for civil society groups is to increase the perspectives, voices, and participation going into how these systems develop.

Stephanie: Yes, and some groups are particularly well suited to this–community-based organizations representing and speaking on behalf of specific groups. A case in point right now, in a story that's getting a lot of media traction, and rightfully so, is the inclusion of AI considerations in the demands of the Writers Guild of America, the screenwriters striking in Hollywood. That’s an instance where you have a representative body that's articulating the concerns of a wide group of people, from screenwriters to showrunners to journeymen. They see what AI might mean for their economic livelihood, their passion, their creativity. The WGA is doing exactly what you'd hope any group of workers would do—using their expertise and experience to identify a middle ground acceptable to workers as well as company leaders. Their goal isn't a ban; it's to find a mutually agreeable solution for incorporating this new set of tools into their workflow.

Daniel: Recognizing it's really early, are there places where you see civil society's response to AI as especially encouraging or promising?

Stephanie: Any place where a group knows and represents the needs or concerns of a given community, as well as the challenges these technologies pose, is a good place to start looking. Some groups are further ahead than others. The problematic use of facial recognition is an area where communities have been actively engaged with the help, for example, of Joy Buolamwini and the Algorithmic Justice League. The ACLU has done some coalition building on this as well.

They have not yet had comprehensive victories. The technologies are still being used. They're still resulting in what I would argue are unjust outcomes. But the mode of engagement and participation has created positive change for those communities.

Ideally, this is the sort of work that would start taking place in a prospective fashion rather than a reactive one. And that, of course, is much harder when people have other demands on their time and may not know that this issue is one that will affect them down the road.

Daniel: Say more about the Partnership on AI and the work you and your colleagues are endeavoring to do. What is the constructive role you are seeking to play?

Stephanie: Tipping my hand a bit, this is why I keep turning to community and multistakeholder solutions. As an organization, we believe that the more people you have around the table who see and know the issues from all different sides, the more likely we are to surface and resolve problems, to find some common ground, and to advance progress in that specific domain.

Let me give you a specific example. I have a colleague who has been working on and has just launched a framework for Responsible Practices for Synthetic Media. It focuses on the use of generative AI to create deepfake images and videos, all the stuff we see as concerning. For media literacy, critical thinking, democracy, you can name a list of concerns here–held not just by civil society advocates like Witness, but by national media organizations like the CBC in Canada or the BBC in the UK, and by major AI companies. They are all trying to figure out what responsible use of these technologies looks like. OpenAI was also really active in this effort. So were Adobe, Bumble, and TikTok. That's quite a cast of influential actors in the space to get around the table, to identify what the issues are, and to figure out what solutions look like, at least in a preliminary fashion (because all of these things are changing pretty rapidly), while leaving the door open for further conversation. That's our model.

The people inside these companies recognized this problem themselves. Our organization was founded by AI companies who saw their own limitations in being able to identify and then address ethical or responsible AI issues within their own walls. They were like, let's create a consortium. Let's get a group of people facing the same challenges together. And let's make sure that we get civil society involved because without them, we're still going to be in our own echo chamber and may not be landing on the right solutions.

Daniel: Talk about the work you are spearheading at the Partnership concerning AI and the pathways and pitfalls for achieving shared prosperity through it.

Stephanie: Well, we are just about to release a set of Guidelines for AI and Shared Prosperity—a comprehensive job impact assessment that covers impacts on labor markets and on the quality of jobs, as well as responsible practices for AI-creating companies and AI-using organizations, and suggested uses for labor organizations and policymakers. There's been a lot of speculation about AI and its effects on jobs. This is our attempt to explicitly identify the labor impacts decision-makers should be tracking and working to prevent, and to start moving the field towards action on those fronts.

In keeping with our model of engaging multiple stakeholders in research-based work, we have developed this project under the guidance of a Steering Committee representing labor and worker organizations, technologists, academia, and civil society. It is also deeply informed by research I’ve done to learn from workers already using AI in their jobs.

Daniel: Where are the biggest gaps you see at present in civil society’s response to AI, either in funding, group activity, or imagination?

Stephanie: Let me start with what I know is a deeply idealistic response. We don't have many technologies of this type coming from communities that are currently marginalized within the broader economy or society. There is no powerful, cutting-edge AI lab “for the people and by the people.” This is understandable. It requires a tremendous amount of money to make artificial intelligence—to build the models, train the models, and then to run the models. It takes huge amounts of computing power. You need very expensive processors and then a whole lot of server power to make this work, which isn't well suited to small community groups or even large ones. The ACLU, for example, could not compete with the funding that any given AI lab has.

The US federal government is starting to work on making an AI cloud available to the academic community. I wonder what that could look like from a civil society perspective. What kinds of AI systems would we come up with when we're free from the shareholder capitalism incentives and instead are thinking about our original question: what creates human flourishing? What are the most pressing challenges? How can we tackle them?

But that's a broader comment about power, which AI just happens to sit inside. To take something that's a little less idealistic, part of changing AI for the better is supporting critics who may take stands that technology companies disagree with. There's a fantastic organization called DAIR run by Timnit Gebru that is doing this, for example. The space needs independent organizations like hers taking on watchdog roles.

In addition to funding critics, we also need more of the sort of work we do at the Partnership on AI–getting diverse sets of stakeholders in the room, creating and facilitating space for people to talk about pressing issues, and then driving towards solutions.

A third need involves returning to the question of what community-driven AI looks like. How can we keep it from worsening inequality? What does it take to create a system where technologists are really focused on those sets of concerns?

There are some interesting models like this out there in the gig economy. There's a group called Driver’s Seat that has built a series of tools for Uber, Lyft, DoorDash, and other drivers to maximize their earnings and reduce some of the exploitative practices those companies deploy to squeeze every drop of time and labor out of the drivers who are independent contractors for them.

There's also work that Unite Here, the hospitality union, has been doing to improve the apps that hotel housekeepers use to organize their work. The apps have become very widespread in the industry and brought some benefits that housekeepers have embraced, but they create other problems when it comes to sequencing work in ways that don’t make sense to experienced housekeepers, who have always faced so much chaos in their work. Unite Here said there has to be a better way, and they are collaborating with a professor at Carnegie Mellon to figure it out.

The common ingredient in these last two examples is a community that's organized and able to identify a pressing need and really knows the ins and outs of it. Why does the system work as it does? What does it mean for them as workers within it? What would be better? And then they join forces with publicly minded or civically minded technologists who have the know-how to turn that set of ideas and aspirations into a reality for them.

Daniel: Is there any AI-related issue right now that in your view is being overhyped, where we may need less funding and civil society activity in response to it?

Stephanie: That would be a good problem to have. My perception is there is a widespread lack of funding, especially when it comes to supporting community groups to engage on AI or develop their own solutions. If philanthropy is looking to support groups on the ground, there's no scarcity of those who are looking for resources.

One place where there may be a bit of a gold rush comes from a very particular worldview, which you referred to earlier as the sci-fi doomsday scenario. These are the existential risk considerations. Are we building something that's ultimately going to end up killing us, and not that long from now? That issue has a lot of funding. There are big question marks around it now, though, as one of the major funders has been Sam Bankman-Fried.

Daniel: Let’s shift out of the nascent AI field and look out across civil society generally, to the funders and nonprofits working to strengthen democracy, to mitigate climate change, or to alleviate poverty, to take a few examples. What should they be thinking about?

Stephanie: Many of those considerations are going to be affected by whatever AI turns out to be and do. Is AI going to disrupt people's wage-earning ability, the availability of jobs, and the quality of those jobs? If you're an organization focused on poverty alleviation, there are structural trends that AI is very much wrapped up in that will cause real issues down the road. Getting involved sooner rather than later, in a way that anticipates and seeks to address those problems before they start, versus having to be a direct service provider once they’ve played out, would be wise.

The same thing holds with energy usage, environmental considerations, and climate change. There are huge amounts of energy required to run these systems. That's a development that environmental groups should be (and are) anticipating and not reacting to.

But there are opportunities as well. We can use these systems to focus on some of humanity's most pressing needs. There have been awesome advances on that front in healthcare in particular. We are just starting to understand how transformative that could be.

So keeping an eye on the trends would be my biggest piece of advice. To the extent you're starting to hear rumblings about how AI is likely to affect this or that down the road, well, down the road is coming much sooner than you think. And it's worth getting involved in those conversations before the effects actually hit, because that's where you have the opportunity to work with companies to try and prevent those harms before they start.

Daniel: That is a good note to end on. Thank you, Stephanie, for these insights and your critical work. Godspeed on everything you are up to!

This piece originally appeared on The Art of Association.
