Q&A with Stephanie Bell on the rise of AI and civil society’s response

Getty Images

With the advent of large-scale artificial intelligence systems, the pace of living in our rapidly morphing world is accelerating even more. It is hard to make sense of what AI portends for our lives, our work, our communities, and society writ large. The public discussion of these swirling technological developments focuses on what governments and/or corporations should do (or not) in response. But what about civil society? What role should associations and movements operating outside of the constraints facing the public and private sectors play in guiding AI’s path in constructive directions?

I recently caught up with a social sector leader uniquely positioned to help us wrestle with these questions. Stephanie Bell is Senior Research Scientist at the Partnership on AI. The Partnership is a nonprofit whose mission is “bringing diverse voices together across global sectors, disciplines, and demographics so developments in AI advance positive outcomes for people and society.”


I first got to know Stephanie over a decade ago when she was a precociously adept and fun colleague at The Bridgespan Group. She then went on to study at Oxford on a Rhodes Scholarship. Her doctoral research was grounded in ethnographic work with Equal Education, a student movement based in Khayelitsha, a township outside of Cape Town, South Africa, working to reform the public education system. She focused on the question of “how people combine their everyday expertise with technocratic knowledge to advocate for change in democratic societies.”

Subsequently, Stephanie rose through the ranks at McKinsey & Company. While there–still an anthropologist, albeit in a very different context–she focused on economic development, the future of work, and the challenges people face in preparing for it. Stephanie left McKinsey to join the Partnership on AI in 2020 with the goal of “setting AI on a better path for workers around the world.”

The following conversation has been lightly edited and condensed for clarity.

Daniel: Stephanie, it is so good to be catching up with you on this important topic. If you’ll forgive me, I want to start with a big question: at the highest level, what do you see as the promise of AI–as well as the potential peril it poses for our society?

Stephanie: Let me take it even higher! To me the right question is: how does AI relate to human flourishing, or the destruction thereof? Any new technology should be considered from the perspective of, what does this contribute to humanity, or the planet? If it doesn't contribute, then it's not genuine progress.

There are some visions of the far off future where we end up with Fully Automated Luxury Communism in which we all are free to lead our lives in the way we see fit. We have all of the material conditions and support that make that possible. We've solved all our collective governance problems through a combination of artificial intelligence and I hope some human ingenuity. As a result, we're thriving. Who could argue with that vision?

But there are an awful lot of hurdles we will have to overcome along the way–both existing models of power and then also the types of power that AI, at least in its current form, is particularly disposed to create. On our current path, the advance of AI has a propensity to become extremely inegalitarian, to exclude the folks who are already somewhat marginalized, to concentrate wealth, and with it power, in the hands of a very small number of companies in a very small number of countries run by a very small number of people. And unless we come up with a different political economy, a different way of thinking about wealth and power, we could end up in a much more serious situation with inequality than we currently find ourselves in.

Daniel: So you are flagging the worsening of inequality as the clear and present danger, not the singularity and Skynet doing us in Terminator-style.

Stephanie: Yes, there are ways for these systems to be horrible that may not involve a malicious superhuman intelligence, or even a superhuman intelligence at all. Some of the sci-fi versions suggest we think about how humanity reacts to ants: that's how a future hyper-intelligent AI system will react to humanity. It only needs a careless indifference for things to work out quite badly for us. Even if the system isn't as smart as we worry it may become, if it's being exploited by bad human actors, then that ends badly too.

Or, even if we put everything in a perfect system where AI can't be exploited, and it does everything the developers want it to do, we could still end up in a place where, as a result of human governance and the failures thereof, AI makes our societies radically more unequal than they currently are, economically and culturally.

Daniel: What do you see as the proper roles for groups outside of government and the market–i.e., in civil society–in responding to AI? How can they help mitigate potential harms and, insofar as it's possible, realize the promise of these new technologies?

Stephanie: Ideally, these technologies end up being created with a much higher degree of diverse participation than they have had so far. That participation could look like any number of things–coming up with ideas for the systems, defining what a future with different features might look like, identifying use cases and implementation types, and spotting flaws that the relatively homogeneous groups who build AI right now wouldn't necessarily see from their own vantage point.

That is not meant as a critique. There's just a wide variety of experiences of people living on the planet, and each of us is privy to only so many of them at once. To have some blind spots is the nature of all of our existences. Broadening participation so we're able to take more of those ideas and viewpoints into account, to identify and resolve potential conflicts we wouldn't otherwise be able to anticipate, is one way to reduce the likelihood of bad outcomes.

Daniel: I appreciate that you are identifying a near term threat that is much more connected with the problems we already face in the world as AI is coming online. It sounds then like a key role for civil society groups is to increase the perspectives, voices, and participation going into how these systems develop.

Stephanie: Yes, and some groups are particularly well suited to this–community-based organizations representing and speaking on behalf of specific groups. A case in point right now, in a story that's getting a lot of media traction, and rightfully so, is the inclusion of AI considerations in the demands of the Writers’ Guild, the screenwriters striking in Hollywood. That’s an instance where you have a representative body that's articulating the concerns of a wide group of people, from screenwriters to showrunners to journeymen. They see what AI might mean for their economic livelihood, their passion, their creativity. The WGA is doing exactly what you'd hope any group of workers would do—using their expertise and experience to identify a middle ground acceptable to workers as well as company leaders. Their goal isn't a ban; it's to find a mutually agreeable solution for incorporating this new set of tools into their workflow.

Daniel: Recognizing it's really early, are there places where you see civil society's response to AI as especially encouraging or promising?

Stephanie: Any place where a group knows and represents the needs or concerns of a given community, as well as the challenges these technologies pose for it, is a good place to start looking. Some groups are further ahead than others. The problematic use of facial recognition is an area where communities have been actively engaged with the help, for example, of Joy Buolamwini and the Algorithmic Justice League. The ACLU has done some coalition building on this as well.

They have not yet had comprehensive victories. The technologies are still being used. They're still resulting in what I would argue are unjust outcomes. But the mode of engagement and participation has created positive change for those communities.

Ideally, this is the sort of work that would start taking place in a prospective fashion rather than a reactive one. And that, of course, is much harder when people have other demands on their time and may not know that this issue is one that will affect them down the road.

Daniel: Say more about the Partnership on AI and the work you and your colleagues are endeavoring to do. What is the constructive role you are seeking to play?

Stephanie: Tipping my hand a bit, this is why I keep turning to community and multistakeholder solutions. As an organization, we believe that the more people you have around the table who see and know the issues from all different sides, the more likely we are to surface and resolve problems, to find some common ground, and to advance progress in that specific domain.

Let me give you a specific example. I have a colleague who has been working on and has just launched a framework for Responsible Practices for Synthetic Media. It focuses on the use of generative AI to create deepfake images and videos–all the stuff we see as concerning for media literacy, critical thinking, and democracy; you could name a long list of concerns here. Those concerns are held not just by civil society advocates like Witness, but by national media organizations like the CBC in Canada and the BBC in the UK, and by major AI companies as well. They are all trying to figure out what responsible use of these technologies looks like. OpenAI was also really active in this effort. So were Adobe, Bumble, and TikTok. That's quite a cast of influential actors in the space to get around the table, to identify what the issues are, and to figure out what solutions look like, at least in a preliminary fashion (because all of these things are changing pretty rapidly), while leaving the door open for further conversation. That's our model.

The people inside these companies recognized this problem themselves. Our organization was founded by AI companies who saw their own limitations in being able to identify and then address ethical or responsible AI issues within their own walls. They were like, let's create a consortium. Let's get a group of people facing the same challenges together. And let's make sure that we get civil society involved because without them, we're still going to be in our own echo chamber and may not be landing on the right solutions.

Daniel: Talk about the work you are spearheading at the Partnership concerning AI and the pathways and pitfalls for achieving shared prosperity through it.

Stephanie: Well, we are just about to release a set of Guidelines for AI and Shared Prosperity—a comprehensive job impact assessment that covers impacts on labor markets and on the quality of jobs, as well as responsible practices for AI-creating companies and AI-using organizations, and suggested uses for labor organizations and policymakers. There's been a lot of speculation about AI and its effects on jobs. This is our attempt to explicitly identify the labor impacts decision-makers should be tracking and working to prevent, and to start moving the field towards action on those fronts.

In keeping with our model of engaging multiple stakeholders in research-based work, we have developed this project under the guidance of a Steering Committee representing labor and worker organizations, technologists, academia, and civil society. It is also deeply informed by research I’ve done to learn from workers already using AI in their jobs.

Daniel: Where are the biggest gaps you see at present in civil society’s response to AI, either in funding, group activity, or imagination?

Stephanie: Let me start with what I know is a deeply idealistic response. We don't have many technologies of this type coming from communities that are currently marginalized within the broader economy or society. There is no powerful, cutting-edge AI lab “for the people and by the people.” This is understandable. It requires a tremendous amount of money to make artificial intelligence—to build the models, train the models, and then to run the models. It takes huge amounts of computing power. You need to have very expensive processors and then a whole lot of server power to make this work, which isn't well suited to small community groups or even large community groups. The ACLU, for example, could not compete with the funding that any given AI lab has.

The US federal government is starting to work on making an AI cloud available to the academic community. I wonder what that could look like from a civil society perspective. What kinds of AI systems would we come up with when we're free from the shareholder capitalism incentives and instead are thinking about our original question: what creates human flourishing? What are the most pressing challenges? How can we tackle them?

But that's a broader comment about power, which AI just happens to sit inside. To take something that's a little less idealistic, part of changing AI for the better is supporting critics who may take stands that technology companies disagree with. There's a fantastic organization called DAIR run by Timnit Gebru that is doing this, for example. The space needs independent organizations like hers taking on watchdog roles.

In addition to funding critics, we also need more of the sort of work we do at the Partnership on AI–getting diverse sets of stakeholders in the room, creating and facilitating space for people to talk about pressing issues, and then driving towards solutions.

A third need involves returning to the question of what community-driven AI looks like. How can we keep it from worsening inequality? What does it take to create a system where technologists are really focused on those sets of concerns?

There are some interesting models like this out there in the gig economy. There's a group called Driver’s Seat that has built a series of tools to help Uber, Lyft, DoorDash, and other gig drivers maximize their earnings and push back against some of the exploitative practices those companies deploy to squeeze every drop of time and labor out of the drivers who are independent contractors for them.

There's also work that Unite Here, the hospitality union, has been doing to improve the apps that hotel housekeepers use to organize their work. The apps have become very widespread in the industry and brought some benefits that housekeepers have embraced, but they create other problems when it comes to sequencing work in ways that don’t make sense to experienced housekeepers. They have always faced so much chaos in their work. Unite Here said, there has to be a better way and they are collaborating with a professor at Carnegie Mellon to figure it out.

The common ingredient in these last two examples is a community that's organized and able to identify a pressing need and really knows the ins and outs of it. Why does the system work as it does? What does it mean for them as workers within it? What would be better? And then they join forces with publicly minded or civically minded technologists who have the know-how to turn that set of ideas and aspirations into a reality for them.

Daniel: Is there any AI-related issue right now that in your view is being overhyped, where we may need less funding and civil society activity in response to it?

Stephanie: That would be a good problem to have. My perception is there is a widespread lack of funding, especially when it comes to supporting community groups to engage on AI or develop their own solutions. If philanthropy is looking to support groups on the ground, there's no scarcity of those who are looking for resources.

One place where there may be a bit of a gold rush comes from a very particular worldview, which you referred to earlier as the sci-fi doomsday scenario. These are the existential risk considerations: are we building something that's ultimately going to end up killing us, and not that long from now? That issue has a lot of funding. There are big question marks around it now, though, since one of the major funders has been Sam Bankman-Fried.

Daniel: Let’s shift out of the nascent AI field and look out across civil society generally, to the funders and nonprofits working to strengthen democracy, to mitigate climate change, or to alleviate poverty, to take a few examples. What should they be thinking about?

Stephanie: Many of those considerations are going to be affected by whatever AI turns out to be and do. Is AI going to disrupt people's wage earning ability, the availability of jobs and the quality of those jobs? If you're an organization focused on poverty alleviation, there are structural trends that AI is very much wrapped up in that will cause real issues down the road. Getting involved sooner rather than later in a way that anticipates and seeks to address those problems before they start, versus having to be a direct service provider once they’ve played out, would be wise.

The same thing holds with energy usage, environmental considerations, and climate change. There are huge amounts of energy required to run these systems. That's a development that environmental groups should be (and are) anticipating and not reacting to.

But there are opportunities as well. We can use these systems to focus on some of humanity's most pressing needs. There have been awesome advances on that front in healthcare in particular. We are just starting to understand how transformative that could be.

So keeping an eye on the trends would be my biggest piece of advice. To the extent you're starting to hear rumblings about how AI is likely to affect this or that down the road, well, down the road is coming much sooner than you think. And it's worth getting involved in those conversations before the effects actually hit, because that's where you have the opportunity to work with companies to try and prevent those harms before they start.

Daniel: That is a good note to end on. Thank you, Stephanie, for these insights and your critical work. Godspeed on everything you are up to!

This piece originally appeared on The Art of Association.


Read More


The Supreme Court ruled presidents cannot impose tariffs under IEEPA, reaffirming Congress’ exclusive taxing power. Here’s what remains legal under Sections 122, 232, 301, and 201.

Getty Images, J Studios

Just the Facts: What Presidents Can’t Do on Tariffs Now

The Fulcrum strives to approach news stories with an open mind and skepticism, presenting our readers with a broad spectrum of viewpoints through diligent research and critical thinking. As best we can, we remove personal bias from our reporting and seek a variety of perspectives in both our news gathering and selection of opinion pieces. However, before our readers can analyze varying viewpoints, they must have the facts.


What Is No Longer Legal After the Supreme Court Ruling

  • Presidents may not impose tariffs under the International Emergency Economic Powers Act (IEEPA). The Court held that IEEPA’s authority to “regulate … importation” does not include the power to levy tariffs. Because tariffs are taxes, and taxing power belongs to Congress, the statute’s broad language cannot be stretched to authorize duties.
  • Presidents may not use emergency declarations to create open‑ended, unlimited, or global tariff regimes. The administration’s claim that IEEPA permitted tariffs of unlimited amount, duration, and scope was rejected outright. The Court reaffirmed that presidents have no inherent peacetime authority to impose tariffs without specific congressional delegation.
  • Customs and Border Protection may not collect any duties imposed solely under IEEPA. Any tariff justified only by IEEPA must cease immediately. CBP cannot apply or enforce duties that lack a valid statutory basis.
  • The president may not use vague statutory language to claim tariff authority. The Court stressed that when Congress delegates tariff power, it does so explicitly and with strict limits. Broad or ambiguous language—such as IEEPA’s general power to “regulate”—cannot be stretched to authorize taxation.

What Remains Legal Under the Constitution and Acts of Congress

  • Congress retains exclusive constitutional authority over tariffs. Tariffs are taxes, and the Constitution vests taxing power in Congress. In the same way that only Congress can declare war, only Congress can raise revenue through tariffs. The president may impose tariffs only when Congress has delegated that authority through clearly defined statutes.
  • Section 122 of the Trade Act of 1974 (Balance‑of‑Payments Tariffs). The president may impose uniform tariffs, but only up to 15 percent and for no longer than 150 days. Congress must take action to extend tariffs beyond the 150-day period. These caps are strictly defined. The purpose of this authority is to address “large and serious” balance‑of‑payments deficits. No investigation is mandatory. This is the authority invoked immediately after the ruling.
  • Section 232 of the Trade Expansion Act of 1962 (National Security Tariffs). Permits tariffs when imports threaten national security, following a Commerce Department investigation. Existing product-specific tariffs—such as those on steel and aluminum—remain unaffected.
  • Section 301 of the Trade Act of 1974 (Unfair Trade Practices). Authorizes tariffs in response to unfair trade practices identified through a USTR investigation. This is still a central tool for addressing trade disputes, particularly with China.
  • Section 201 of the Trade Act of 1974 (Safeguard Tariffs). The U.S. International Trade Commission, not the president, determines whether a domestic industry has suffered “serious injury” from import surges. Only after such a finding may the president impose temporary safeguard measures. The Supreme Court ruling did not alter this structure.
  • Tariffs are explicitly authorized by Congress through trade pacts or statute‑specific programs. Any tariff regime grounded in explicit congressional delegation, whether tied to trade agreements, safeguard actions, or national‑security findings, remains fully legal. The ruling affects only IEEPA‑based tariffs.

The Bottom Line

The Supreme Court’s ruling draws a clear constitutional line: Presidents cannot use emergency powers (IEEPA) to impose tariffs, cannot create global tariff systems without Congress, and cannot rely on vague statutory language to justify taxation. They may impose tariffs only under explicit, congressionally delegated statutes—Sections 122, 232, 301, and 201, and other targeted authorities—each with defined limits, procedures, and scope.


Should the U.S. nationalize elections? A constitutional analysis of federalism, the Elections Clause, and the risks of centralized control over voting systems.

Getty Images, SDI Productions

Why Nationalizing Elections Threatens America’s Federalist Design

The Federalism Question: Why Nationalizing Elections Deserves Skepticism

The renewed push to nationalize American elections, presented as a necessary reform to ensure uniformity and fairness, deserves the same skepticism our founders directed toward concentrated federal power. The proposal, though well-intentioned, misunderstands both the constitutional architecture of our republic and the practical wisdom in decentralized governance.

The Constitutional Framework Matters

The Constitution grants states explicit authority over the "Times, Places and Manner" of holding elections, with Congress retaining only the power to "make or alter such Regulations." This was not an oversight by the framers; it was intentional design. The Tenth Amendment reinforces this principle: powers not delegated to the federal government remain with the states and the people. Advocates for nationalization often cite the Elections Clause as justification, but constitutional permission is not constitutional wisdom.


A shrinking deficit doesn’t mean fiscal health. CBO projections show rising debt, Social Security insolvency, and trillions added under the 2025 tax law.

Getty Images, Dmitry Vinogradov

The Deficit Mirage

The False Comfort of a Good Headline

A mirage can look real from a distance. The closer you get, the less substance you find. That is increasingly how Washington talks about the federal deficit.

Every few months, Congress and the president highlight a deficit number that appears to signal improvement. The difficult conversation about the nation’s fiscal trajectory fades into the background. But a shrinking deficit is not necessarily a sign of fiscal health. It measures one year’s gap between revenue and spending. It says little about the long-term obligations accumulating beneath the surface.

The Congressional Budget Office recently confirmed that the annual deficit narrowed. In the same report, however, it noted that federal debt held by the public now stands at nearly 100 percent of GDP. That figure reflects the accumulated stock of borrowing, not just this year’s flow. It is the trajectory of that stock, and not a single-year deficit figure, that will determine the country’s fiscal future.

What the Deficit Doesn’t Show

The deficit is politically attractive because it is simple and headline-friendly. It appears manageable on paper. Both parties have invoked it selectively for decades, celebrating short-term improvements while downplaying long-term drift. But the deeper fiscal story lies elsewhere.

Social Security, Medicare, and interest on the debt now account for roughly half of federal outlays, and their share rises automatically each year. These commitments do not pause for election cycles. They grow with demographics, health costs, and compounding interest.

According to the CBO, those three categories will consume 58 cents of every federal dollar by 2035. Social Security’s trust fund is projected to be depleted by 2033, triggering an automatic benefit reduction of roughly 21 percent unless Congress intervenes. Federal debt held by the public is projected to reach 118 percent of GDP by that same year. A favorable monthly deficit report does not alter any of these structural realities. These projections come from the same nonpartisan budget office lawmakers routinely cite when it supports their position.

Photo by Saad Alfozan on Unsplash

The United States of America — A Nation in a Spin

Where is our nation headed — and why does it feel as if the country is spinning out of control under leaders who cannot, or will not, steady it?

Americans are watching a government that seems to have lost its balance. Decisions shift by the hour, explanations contradict one another, and the nation is left reacting to confusion rather than being guided by clarity. Leadership requires focus, discipline, and the courage to make deliberate, informed decisions — even when they are not politically convenient. Yet what we are witnessing instead is haphazard decision‑making, secrecy, and instability.
