The Kayla Test: Exploring How AI Impacts Everyday Americans

Opinion

The future of AI should be measured by its impact on ordinary Americans—not just tech executives and investors. Exploring AI inequality, labor concerns, and responsible innovation.

We’re failing the Kayla Test and running out of time to pass it. Whether AI goes “well” for the country is not a question anyone in SF or DC can answer. To assess whether AI is truly advancing the interests of Americans, AI stakeholders must engage with more than power users, tokenmaxxers, and Fortune 500 CEOs. A better evaluation is to talk to folks like Kayla, my Lyft driver in Morgantown, WV, and find out what they think about AI. It's a test I stumbled upon while traveling from an AI event at the West Virginia University College of Law to one at Stanford Law.

Kayla asked me what I do for a living. I told her that I’m a law professor focused on AI policy. Those were the last words I said for the remainder of the ride to the airport.


She methodically walked through a long list of reasons why AI was causing her and her loved ones far more trouble than it seemed worth. She talked about data centers and another era of extractive capitalism. She railed against the algorithms that seemed to force her to work longer and harder for a little extra pay. She shared her concerns that her kids lacked teachers with a strong understanding of the latest tech. On the whole, she was anything but positive about AI. Having delivered AI talks in Montana, Oklahoma, Louisiana, Nebraska, Virginia, Alaska, Ohio, Nevada, Tennessee, Massachusetts, Texas, and several other states, I know others would likely respond with a similarly lengthy set of AI grievances.

Her lack of enthusiasm is unsurprising. Rather than a rising tide lifting all boats, AI feels to many Americans a lot more like someone poking holes in their lifeboats. That sinking feeling will continue until policymakers and AI companies start asking and answering the questions that are top of mind for Americans trying to stay afloat.

AI policy conversations often revolve around questions disconnected from how everyday Americans experience this technological wave. On X, you'll find debates about how to define AGI, how to assess if it's been achieved, and how to address the national security risks that may follow. On the Hill, you'll find a few conversations and hearings on the economic and societal instability many Americans already associate with AI. Yet, those efforts rarely result in action—let alone action on the scale and scope that aligns with the urgency demanded by Americans watching their savings sink and the horizon blur.

Our country has a tired habit of asking middle- and working-class Americans to bear the burden of technological progress that is always a few years away. In the interim, their economic security is threatened, their local resources are exploited, and building a good life, and an even better one for their children, feels harder and harder. You can dispute the empirical validity of those feelings, but it's how many Americans understandably think about yet another tech boom. They don't have the time to read up on how AI carries tremendous promise for healthcare, science, education, and entrepreneurialism. Their on-the-ground experience is that AI correlates strongly with a more precarious status quo.

So here's the test: ask someone in a low- to middle-income community outside of the Bay Area and the Beltway what they think about AI. Note that this must be an actual, in-person conversation—not some poll or text exchange. If they have anything positive to say about how AI is directly improving their lives or the well-being of their loved ones and neighbors, then we're headed in the right direction. If, as is the case today, they feel like they're on the wrong end of another lopsided deal, then all those involved in trying to make sure AI goes "well" have a lot of work to do.

I’m one of those people who feels a personal obligation to ensure AI is something other than a boon to VCs and folks who happened to invest in Nvidia on a hunch a few years back. My start in tech policy was working to close the Digital Divide—a divide that’s still prevalent in communities across the nation. We’re at severe risk of once again seeing technology become a tool of division, a cause of inequality, and a source of political strife. That’s why I’m dedicated to taking the Kayla Test seriously—talking more with the folks who aren’t reading this post, who don’t read arXiv in their spare time, and who feel like they’ve repeatedly been asked to support the American Dreams of others.

Passing the Kayla Test isn't complicated. Get out of the hearing rooms. Leave the Signal chats. Drive through towns where the nearest data center is the biggest employer and the nearest AI researcher is a thousand miles away. Listen. Then build policy around their answers, not around the ones that play well in a Senate hearing or a venture pitch. The Kaylas of this country are not asking for much. They want honest work, good schools for their kids, and some say in what gets built around them. If AI can't deliver that, it doesn't matter how impressive the benchmarks get. We will have failed an essential test.


Kevin Frazier is a Senior Fellow at the Abundance Institute and directs the AI Innovation and Law Program at the University of Texas School of Law.


Read More

How the Latest in AI Threatens Democracy

Anthropic’s Mythos AI raises alarms about surveillance, deepfakes, and democracy. Why urgent AI regulation is needed as U.S. policy struggles to keep pace.

On April 24, America got a wake-up call from Anthropic, one of the nation’s leading artificial intelligence companies. It announced a new AI tool, called Mythos, that can identify flaws in computer networks and software systems that, as Politico puts it, “Even the brightest human minds have been unable to identify.”

A machine smarter than the “brightest human minds” sounds like a line from a dystopian science fiction movie. And if that weren’t scary enough, we now have a government populated by people who seem oblivious to the risks AI poses to democracy and humanity itself.

Who’s Responsible When AI Causes Harm?: Unpacking the Federal AI Liability Framework Debate

This nonpartisan policy brief, written by an ACE fellow, is republished by The Fulcrum as part of our partnership with the Alliance for Civic Engagement and our NextGen initiative — elevating student voices, strengthening civic education, and helping readers better understand democracy and public policy.

Key takeaways

  • The U.S. has no national AI liability law. Instead, a patchwork of state laws has emerged, leaving legal protections dependent on where an individual resides.
  • It’s often unclear who is legally responsible when AI causes harm. This gap leaves many people with no clear path to seek help.
  • In March 2026, the White House and Congress introduced major proposals to establish a federal standard, but there is significant disagreement about whether that standard should prioritize protecting innovation or protecting people harmed by AI systems.

Background: A Patchwork of State Laws

Without a national AI law, states have been filling in the gaps on their own. The result is an uneven landscape where a person’s legal protections depend entirely on which state they live in.

America’s Greatest Geopolitical Blind Spot

Explore how China is overtaking the U.S. in the global innovation race, from electric vehicles to advanced research, and why America’s fragmented science policy, talent loss, and weak industrial strategy threaten its technological leadership.

The global hierarchy of innovation is undergoing a structural shift that Washington is dangerously slow to acknowledge. For decades, the prevailing narrative in the United States was that China was merely the "world’s factory"—a nation capable of mass-producing Western designs but inherently lacking the creative spark to invent its own. This assumption has been shattered. Today, Beijing is no longer playing catch-up; in sectors ranging from electric vehicles and next-generation nuclear power to hypersonic missiles, China is setting the pace.

The central challenge is that China has mastered the entire innovation ecosystem, while the United States has allowed its own to fracture. Innovation is not just about a "eureka" moment in a laboratory; it is a relay race that begins with basic scientific research, moves through the training of specialized talent, and ends with the large-scale commercialization of "hard tech." China is currently winning every leg of that race.

The Only Radical Move Forward: Swarm Digital Democracy

A bold critique of modern democracy and rising authoritarian ideas, exploring how AI-powered swarm digital democracy could redefine participation and governance.

We are increasingly told that democracy has failed and that its time has passed. The evidence, we are told, is everywhere: gridlock, captured institutions, performative elections, a public that senses, correctly, that its voice rarely translates into real power. Into this vacuum step dystopic movements like the Dark Enlightenment and harder strains of right-wing populism, offering a stark diagnosis and an even starker cure: abandon the illusion of popular rule and return to forms of authority that are decisive, hierarchical, and unapologetically exclusionary. They present themselves as bold, clear-eyed, rambunctious, alive, and willing to act where others hesitate. And all to save the world from itself.

But this framing depends on a sleight of hand: It assumes that what we have been living under is, in fact, democracy, and that its failures are the failures of democracy itself. That is the first mistake.
