We’re failing the Kayla Test and running out of time to pass it. Whether AI goes “well” for the country is not a question anyone in SF or DC can answer. To assess whether AI is truly advancing the interests of Americans, AI stakeholders must engage with more than power users, tokenmaxxers, and Fortune 500 CEOs. A better evaluation is to talk to folks like Kayla, my Lyft driver in Morgantown, WV, and find out what they think about AI. It's a test I stumbled upon while traveling from an AI event at the West Virginia University College of Law to one at Stanford Law.
Kayla asked me what I do for a living. I told her that I’m a law professor focused on AI policy. Those were the last words I said for the remainder of the ride to the airport.
She methodically walked through a long list of reasons why AI was causing her and her loved ones far more trouble than it seemed worth. She talked about data centers and another era of extractive capitalism. She railed against the algorithms that seemed to force her to work longer and harder for a little extra pay. She shared her concerns that her kids lacked teachers with a strong understanding of the latest tech. On the whole, she was anything but positive about AI. Having delivered AI talks in Montana, Oklahoma, Louisiana, Nebraska, Virginia, Alaska, Ohio, Nevada, Tennessee, Massachusetts, Texas, and several other states, I know others would likely respond with a similarly lengthy set of AI grievances.
Her lack of enthusiasm is unsurprising. Rather than a rising tide lifting all boats, AI feels to many Americans more like something poking holes in their lifeboats. That sinking feeling will continue until policymakers and AI companies start asking and answering the questions that are top of mind for Americans trying to stay afloat.
AI policy conversations often revolve around questions disconnected from how everyday Americans experience this technological wave. On X, you'll find debates about how to define AGI, how to assess if it's been achieved, and how to address the national security risks that may follow. On the Hill, you'll find a few conversations and hearings on the economic and societal instability many Americans already associate with AI. Yet, those efforts rarely result in action—let alone action on the scale and scope that aligns with the urgency demanded by Americans watching their savings sink and the horizon blur.
Our country has a tired habit of asking middle- and working-class Americans to bear the burden of technological progress that is always a few years away. In the interim, their economic security is threatened, their local resources are exploited, and a good life—let alone a better one for their children—feels harder and harder to reach. You can disagree with the empirical validity of those feelings, but ultimately, it's how many Americans understandably think about yet another tech boom era. They don’t have the time to read up on how AI carries tremendous promise for healthcare, science, education, and entrepreneurialism. Their on-the-ground experience is that AI is closely correlated with a more precarious status quo.
So here's the test: ask someone in a low- to middle-income community outside of the Bay Area and the Beltway what they think about AI. Note that this must be an actual, in-person conversation—not some poll or text exchange. If they have anything positive to say about how AI is directly improving their lives or the well-being of their loved ones and neighbors, then we're headed in the right direction. If, as is the case today, they feel like they're on the wrong end of another lopsided deal, then all those involved in trying to make sure AI goes "well" have a lot of work to do.
I’m one of those people who feels a personal obligation to ensure AI is something other than a boon to VCs and folks who happened to invest in Nvidia on a hunch a few years back. My start in tech policy was working to close the Digital Divide—a divide that’s still prevalent in communities across the nation. We’re at severe risk of once again seeing technology become a tool of division, a cause of inequality, and a source of political strife. That’s why I’m dedicated to taking the Kayla Test seriously—talking more with the folks who aren’t reading this post, who don’t read arXiv in their spare time, and who feel like they’ve repeatedly been asked to support the American Dreams of others.
Passing the Kayla Test isn't complicated. Get out of the hearing rooms. Leave the Signal chats. Drive through towns where the nearest data center is the biggest employer and the nearest AI researcher is a thousand miles away. Listen. Then build policy around their answers, not around the ones that play well in a Senate hearing or a venture pitch. The Kaylas of this country are not asking for much. They want honest work, good schools for their kids, and some say in what gets built around them. If AI can't deliver that, it doesn't matter how impressive the benchmarks get. We will have failed an essential test.
Kevin Frazier is a Senior Fellow at the Abundance Institute and directs the AI Innovation and Law Program at the University of Texas School of Law.