Kevin Frazier is an Assistant Professor at the Crump College of Law at St. Thomas University. He previously clerked for the Montana Supreme Court.
Who decided the world should be disrupted by AI? Do you recall receiving a voter pamphlet on the pros and cons of AI development and deployment? Was I the only one who missed election day?
The truth of the matter is that the most impactful decisions about AI are being made by a few people with little to no input from the rest of us. That's a recipe for unrest if I've ever heard one.
A couple dozen AI researchers think there's a chance that AI could lead to unprecedented human flourishing. So they have taken it upon themselves to develop ever more advanced AI models, even as they freely admit that they have increasingly limited control over the technology itself and its potential side effects.
Is it any surprise that more than a few folks feel disenchanted with a governing system that purports to give power to the people but, in practice, empowers computer scientists to more or less unilaterally throw society into a potential doom loop?
It's as if we were asked what we wanted for dinner, answered, "Thai," and then were told we could choose between pepperoni and Canadian bacon. That's not a choice. That's not power. That's democratic gaslighting.
A functioning democracy should not leave decisions that may create irreversible harm for generations to a room of computer scientists.
Having allowed a small set of AI labs to introduce humankind-altering technology with no input from you and me, our elected officials are now asking those same unrepresentative and unelected tech leaders for advice on how best to regulate the emerging technology.
News from D.C. last week included headline after headline about Senator X consulting with tech leader Y. Missing from those headlines and, more importantly, from those meetings: representatives of the communities, foreign and domestic, that will bear the brunt of the good, the bad, and the ugly generated by AI.
It's worth noting, again, that some of us, perhaps many of us, think AI should not have been introduced at this point, or at least not at this scale.
If you’re still with me and still agree with me, you might be lamenting that it’s already too late. We’re at the “pepperoni or Canadian bacon” stage of this decision-making process, so whatever influence we wield now over the development of AI will have an insignificant impact on its long-term trajectory. Worse, there’s a chance that if we succeed in halting the deployment of AI models, China or [fill in the blank “bad guy” country] will simply keep advancing its own models and eventually use them against us in some war or economic contest.
Such arguments are flimsier than cheese-filled crust. I’d rather live in a U.S. that has strong communities where people perform meaningful work, still use their critical thinking skills, and trust their social institutions than a U.S. that leads the world in AI.
In fact, I’d bet on that version of the U.S. to outlast and outcompete any other country that thinks technology is the key to human flourishing.
We need to shift the narrative from “how do we shape the development of AI?” to “when and under what conditions should we permit limited uses of AI?” In the interim, it’s fine for our officials to consult AI experts and leaders, but voters, not tech CEOs, should be the ones determining when and how AI changes our society.