Kevin Frazier is an Assistant Professor at the Crump College of Law at St. Thomas University. He previously clerked for the Montana Supreme Court.
“Smart” people know all the answers, right? That may have been true decades and centuries ago, when the world was less complex. Today, it’s the opposite: certainty about any one thing is a sign of ignorance of many more. The smartest person these days knows that the odds of things remaining fixed and known diminish with every new AI model, each trade deal, and every increase in interdependence among people, nations, and ideas.
The old definition of “smart” has worked its way into every facet of our culture. From pre-k to Jeopardy, we reward the kid or Ken who can produce the “right” answer. This sort of knowledge reflected the time a person spent studying the tools at our disposal: novels, textbooks, and other sources of information that remained—more or less—unchanged.
New educational tools, however, can render the whiz kids of one era the fools of another. Knowing how to use an abacus, for instance, once marked a person as intelligent. Now it’s mainly a sign of someone with spare time on their hands. The people and communities that embrace new tools have the best odds of leading the future and avoiding the turbulence of an ever more complex world.
ChatGPT ushered in a new set of tools that require us to redefine “smart” to center on “curiosity” rather than “certainty.” As with any change, this one will induce pushback from those who benefit from the earlier set of tools and from certain ideas being regarded as fixed and frozen. Yet, just as water works its way through any rock, tools that expand access to knowledge eventually grind down (or simply outlast) their opponents.
The fundamentals of using AI tools should not be left to chance. “AI literacy” should be a “thing.” In other words, every American should have access to AI tools and develop the understanding necessary to use them productively. A key part of that literacy must include an appreciation of the limits of AI tools. If folks don’t learn those limits, AI may foster a certainty mindset rather than one grounded in curiosity.
A lawyer who lacked AI literacy recently made this clear: he assumed the AI tool was more accurate than it actually was, used it to answer a question rather than to help ask better ones, and failed to do background research on the tool’s limitations. This misuse goes to show that even highly educated professionals are ill-equipped to use tools they don’t understand. No one’s an expert in the unfamiliar and unknown.
AI literacy efforts should complement and augment related drives to increase “traditional” literacy as well as digital literacy. Those latter efforts have languished even as they have become all the more important in a world defined by content. Without knowing how to read and write, how to use the Internet safely and smartly, and, now, whether and when to employ AI tools, folks will fall behind in the labor market, in the classroom, and in their ability to advocate for themselves and the causes they support. Progress in any one of these literacies should further progress in the others.
A major step toward AI literacy is possible sooner rather than later: AI developers should produce guidelines on how to use their products that are readily understood by people with varying degrees of “traditional” and digital literacy. Ideally, these guidelines would be translated into a multitude of languages and perhaps accompanied by visual explanations.
Unleashing our collective curiosity could reshape how we work, govern, and build community. A first step toward that lofty goal is steering our social institutions and norms away from a “certainty” mindset. A second step is equipping people with the various types of literacy required to ask big questions and act on new information. AI won’t wait for us to catch up. Let’s not fall behind. Now’s the time to define and develop AI literacy initiatives.