Something I think just about all of us agree on: we want better AI. Whatever your current perspective on AI, it's hard to deny that, like any other tool, it can advance human flourishing. There's progress to be made with AI that we should all applaud and work to realize as soon as possible.
There are kids in rural communities who stand to benefit from AI tutors. There are visually impaired individuals who can more easily navigate the world with AI wearables. There are folks struggling with mental health issues who lack access to a therapist and need guidance during trying moments. A key barrier to leveraging AI "for good" is our imagination: in many domains, we've become so accustomed to an unacceptable status quo that we forget it's the real point of comparison. The alternative to AI isn't a set of well-functioning systems operating efficiently and effectively for everyone; it's the broken systems we already have.
Yet there's a justifiable sense that AI is falling short of its potential. An understandable response is to oppose further AI development efforts. Perhaps the thinking goes, "Well, if it's this bad after this much money has been spent, then what's the point of burning even more resources on AI?"
But if any one of those prior examples had you nodding along—thinking, "It'd sure be nice to improve education, make daily life more inclusive, or help folks through difficult mental moments"—then you're already part of the coalition to make AI better. Let's make that our shared agenda.
That agenda calls for a few concrete actions. For starters, there's real value in using AI deliberately, with an eye toward evaluating how well it actually solves problems. We're not going to uncover AI's most productive use cases unless folks from a range of backgrounds test it against new and complex problems. If I were a gambling man, I'd wager that the main uses of AI five years from now will be wildly different from those today. That process can be expedited by empowering more people to thoughtfully and loudly experiment with AI. Loudly means that when AI goes well or goes wrong, users share that outcome. And when sharing that outcome, be specific. Vague frustration doesn't move anyone. Name the exact barriers, regulations, and assumptions standing in the way of a more prosperous and just world.
Specific complaints are only useful, though, if they enter a broader conversation, and that requires honesty about how we're actually using these tools. As Ethan Mollick and others have observed, there are "secret cyborgs" out there who are hesitant to admit that they're using AI. That secrecy is a net negative: it hinders the open dialogue necessary for an evidence-driven approach to collectively deciding when and how to use AI. Folks should not be ashamed of using AI but celebrated for testing how a new tool can solve old problems.
Disclosure is the floor. The ceiling is something more ambitious: deciding, together, what good AI use actually looks like. Here's a position too few in the AI debate are taking: we don't need the government to set AI norms, and we shouldn't trust the labs to do it either. We need each other, and we need infrastructure to make that possible.

Think of a searchable, community-built platform, an AI Policy Commons, where anyone can propose an AI usage policy for a specific context: a parent submits a prompt designed to help her kid learn without outsourcing thinking; a faith community posts guidelines for using AI in pastoral care; a teachers' union publishes a vetted set of classroom norms. Others find those policies, test them, and report back. Over time, reputational signals emerge: you can filter by what your church endorses, what a leading education organization has field-tested, or what other parents in similar situations have rated most effective. No state mandate. No gatekeeping by a single private actor. Just distributed experimentation producing something neither a legislature nor a lab could generate: norms with real-world legitimacy. Civil society has always been where culture gets made. The AI Policy Commons is how we do that for this moment.
None of this works, however, without better inputs. Cultural norms shape how we use AI; data determines what AI can do. We will not have better AI until we have better data. Americans should be able to access and direct their own data: donating health data to AI researchers working on new cures, or allowing a child's educational data to go to a startup building tools for kids with similar learning challenges. Data should not be treated as a liability to be minimized. It's the raw material of the AI we actually want, and we can and should work out how to bring that future about.
If you're not satisfied with today's AI, do something about it. The people who will shape what this technology becomes are the ones willing to use it, critique it, and demand the conditions that make it better.
Kevin Frazier is a Senior Fellow at the Abundance Institute and directs the AI Innovation and Law Program at the University of Texas School of Law.