Build Better AI

Opinion


AI has the potential to transform education, mental health, and accessibility—but only if society actively shapes its use. Explore how community-driven norms, better data, and open experimentation can unlock better AI.


Something I think just about all of us agree on: we want better AI. Regardless of your current perspective on AI, it's undeniable that, like any other tool, it can advance human flourishing. There's progress to be made with AI that we should all applaud and work to realize as soon as possible.

There are kids in rural communities who stand to benefit from AI tutors. There are visually impaired individuals who can more easily navigate the world with AI wearables. There are folks struggling with mental health issues who lack access to therapists and who need guidance during trying moments. A key barrier to leveraging AI "for good" is our imagination, because in many domains we've become accustomed to an unacceptable status quo. The alternative to AI isn't a set of well-functioning systems efficiently and effectively serving everyone; it's that status quo. That's the real comparison.


Yet there's a justifiable sense that AI is falling short of its potential. An understandable response is to oppose further AI development efforts. Perhaps the thinking goes, "Well, if it's this bad after this much money has been spent, then what's the point of burning even more resources on AI?"

But if any one of those prior examples had you nodding along—thinking, "It'd sure be nice to improve education, make daily life more inclusive, or help folks through difficult mental moments"—then you're already part of the coalition to make AI better. Let's make that our shared agenda.

That agenda calls for a few concrete actions. For starters, there's real value in using AI with an eye toward evaluating its ability to solve problems. We're not going to uncover AI's most productive use cases unless folks from a range of backgrounds test its application to new and complex problems. If I were a gambling man, I'd wager that the main uses of AI five years from now will be wildly different from those today. That process can be expedited by empowering more people to thoughtfully and loudly experiment with AI. "Loudly" means that when AI goes well or goes wrong, users share that outcome. And when sharing that outcome, be specific. Vague frustration doesn't move anyone. Name the exact barriers, regulations, and assumptions standing in the way of a more prosperous and just world.

Specific complaints are only useful, though, if they enter a broader conversation—and that requires honesty about how we're actually using these tools. As Ethan Mollick and others have observed, there are "secret cyborgs" out there who are hesitant to share the fact that they're using AI. This is a net negative behavior. It hinders the open dialogue necessary for an evidence-driven approach to collectively deciding when and how to use AI. Folks should not be ashamed of using AI but rather celebrated for testing how a new tool can solve old problems.

Disclosure is the floor. The ceiling is something more ambitious: deciding, together, what good AI use actually looks like. Here's what nobody in the AI debate is saying: we don't need the government to set AI norms, and we shouldn't trust the labs to do it either. We need each other—and we need infrastructure to make that possible.

Think of a searchable, community-built platform—an AI Policy Commons—where anyone can propose an AI usage policy for a specific context: a parent submits a prompt designed to help her kid learn without outsourcing thinking; a faith community posts guidelines for using AI in pastoral care; a teachers' union publishes a vetted set of classroom norms. Others find those policies, test them, and report back. Over time, reputational signals emerge—you can filter by what your church endorses, what a leading education organization has field-tested, or what other parents in similar situations have rated most effective.

No state-based mandate. No excessive control by a private actor. Just distributed experimentation producing something neither a legislature nor a lab could generate: norms with real-world legitimacy. Civil society has always been where culture gets made. The AI Policy Commons is how we do that for this moment.
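To make the Commons concrete, here is a minimal sketch of how a policy entry and its reputational signals might be structured as data. Every name and field below is hypothetical; no such platform exists, and this is just one plausible shape for the loop described above.

```typescript
// Hypothetical data model for an AI Policy Commons entry.
// Nothing here reflects an existing platform; it is one way
// the ideas above (contexts, endorsements, field reports) could be structured.

type Context = "parenting" | "classroom" | "pastoral-care" | "workplace";

interface FieldReport {
  reporter: string;        // who tested the policy
  outcome: "worked" | "mixed" | "failed";
  notes: string;           // the specific barriers or wins, not vague frustration
  reportedAt: Date;
}

interface UsagePolicy {
  id: string;
  title: string;           // e.g., "Homework prompts that don't outsource thinking"
  context: Context;        // where this policy is meant to apply
  body: string;            // the policy text or prompt itself
  author: string;          // a parent, teachers' union, faith community, etc.
  endorsements: string[];  // organizations that have vetted or field-tested it
  reports: FieldReport[];  // what happened when others tried it
}

// Share of field reports in which the policy actually worked.
function successRate(p: UsagePolicy): number {
  if (p.reports.length === 0) return 0;
  const wins = p.reports.filter((r) => r.outcome === "worked").length;
  return wins / p.reports.length;
}

// Reputational signal: keep only policies carrying an endorsement
// the user trusts, then rank by how often field reports say they worked.
function rankByTrust(policies: UsagePolicy[], trusted: string): UsagePolicy[] {
  return policies
    .filter((p) => p.endorsements.includes(trusted))
    .sort((a, b) => successRate(b) - successRate(a));
}
```

The point of the sketch is the feedback loop: specific field reports feed reputational filters, which is exactly the kind of "loud," specific experimentation described earlier.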

None of this works, however, without better inputs. Cultural norms shape how we use AI; data determines what AI can do. We will not have better AI until we have better data. Americans should be able to access and direct their data—donating health data to AI researchers working on new cures, allowing a child's educational data to go to a startup working on tools for kids with similar learning challenges. Data should not be regarded as a liability to be minimized. It's the raw material of the AI we actually want. We can and should examine how to bring about that future.

If you're not satisfied with today's AI, do something about it. The people who will shape what this technology becomes are the ones willing to use it, critique it, and demand the conditions that make it better.


Kevin Frazier is a Senior Fellow at the Abundance Institute and directs the AI Innovation and Law Program at the University of Texas School of Law.

