Build Better AI

Opinion

AI has the potential to transform education, mental health, and accessibility—but only if society actively shapes its use. Explore how community-driven norms, better data, and open experimentation can unlock better AI.

Something I think just about all of us agree on: we want better AI. Regardless of your current perspective on AI, it's undeniable that, like any other tool, it can unleash human flourishing. There's progress to be made with AI that we should all applaud and work to realize as soon as possible.

There are kids in rural communities who stand to benefit from AI tutors. There are visually impaired individuals who can more easily navigate the world with AI wearables. There are folks struggling with mental health issues who lack access to therapists and need guidance during trying moments. A key barrier to leveraging AI "for good" is our imagination—because in many domains, we've become accustomed to an unacceptable status quo. That's the real comparison: the alternative to AI isn't a world of well-functioning systems already operating efficiently and effectively for everyone; it's that unacceptable status quo.


Yet there's a justifiable sense that AI is falling short of its potential. An understandable response is to oppose further AI development efforts. Perhaps the thinking goes, "Well, if it's this bad after this much money has been spent, then what's the point of burning even more resources on AI?"

But if any one of those prior examples had you nodding along—thinking, "It'd sure be nice to improve education, make daily life more inclusive, or help folks through difficult mental moments"—then you're already part of the coalition to make AI better. Let's make that our shared agenda.

That agenda calls for a few concrete actions. For starters, there's real value in using AI with an eye toward evaluating how well it actually solves problems. We're not going to uncover AI's most productive use cases unless folks from a range of backgrounds test its application to new and complex problems. If I were a gambling man, I'd wager that the main uses of AI five years from now will be wildly different from those today. That process can be expedited by empowering more people to thoughtfully and loudly experiment with AI. "Loudly" means that when AI goes well or goes wrong, users share that outcome. And when sharing that outcome, be specific. Vague frustration doesn't move anyone. Name the exact barriers, regulations, and assumptions standing in the way of a more prosperous and just world.

Specific complaints are only useful, though, if they enter a broader conversation—and that requires honesty about how we're actually using these tools. As Ethan Mollick and others have observed, there are "secret cyborgs" out there who are hesitant to share the fact that they're using AI. This is a net negative behavior. It hinders the open dialogue necessary for an evidence-driven approach to collectively deciding when and how to use AI. Folks should not be ashamed of using AI but rather celebrated for testing how a new tool can solve old problems.

Disclosure is the floor. The ceiling is something more ambitious: deciding, together, what good AI use actually looks like. Here's what nobody in the AI debate is saying: we don't need the government to set AI norms, and we shouldn't trust the labs to do it either. We need each other—and we need infrastructure to make that possible. Think of a searchable, community-built platform—an AI Policy Commons—where anyone can propose an AI usage policy for a specific context: a parent submits a prompt designed to help her kid learn without outsourcing thinking; a faith community posts guidelines for using AI in pastoral care; a teachers' union publishes a vetted set of classroom norms. Others find those policies, test them, and report back. Over time, reputational signals emerge—you can filter by what your church endorses, what a leading education organization has field-tested, or what other parents in similar situations have rated most effective.

No state-based mandate. No excessive control by a private actor. Just distributed experimentation producing something neither a legislature nor a lab could generate: norms with real-world legitimacy. Civil society has always been where culture gets made. The AI Policy Commons is how we do that for this moment.
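
To make that concrete, here is a minimal sketch, assuming a hypothetical data model for such a commons; every name in it (Policy, FieldReport, endorsement_score, find_policies) is an illustrative assumption, not a description of any existing platform.

```python
# A minimal sketch of the data a hypothetical "AI Policy Commons" might track:
# policies proposed for a context, field reports from people who tested them,
# and the reputational signals readers could filter on. Illustrative only.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class FieldReport:
    """One community member's report after testing a policy in practice."""
    tester: str   # e.g., "parent", "teacher", "pastor"
    worked: bool  # did the policy hold up in real use?
    notes: str    # the specific barriers or wins they observed


@dataclass
class Policy:
    """A proposed AI usage policy for a specific context."""
    title: str
    context: str                  # e.g., "classroom", "pastoral care"
    author: str
    guidelines: list[str]
    endorsers: set[str] = field(default_factory=set)
    reports: list[FieldReport] = field(default_factory=list)

    def endorsement_score(self) -> float:
        """Crude reputational signal: the share of field tests that succeeded."""
        if not self.reports:
            return 0.0
        return sum(r.worked for r in self.reports) / len(self.reports)


def find_policies(policies: list[Policy], context: str,
                  endorsed_by: str | None = None) -> list[Policy]:
    """Filter the commons the way the essay describes: by context, then by endorser."""
    matches = [p for p in policies if p.context == context]
    if endorsed_by is not None:
        matches = [p for p in matches if endorsed_by in p.endorsers]
    return sorted(matches, key=lambda p: p.endorsement_score(), reverse=True)


if __name__ == "__main__":
    classroom_norms = Policy(
        title="Vetted classroom norms for AI use",
        context="classroom",
        author="a teachers' union",
        guidelines=["Disclose AI assistance", "Never outsource first drafts"],
        endorsers={"teachers' union"},
        reports=[FieldReport("teacher", True, "Students disclosed use unprompted")],
    )
    print(find_policies([classroom_norms], "classroom", endorsed_by="teachers' union"))
```

The design choice the sketch highlights is that legitimacy accrues from named endorsers and accumulated field reports rather than from any central authority.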

None of this works, however, without better inputs. Cultural norms shape how we use AI; data determines what AI can do. We will not have better AI until we have better data. Americans should be able to access and direct their data—donating health data to AI researchers working on new cures, allowing a child's educational data to go to a startup working on tools for kids with similar learning challenges. Data should not be regarded as a liability to be minimized. It's the raw material of the AI we actually want. We can and should examine how to bring about that future.

If you're not satisfied with today's AI, do something about it. The people who will shape what this technology becomes are the ones willing to use it, critique it, and demand the conditions that make it better.


Kevin Frazier is a Senior Fellow at the Abundance Institute and directs the AI Innovation and Law Program at the University of Texas School of Law.


Read More

xAI Pushes Free Speech Theory Into New AI Lawsuits

Elon Musk's xAI company is challenging AI regulations in Colorado after losing in California, arguing that limits on artificial intelligence violate free speech. As Connecticut enforces its own AI law, this case could shape the future of AI regulation, corporate accountability, and constitutional rights in the United States.

Elon Musk's AI company, xAI, is on a legal road trip. After losing in California, it filed suit in Colorado asking a court to declare the state's artificial intelligence regulations unconstitutional. The argument is essentially the same one that already failed. Meet the new boss. Same as the old boss.

For Connecticut residents, this is not just the next state in the alphabet that has passed AI legislation. Connecticut was one of the first states in the nation to adopt an AI law, requiring companies to disclose when AI is being used in critical decisions like employment, housing, credit, or healthcare. That law is already drawing scrutiny from the technology industry. What xAI tried to do in California and now in Colorado is a preview of what we may face in Connecticut.

Americans Are Doomscrolling Their Way to the Ballot Box and Only Getting Empty Promises

As the 2026 election approaches, doomscrolling and social media are shaping voter behavior through fear and anxiety. Learn how digital news consumption influences political decisions—and how to break the cycle for more informed voting.

As the spring primary cycle ramps up, voters are deciding which candidates will advance to the November general election, but too much doomscrolling on social media is leading to uninformed — and often anxiety-based — voting. Even though online platforms and politicians may be preying on our exhaustion to further their agendas, we don't have to fall for it this election cycle.

Doomscrolling is, unfortunately, part of daily life for many of us. It involves consuming a virtually endless stream of negative social media posts and news content, causing us to feel scared and depressed. Our brains have a hardwired negativity bias that causes us to notice potential threats and focus on them. This is exacerbated by the fact that people who closely follow or participate in politics are more likely to doomscroll.

AI Has Put Humanity on the Ballot

An urgent look at the risks of unregulated artificial intelligence—from job loss and environmental strain to national security threats—and the growing political battle to regulate AI in the United States.

AI may not be the only existential threat out there, but it is coming for us the fastest. When I started law school in 2022, AI could barely handle basic math, but by graduation, it could pass the bar exam. Instead of taking the bar myself, I rolled immediately into a Master of Laws in Global Business Law at Columbia, where I took classes like Regulation of the Digital Economy and Applied AI in Legal Practice. By the end of the program, managing partners were comparing using AI to working with a team of associates; the CEO of Anthropic is now warning that it will be more capable than everyone in less than two years.

AI is dangerous in ways we are just beginning to see. Data centers that power AI require vast amounts of water to keep the servers cool, but two-thirds are in places already facing high water stress, with researchers estimating that water needs could grow from 60 billion liters in 2022 to as high as 275 billion liters by 2028. By then, data centers’ share of U.S. electricity consumption could nearly triple.

Deepfakes: The New Face of Cyberbullying and Why Parents, Schools, and Lawmakers Must Act

A lawsuit against xAI over AI-generated deepfakes targeting teenage girls exposes a growing crisis in schools. As laws struggle to keep up, this story explores AI accountability, teen safety, and what educators and parents must do now.

As a former teacher who worked in a high school when Snapchat was born, I witnessed the birth of sexting and its impact on teens. I recall asking a parent whether he was checking his daughter’s phone for inappropriate messages. His response was, “sometimes you just don’t want to know.” But the federal lawsuit filed last week against Elon Musk's xAI has put a national spotlight on AI-generated deepfakes and the teenage girls they target. Parents and teachers can’t ignore the crisis inside our schools.

AI Companies Built the Tool. The Grok Lawsuit Says They Own the Damage.

Whether or not French prosecutors are right that Elon Musk deliberately allowed the sexualized image controversy to grow in order to drive up activity on the platform and boost the company’s valuation, the broader point stands: when a company builds a tool, knows it can be weaponized, and chooses to release it anyway, it is making a risk-based decision in the belief that it can act without consequence. The Grok lawsuit could make these types of business decisions much more costly.