Opinion

The AI Race We Need: For a Better Future, Not Against Another Nation

The concept of AI hovering among the public. (Getty Images, J Studios)

The AI race that warrants the lion’s share of our attention and resources is not the one with China. Both superpowers should stop hurriedly pursuing AI advances for the sake of “beating” the other. We’ve seen such a race before. Both participants lose. The real race is against an unacceptable status quo: declining lifespans, increasing income inequality, intensifying climate chaos, and destabilizing politics. That status quo will drag on, absent the sorts of drastic improvements AI can bring about. AI may not solve those problems, but it may accelerate our ability to improve collective well-being. That’s a race worth winning.

Geopolitical races have long kept the U.S. from realizing a better future sooner. The U.S. squandered scarce resources and diverted talented personnel to close the alleged missile gap with the USSR. President Dwight D. Eisenhower rightfully noted, “Every gun that is made, every warship launched, every rocket fired signifies, in the final sense, a theft from those who hunger and are not fed, those who are cold and are not clothed.” He realized that every race comes at an immense cost. In this case, the country was “spending the sweat of its laborers, the genius of its scientists, the hopes of its children.”


President John F. Kennedy failed to heed the guidance of his predecessor. He initiated yet another geopolitical contest by publicly challenging the USSR to a space race. Privately, he too knew that such a race required substantial trade-offs. Before Sputnik, Kennedy scoffed at spending precious funds on space endeavors. Following the Bay of Pigs Invasion, Kennedy reversed course. In his search for a political win, he found space. The rest, of course, is history. It’s true that the nation’s pursuit of the moon generated significant direct and indirect benefits. What’s unknowable, though, is what benefits could have been realized if Kennedy had pursued his original science agenda: large-scale desalination of seawater. That bold endeavor would also have created spin-off improvements in related fields.

Decades from now, the true “winner” of the AI race will be the country that competes in the only race that really matters—tackling the most pressing economic, social, and political problems. The country that wins that race will have a richer, healthier, and more resilient population. That country will endure when crises unfold. Others will crumble.

AI development and deployment involve finite resources. The chips, energy, and expertise that go into creating leading AI models are in short supply. Chips accumulated by OpenAI, Anthropic, Google, and other massive AI labs to train the next frontier model are chips not being used to serve more socially useful ends. Likewise, an AI expert working on a new AI-driven missile system is an expert not working on how AI can solve problems that have long been put on the back burner in the name of winning the geopolitical race of the moment.

Imagine the good that could come about if, instead of prioritizing the pursuit of an unreachable AI frontier, we turned already impressive models toward the problems that will shape our long-term communal success. Early signs suggest that a pivot to this race would immediately improve the status quo. First, consider the potential for rapid improvements in health brought about by better, more affordable drugs. According to the Boston Consulting Group, AI-based discoveries or designs have spurred 67 clinical trials of new drugs. AstraZeneca reported that AI had cut its drug discovery process from years to months.

Second, consider the possibility of providing every student with personalized tutoring, setting us on a path to again become the most educated and productive workforce in the world. AI programs deployed in Bhutan helped students learn math skills in a fraction of the time compared with classmates who received traditional math instruction. Closer to home, Khanmigo, an AI platform designed by Khan Academy, is giving students personalized lessons in 266 school districts across the United States.

Third, and finally, consider a world in which traffic fatalities were halved thanks to the broader adoption of autonomous vehicles. Autonomous vehicle (AV) companies have leveraged AI to make rapid advances in the ability of their vehicles to drive in all conditions. Further focus on these efforts may finally make AVs the majority of cars on the road and, as a result, save thousands of lives.

To redirect our AI race toward societal benefit, we need concrete policy changes. Federal research funding should prioritize AI applications targeting our most pressing challenges: healthcare access, energy development, and educational opportunities. Complementing this approach, tax incentives could reward companies that deploy AI for measurable social impact rather than pure market dominance. Additionally, public-private partnerships, such as the collaboration between Texas A&M and NVIDIA to build a high-performance supercomputer, could create innovation hubs focused specifically on using AI to solve regional problems, from drought management in the Southwest to infrastructure resilience on the coasts.

The choice before us is clear: we can continue the myopic pursuit of AI superiority for its own sake, or we can choose the wiser race—one toward a more innovative and prosperous future. History will not judge us by which nation first reached some arbitrary artificial intelligence threshold but by how we wielded this transformative technology to solve problems that have plagued humanity for generations. By redirecting our finite resources—chips, energy, and human ingenuity—toward these challenges, we can ensure that the true winners of the AI revolution will be all of us, not merely one flag or another. That is a victory worth pursuing with the full measure of our national commitment and creativity.


Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and author of the Appleseed AI Substack.
