
Opinion

Ten Things the Future Will Say We Got Wrong About AI


As we look back on 1776 after this July 4th holiday, it's a good opportunity to skip forward and predict what our descendants will think of us. When they assess our policies, ideas, and culture, what will they see? What errors, born of myopia, inertia, or misplaced priorities, will they lay at our feet regarding today's revolutionary technology—artificial intelligence? From their vantage point, with AI's potential and perils laid bare, they will likely conclude that we got at least ten things wrong.

One glaring failure will be our delay in embracing obviously superior AI-driven technologies like autonomous vehicles (AVs). Despite the clear safety benefits—tens of thousands of lives saved annually, reduced congestion, enhanced accessibility—we allowed a patchwork of outdated regulations, public apprehension, and corporate squabbling to keep these life-saving machines largely off our roads. The future will see our hesitation as a moral and economic misstep, favoring human error over demonstrated algorithmic superiority.


They will also criticize our stubborn refusal to integrate AI-based policy forecasting into our legislative processes. While AI models could have analyzed the long-term societal and economic impacts of proposed laws, helping us anticipate unintended consequences and optimize for human flourishing, we largely relied on antiquated, human-limited methods. This neglect meant our policies often lagged behind technological change, undermining the very notion of effective, responsive governance.

Crucially, they will likely question our failure to establish new intellectual property frameworks even after it became evident that existing copyright and patent laws disproportionately favored incumbents and no longer served their intended purpose in the age of AI. Our delay reinforced monopolies rather than fostering a vibrant, decentralized ecosystem of innovation that truly benefited independent creators and inventors.

The future will equally lament our oversight in adjusting our schools and workforce development programs. They will see our delay in instituting widespread AI literacy for the general public as a critical blunder. We did not take the requisite steps to equip citizens with the fundamental understanding needed to navigate an AI-saturated world—to ensure they had access to the latest tools, could discern AI-generated misinformation, and understood the foundational technical aspects of AI well enough to contribute to AI policy conversations. This lapse compromised our collective pursuit of an informed, participatory democracy. Compounding this, our sluggishness in adjusting reskilling and upskilling programs left vast segments of the workforce vulnerable to displacement, rather than proactively empowering them with the skills to thrive alongside AI.

Perhaps more fundamentally, they will indict our failure to see data sharing as a social good. In an era where data is the new oil (or even the new water!), we allowed its collection and control to remain highly fragmented and proprietary. We did not establish robust, ethical frameworks for data cooperatives or public data trusts that could have fueled innovation for the common good—in healthcare, urban planning, and scientific research.

From an innovation perspective, the future will see our lack of sufficient investment in basic AI research as a monumental strategic error. Our focus skewed heavily towards optimizing existing models rather than dedicating resources to the more fundamental inquiries that could uncover the next generation of transformative AI systems. This shortsightedness potentially limited humanity's long-term scientific and technological trajectory. The misallocation of resources will be underscored by our prioritization of Artificial General Intelligence (AGI) over the development and deployment of robust, beneficial generic AI applications. The speculative pursuit of an ill-defined goal often overshadowed the immense, tangible benefits that could have been realized through focused development of practical, specialized AI solutions for pressing societal problems.

Finally, our descendants will not forgive our inadequate investment in public digital infrastructure and universal access. As AI became a foundational layer for economic opportunity and civic life, we allowed a significant digital divide—now an algorithmic abyss—to persist, denying equitable access to the very tools needed to participate in the new economy. From places like New Braunfels, Texas, to rural Virginia, the future will look at our massive, energy-hungry data centers and transmission lines and ask why we failed to adequately support the communities disrupted by the immense physical requirements of AI development. These energy-intensive facilities placed environmental and social burdens on local populations without sharing the AI ecosystem's benefits with them.

As things stand, the ledger of future complaints against us concerning AI will be long. But this prophecy need not be our destiny. By confronting these potential failures now, by prioritizing sustained innovation and adaptive governance, we can still pivot towards a future where AI serves humanity's highest aspirations. The time for foresight and courageous action is now, before the future passes its final judgment.

Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and author of the Appleseed AI Substack.
