
Preparing for an inevitable AI emergency


Frazier is an assistant professor at the Crump College of Law at St. Thomas University. Starting this summer, he will serve as a Tarbell fellow.

Artificial intelligence is advancing at a speed and in ways that were unanticipated even by the foremost AI experts. Just a few decades ago, AI was largely theoretical, existing primarily in the realms of science fiction and academic research. Today, AI permeates nearly every aspect of our lives, from the algorithms that curate social media feeds to the autonomous systems that drive cars. This rapid advancement, while promising in many respects, also heralds a new era of uncertainty and potential peril.


The pace at which AI technology is evolving outstrips our ability to predict its trajectory. Breakthroughs occur at a staggering rate, often in areas previously deemed infeasible or far-off. For instance, the development of GPT-3, an AI language model capable of producing human-like text, astonished even seasoned AI researchers with its capabilities and the speed at which it surpassed its predecessors. Such rapid advancements suggest that the future of AI holds both immense potential and significant risks.

One of the most pressing concerns is the increased likelihood of emergencies exacerbated by AI. More sophisticated AI could enable more complex and devastating cyberattacks, as malicious actors leverage AI to breach security systems that were previously impenetrable. Similarly, advances in AI-driven biotechnology could lead to the creation of more deadly bioweapons, posing new and unprecedented threats to global security. Moreover, the rapid automation of jobs could lead to widespread unemployment, causing significant social disruption. The displacement of workers by AI could further entrench economic inequality and trigger unrest, as societies struggle to adapt to these changes.

The likelihood of an AI emergency, paired with our poor track record of responding to similar emergencies, is cause for concern. The Covid-19 pandemic starkly exposed deep flaws in our preparedness and response mechanisms, demonstrating how ill-equipped our constitutional order is to handle sudden, large-scale crises. Our fragmented political system, with its layers of bureaucracy and competing jurisdictions, proved unable to respond swiftly and effectively. This deficiency raises serious concerns about our ability to manage future emergencies, particularly those that could be precipitated by AI.

Given the profound uncertainty surrounding when and how an AI accident might occur and the potential damage it could cause, it is imperative that AI companies bear a significant responsibility for helping us prepare for such eventualities. The private sector, which stands to benefit enormously from AI advancements, must also contribute to safeguarding society against the risks these technologies pose. One concrete step that AI companies should take is to establish an emergency fund specifically intended for responding to AI-related accidents.

Such a fund would serve as a financial safety net, providing resources to mitigate the effects of AI emergencies. It could be used to support rapid response efforts, fund research into preventative measures, and assist individuals and communities affected by AI-driven disruptions. By contributing to this fund, AI companies would acknowledge their role in creating technologies that, while beneficial, also carry inherent risks. This approach would not only demonstrate corporate responsibility but also help ensure that society is better prepared to respond to AI-related crises.

The establishment of an emergency fund for AI disasters would require a collaborative effort between the private sector and government. Congress could mandate contributions from AI companies based on their revenue or the scale of their AI operations. This would ensure that the financial burden of preparing for AI emergencies is shared equitably and that sufficient resources are available when needed. To safeguard the proper use of the funds, Congress should establish an independent entity tasked with securing contributions and responding to claims for reimbursement.

In conclusion, the rapid advancement of AI presents both incredible opportunities and significant risks. While we cannot predict exactly how AI will evolve or what specific emergencies it may precipitate, we can take proactive steps to prepare for these eventualities. AI companies, as key stakeholders in the development and deployment of these technologies, must play a central role in this effort. By contributing to an emergency fund for AI disasters, they can help ensure that we are equipped to respond to crises in a legitimate and effective fashion.

AI models are being built. Accidents will come. The question is whether we will be prepared to respond in a legitimate and effective fashion.

Read More

The American Schism in 2025: The New Cultural Revolution


A common point of bewilderment today among many of Trump's "establishment" critics is the all too tepid response to Trump's increasingly brazen shattering of democratic norms. True, he started this during his first term, but in his second, Trump seems to relish the weaponization of his presidency to go after his enemies and to brandish his corrupt dealings, all under the Trump banner (e.g., cryptocurrency ventures, Mideast business dealings, the Boeing 747 gift from Qatar). Not only does Trump conduct himself with impunity, but Fox News and other mainstream media outlets barely cover these actions at all. (And when left-leaning media do, the interest seems to wane quickly.)

Here may be the source of the puzzlement: the left intelligentsia continues to view and characterize MAGA as a political movement, without grasping its transcendence into a new dominant cultural order. MAGA rose as a counter-establishment partisan drive during Trump’s 2016 campaign and subsequent first administration; however, by the 2024 election, it became evident that MAGA was but the eye of a full-fledged cultural shift, in some ways akin to Mao’s Cultural Revolution.

Should States Regulate AI?


WASHINGTON — As House Republicans voted Thursday to pass a 10-year moratorium on state AI regulation, Rep. Jay Obernolte, R-CA, and AI experts said the measure is necessary to ensure U.S. dominance in the industry.

“We want to make sure that AI continues to be led by the United States of America, and we want to make sure that our economy and our society realizes the potential benefits of AI deployment,” Obernolte said.

The AI Race We Need: For a Better Future, Not Against Another Nation


The AI race that warrants the lion's share of our attention and resources is not the one with China. Both superpowers should stop hurriedly pursuing AI advances for the sake of "beating" the other. We've seen such a race before; both participants lose. The real race is against an unacceptable status quo: declining lifespans, increasing income inequality, intensifying climate chaos, and destabilizing politics. That status quo will drag on absent the sorts of drastic improvements AI can bring about. AI may not solve those problems, but it may accelerate our ability to improve collective well-being. That's a race worth winning.

Geopolitical races have long kept the U.S. from realizing a better future sooner. The U.S. squandered scarce resources and diverted talented staff to close the alleged missile gap with the USSR. President Dwight D. Eisenhower rightfully noted, "Every gun that is made, every warship launched, every rocket fired signifies, in the final sense, a theft from those who hunger and are not fed, those who are cold and are not clothed." He realized that every race comes at an immense cost. In this case, the country was "spending the sweat of its laborers, the genius of its scientists, the hopes of its children."


AI Is Here. Our Laws Are Stuck in the Past.

Artificial intelligence (AI) promises a future once confined to science fiction: personalized medicine accounting for your specific condition, accelerated scientific discovery addressing the most difficult challenges, and reimagined public education designed around AI tutors suited to each student's learning style. We see glimpses of this potential on a daily basis. Yet, as AI capabilities surge forward at exponential speed, the laws and regulations meant to guide them remain anchored in the twentieth century (if not the nineteenth or eighteenth!). This isn't just inefficient; it's dangerously reckless.

For too long, our approach to governing new technologies, including AI, has been one of cautious incrementalism—trying to fit revolutionary tools into outdated frameworks. We debate how century-old privacy torts apply to vast AI training datasets, how liability rules designed for factory machines might cover autonomous systems, or how copyright law conceived for human authors handles AI-generated creations. We tinker around the edges, applying digital patches to analog laws.
