Preparing for an inevitable AI emergency


Frazier is an assistant professor at the Crump College of Law at St. Thomas University. Starting this summer, he will serve as a Tarbell fellow.

Artificial intelligence is advancing at a speed and in ways that were unanticipated even by the foremost AI experts. Just a few decades ago, AI was largely theoretical, existing primarily in the realms of science fiction and academic research. Today, AI permeates nearly every aspect of our lives, from the algorithms that curate social media feeds to the autonomous systems that drive cars. This rapid advancement, while promising in many respects, also heralds a new era of uncertainty and potential peril.

The pace at which AI technology is evolving outstrips our ability to predict its trajectory. Breakthroughs occur at a staggering rate, often in areas previously deemed infeasible or far-off. For instance, the development of GPT-3, an AI language model capable of producing human-like text, astonished even seasoned AI researchers with its capabilities and the speed at which it surpassed its predecessors. Such rapid advancements suggest that the future of AI holds both immense potential and significant risks.

One of the most pressing concerns is the increased likelihood of emergencies exacerbated by AI. More sophisticated AI could enable more complex and devastating cyberattacks, as malicious actors leverage AI to breach security systems that were previously impenetrable. Similarly, advances in AI-driven biotechnology could lead to the creation of more deadly bioweapons, posing new and unprecedented threats to global security. Moreover, the rapid automation of jobs could lead to widespread unemployment, causing significant social disruption. The displacement of workers by AI could further entrench economic inequality and trigger unrest, as societies struggle to adapt to these changes.


The likelihood of an AI emergency, paired with our poor track record of responding to similar crises, is cause for concern. The Covid-19 pandemic starkly exposed the inadequacies of our constitutional order in emergency response, revealing deep flaws in our preparedness and response mechanisms and demonstrating how ill-equipped we are to handle sudden, large-scale crises. Our fragmented political system, with its layers of bureaucracy and competing jurisdictions, proved unable to respond swiftly and effectively. This deficiency raises serious doubts about our ability to manage future emergencies, particularly those precipitated by AI.

Given the profound uncertainty surrounding when and how an AI accident might occur and the potential damage it could cause, it is imperative that AI companies bear a significant responsibility for helping us prepare for such eventualities. The private sector, which stands to benefit enormously from AI advancements, must also contribute to safeguarding society against the risks these technologies pose. One concrete step that AI companies should take is to establish an emergency fund specifically intended for responding to AI-related accidents.

Such a fund would serve as a financial safety net, providing resources to mitigate the effects of AI emergencies. It could be used to support rapid response efforts, fund research into preventative measures, and assist individuals and communities affected by AI-driven disruptions. By contributing to this fund, AI companies would acknowledge their role in creating technologies that, while beneficial, also carry inherent risks. This approach would not only demonstrate corporate responsibility but also help ensure that society is better prepared to respond to AI-related crises.

The establishment of an emergency fund for AI disasters would require a collaborative effort between the private sector and government. Congress could mandate contributions from AI companies based on their revenue or the scale of their AI operations. This would ensure that the financial burden of preparing for AI emergencies is shared equitably and that sufficient resources are available when needed. To safeguard the proper use of the funds, Congress should establish an independent entity tasked with securing contributions and responding to claims for reimbursement.

In conclusion, the rapid advancement of AI presents both incredible opportunities and significant risks. While we cannot predict exactly how AI will evolve or what specific emergencies it may precipitate, we can take proactive steps to prepare for these eventualities. AI companies, as key stakeholders in the development and deployment of these technologies, must play a central role in this effort. By contributing to an emergency fund for AI disasters, they can help ensure that we are equipped to respond to crises in a legitimate and effective fashion.

AI models are being built. Accidents will come. The question is whether we will be prepared to respond in a legitimate and effective fashion.
