Preparing for an inevitable AI emergency


Frazier is an assistant professor at the Crump College of Law at St. Thomas University. Starting this summer, he will serve as a Tarbell fellow.

Artificial intelligence is advancing at a speed, and in directions, that even the foremost AI experts did not anticipate. Just a few decades ago, AI was largely theoretical, confined to science fiction and academic research. Today, it permeates nearly every aspect of our lives, from the algorithms that curate social media feeds to the autonomous systems that drive cars. This rapid advancement, while promising in many respects, also heralds a new era of uncertainty and potential peril.

The pace at which AI technology is evolving outstrips our ability to predict its trajectory. Breakthroughs occur at a staggering rate, often in areas previously deemed infeasible or far-off. For instance, the development of GPT-3, an AI language model capable of producing human-like text, astonished even seasoned AI researchers with its capabilities and the speed at which it surpassed its predecessors. Such rapid advancements suggest that the future of AI holds both immense potential and significant risks.

One of the most pressing concerns is the increased likelihood of emergencies exacerbated by AI. More sophisticated AI could enable more complex and devastating cyberattacks, as malicious actors use it to breach security systems that were previously impenetrable. Similarly, advances in AI-driven biotechnology could aid the creation of deadlier bioweapons, posing unprecedented threats to global security. And the rapid automation of jobs could produce widespread unemployment, entrenching economic inequality and triggering unrest as societies struggle to adapt.

The likelihood of an AI emergency, paired with our poor track record of responding to similar emergencies, is cause for concern. The Covid-19 pandemic exposed deep flaws in our constitutional order's preparedness and response mechanisms, demonstrating how ill-equipped we are to handle sudden, large-scale crises. Our fragmented political system, with its layers of bureaucracy and competing jurisdictions, proved unable to respond swiftly and effectively. That deficiency raises serious doubts about our ability to manage future emergencies, particularly those precipitated by AI.

Given the profound uncertainty surrounding when and how an AI accident might occur, and the damage it could cause, AI companies must bear significant responsibility for helping us prepare for such eventualities. The private sector, which stands to benefit enormously from AI advancements, must also contribute to safeguarding society against the risks these technologies pose. One concrete step AI companies should take is to establish an emergency fund specifically intended for responding to AI-related accidents.

Such a fund would serve as a financial safety net, providing resources to mitigate the effects of AI emergencies. It could be used to support rapid response efforts, fund research into preventative measures, and assist individuals and communities affected by AI-driven disruptions. By contributing to this fund, AI companies would acknowledge their role in creating technologies that, while beneficial, also carry inherent risks. This approach would not only demonstrate corporate responsibility but also help ensure that society is better prepared to respond to AI-related crises.

The establishment of an emergency fund for AI disasters would require a collaborative effort between the private sector and government. Congress could mandate contributions from AI companies based on their revenue or the scale of their AI operations. This would ensure that the financial burden of preparing for AI emergencies is shared equitably and that sufficient resources are available when needed. To safeguard the proper use of the funds, Congress should establish an independent entity tasked with securing contributions and responding to claims for reimbursement.
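To make the mechanics concrete, here is a rough sketch of how a revenue-based contribution mandate might be computed. The tiered rates, thresholds, and example figures below are purely hypothetical assumptions for illustration; the article proposes no specific formula.

```python
# Hypothetical sketch of a revenue-based contribution formula for an
# AI emergency fund. All rates and thresholds are invented for
# illustration; no specific numbers have been proposed.

def emergency_fund_contribution(ai_revenue: float) -> float:
    """Return a hypothetical annual contribution (USD) based on AI revenue."""
    # Assumed progressive tiers: higher AI revenue, higher marginal rate.
    tiers = [
        (100_000_000, 0.000),    # first $100M exempt (assumed small-firm carve-out)
        (1_000_000_000, 0.001),  # next $900M at 0.1%
        (float("inf"), 0.002),   # remainder at 0.2%
    ]
    contribution, lower = 0.0, 0.0
    for upper, rate in tiers:
        if ai_revenue > lower:
            # Tax only the slice of revenue that falls inside this tier.
            contribution += (min(ai_revenue, upper) - lower) * rate
        lower = upper
    return contribution

# Example: a firm with $5B in AI revenue would owe
# $900M * 0.001 + $4B * 0.002 = $8.9M per year.
print(emergency_fund_contribution(5_000_000_000))
```

Under these invented parameters, a firm with $5 billion in AI revenue would owe about $8.9 million a year. The harder design questions, such as how to measure "AI revenue" and whether to scale contributions by risk rather than size, would fall to the independent entity described above.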

In conclusion, the rapid advancement of AI presents both incredible opportunities and significant risks. While we cannot predict exactly how AI will evolve or what specific emergencies it may precipitate, we can take proactive steps to prepare for these eventualities. AI companies, as key stakeholders in the development and deployment of these technologies, must play a central role in this effort. By contributing to an emergency fund for AI disasters, they can help ensure that we are equipped to respond to crises in a legitimate and effective fashion.

AI models are being built. Accidents will come. The question is whether we will be prepared to respond in a legitimate and effective fashion.

