Preparing for an inevitable AI emergency

Microchip labeled "AI"
Eugene Mymrin/Getty Images

Frazier is an assistant professor at the Crump College of Law at St. Thomas University. Starting this summer, he will serve as a Tarbell fellow.

Artificial intelligence is advancing at a speed and in ways that were unanticipated even by the foremost AI experts. Just a few decades ago, AI was largely theoretical, existing primarily in the realms of science fiction and academic research. Today, AI permeates nearly every aspect of our lives, from the algorithms that curate social media feeds to the autonomous systems that drive cars. This rapid advancement, while promising in many respects, also heralds a new era of uncertainty and potential peril.


The pace at which AI technology is evolving outstrips our ability to predict its trajectory. Breakthroughs occur at a staggering rate, often in areas previously deemed infeasible or far-off. For instance, the development of GPT-3, an AI language model capable of producing human-like text, astonished even seasoned AI researchers with its capabilities and the speed at which it surpassed its predecessors. Such rapid advancements suggest that the future of AI holds both immense potential and significant risks.

One of the most pressing concerns is the increased likelihood of emergencies exacerbated by AI. More sophisticated AI could enable more complex and devastating cyberattacks, as malicious actors use AI to breach security systems previously thought impenetrable. Similarly, advances in AI-driven biotechnology could enable deadlier bioweapons, posing unprecedented threats to global security. Moreover, the rapid automation of jobs could lead to widespread unemployment and significant social disruption. The displacement of workers by AI could further entrench economic inequality and trigger unrest as societies struggle to adapt to these changes.

The likelihood of an AI emergency, paired with our poor track record of responding to similar emergencies, is cause for concern. The Covid-19 pandemic exposed deep flaws in our constitutional order's capacity for emergency response, demonstrating how ill-equipped we are to handle sudden, large-scale crises. Our fragmented political system, with its layers of bureaucracy and competing jurisdictions, proved unable to respond swiftly and effectively. That deficiency raises serious concerns about our ability to manage future emergencies, particularly those precipitated by AI.

Given the profound uncertainty surrounding when and how an AI accident might occur and the potential damage it could cause, it is imperative that AI companies bear a significant responsibility for helping us prepare for such eventualities. The private sector, which stands to benefit enormously from AI advancements, must also contribute to safeguarding society against the risks these technologies pose. One concrete step that AI companies should take is to establish an emergency fund specifically intended for responding to AI-related accidents.

Such a fund would serve as a financial safety net, providing resources to mitigate the effects of AI emergencies. It could be used to support rapid response efforts, fund research into preventative measures, and assist individuals and communities affected by AI-driven disruptions. By contributing to this fund, AI companies would acknowledge their role in creating technologies that, while beneficial, also carry inherent risks. This approach would not only demonstrate corporate responsibility but also help ensure that society is better prepared to respond to AI-related crises.

The establishment of an emergency fund for AI disasters would require a collaborative effort between the private sector and government. Congress could mandate contributions from AI companies based on their revenue or the scale of their AI operations. This would ensure that the financial burden of preparing for AI emergencies is shared equitably and that sufficient resources are available when needed. To safeguard the proper use of the funds, Congress should establish an independent entity tasked with securing contributions and responding to claims for reimbursement.
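To make the mechanics concrete, a revenue-based levy might look something like the sketch below. The rate, exemption threshold, and function name are purely hypothetical illustrations of one way such a mandate could be structured; they are not drawn from any actual proposal.

```python
# A minimal sketch of a revenue-based contribution formula. Every rate and
# threshold below is a hypothetical placeholder, not an actual proposal.

def emergency_fund_contribution(
    annual_ai_revenue: float,
    base_rate: float = 0.01,                   # hypothetical: 1% of AI revenue
    small_firm_threshold: float = 50_000_000,  # hypothetical exemption floor
) -> float:
    """Return a firm's annual contribution to the AI emergency fund."""
    if annual_ai_revenue <= small_firm_threshold:
        return 0.0  # exempt small operators; the burden falls on large firms
    return (annual_ai_revenue - small_firm_threshold) * base_rate

# Example: a firm with $2B in AI revenue would owe $19.5M under these toy numbers.
print(emergency_fund_contribution(2_000_000_000))
```

Scaling contributions to revenue above an exemption floor keeps small developers out of scope while tying large firms' obligations to the footprint of their AI operations.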

In conclusion, the rapid advancement of AI presents both incredible opportunities and significant risks. While we cannot predict exactly how AI will evolve or what specific emergencies it may precipitate, we can take proactive steps to prepare for these eventualities. AI companies, as key stakeholders in the development and deployment of these technologies, must play a central role in this effort. By contributing to an emergency fund for AI disasters, they can help ensure that we are equipped to respond to crises in a legitimate and effective fashion.

AI models are being built. Accidents will come. The question is whether we will be prepared to respond in a legitimate and effective fashion.

Read More

US Capitol with tech background

Greggory DiSalvo/Getty Images

Censorship Should Be Obsolete by Now. Why Isn’t It?

Techies, activists, and academics were in Paris this week to confront the doom scenario of internet shutdowns, developing creative technology and policy solutions to break out of heavily censored environments. The event, SplinterCon, has previously been held around the world, from Brussels to Taiwan. I am on the programme committee and delivered a keynote at the inaugural SplinterCon in Montreal on how internet standards must be better designed for censorship circumvention.

The recently published Freedom on the Net report documents censorship and digital authoritarianism in dozens of countries. For example, Russia has pledged to provide “sovereign AI,” a strategy that will surely extend its network blocks on “a wide array of social media platforms and messaging applications, urging users to adopt government-approved alternatives.” The UK has joined Vietnam, China, and a growing number of states in requiring “age verification,” the use of government-issued identification cards, to access internet services, which the report calls “a crisis for online anonymity.”

The concept of AI hovering among the public.

Panic-driven legislation—from airline safety to AI bans—often backfires, and evidence must guide policy.

Getty Images, J Studios

Beware of Panic Policies

"As far as human nature is concerned, with panic comes irrationality." This simple statement by Professor Steve Calandrillo and Nolan Anderson has profound implications for public policy. When panic is highest, and demand for reactive policy is greatest, that's exactly when we need our lawmakers to resist the temptation to move fast and ban things. Yet, many state legislators are ignoring this advice amid public outcries about the allegedly widespread and destructive uses of AI. Thankfully, Calandrillo and Anderson have identified a few examples of what I'll call "panic policies" that make clear that proposals forged by frenzy tend not to reflect good public policy.

Let's turn first to a proposal from November 2001 by the American Academy of Pediatrics (AAP). For obvious reasons, airline safety was subject to immense public scrutiny at the time. The AAP responded with what may sound like a good idea: require all infants to have their own seat and, by extension, their own seat belt on planes. The existing policy permitted parents to simply hold a child under two on their lap. Essentially, babies flew for free.

The Federal Aviation Administration (FAA) permitted this based on a fairly simple analysis: the risks to young children without seat belts on planes were far smaller than the risks they would face if they instead traveled by car. Put differently, if parents faced higher prices to fly, they would turn to the road as the best way to get from A to B. As we all know (perhaps with the exception of the AAP at the time), air travel is vastly safer than travel by car. Nevertheless, the AAP forged ahead with its proposal. In fact, it did so despite admitting that it was unsure whether the higher mortality risk for children under two in plane crashes was due to the lack of a seat belt or simply to the fact that infants are fragile.
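The FAA's logic is, at bottom, an expected-risk comparison, and a toy model makes it concrete. The per-mile risk figures below are invented for illustration only, not FAA or NTSB statistics, and the function is a hypothetical sketch of the substitution effect rather than the agency's actual analysis.

```python
# A toy model of the substitution analysis described above. The per-mile
# fatality rates are made-up illustrations, not actual FAA or NTSB data.

FLIGHT_RISK_PER_MILE = 1e-8  # hypothetical risk for a lap infant flying
CAR_RISK_PER_MILE = 1e-6     # hypothetical risk for the same trip by car

def expected_infant_fatalities(trips: int, miles: float, diversion_rate: float) -> float:
    """Expected fatalities when `diversion_rate` of families, priced out of
    flying by a mandatory-seat rule, make the trip by car instead."""
    flying = trips * (1 - diversion_rate) * miles * FLIGHT_RISK_PER_MILE
    driving = trips * diversion_rate * miles * CAR_RISK_PER_MILE
    return flying + driving

# With these toy numbers, even a modest shift to driving swamps any safety
# gain from requiring seats on planes:
print(expected_infant_fatalities(trips=1_000_000, miles=500, diversion_rate=0.0))  # 5.0
print(expected_infant_fatalities(trips=1_000_000, miles=500, diversion_rate=0.2))  # 104.0
```

Under any parameters in which driving is far riskier per mile than flying, the diversion term dominates the total, which is precisely the trade-off the FAA weighed.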

Generative AI and surgical robotics are advancing toward autonomous surgery, raising new questions about safety, regulation, payment models, and trust.

Getty Images, Luis Alvarez

Will Generative AI Robots Replace Surgeons?

In medicine’s history, the best technologies didn’t just improve clinical practice. They turned traditional medicine on its head.

For example, advances like CT, MRI, and ultrasound machines did more than merely improve diagnostic accuracy. They diminished the importance of the physical exam and the physicians who excelled at it.

Hand holding smart phone with US flag case

Credit: Katareena Roska

Digital Footprints Are Affecting This New Generation of Politicians, but Do Voters Care?

WASHINGTON — In 2022, Jay Jones sent text messages to a former colleague about a senior state Republican in Virginia getting “two bullets to the head.”

When the texts were shared by his colleague a month before the Virginia general election, Jones, the Democratic candidate for attorney general, was slammed for the violent rhetoric. Winsome Earle-Sears, the Republican candidate for governor, called for Jones to withdraw from the race.
