Preparing for an inevitable AI emergency


Frazier is an assistant professor at the Crump College of Law at St. Thomas University. Starting this summer, he will serve as a Tarbell fellow.

Artificial intelligence is advancing at a pace, and in directions, that even the foremost AI experts did not anticipate. Just a few decades ago, AI was largely theoretical, confined to science fiction and academic research. Today, it permeates nearly every aspect of our lives, from the algorithms that curate social media feeds to the autonomous systems that drive cars. This rapid advancement, while promising in many respects, also heralds a new era of uncertainty and potential peril.


The pace at which AI technology is evolving outstrips our ability to predict its trajectory. Breakthroughs occur at a staggering rate, often in areas previously deemed infeasible or far-off. For instance, the development of GPT-3, an AI language model capable of producing human-like text, astonished even seasoned AI researchers with its capabilities and the speed at which it surpassed its predecessors. Such rapid advancements suggest that the future of AI holds both immense potential and significant risks.

One of the most pressing concerns is the increased likelihood of emergencies exacerbated by AI. More sophisticated AI could enable more complex and devastating cyberattacks, as malicious actors use it to breach security systems once considered impenetrable. Advances in AI-driven biotechnology could likewise lead to deadlier bioweapons, posing unprecedented threats to global security. And the rapid automation of jobs could produce widespread unemployment and significant social disruption. The displacement of workers by AI could further entrench economic inequality and trigger unrest as societies struggle to adapt.


The likelihood of an AI emergency, paired with our poor track record of responding to similar emergencies, is cause for concern. The Covid-19 pandemic starkly exposed the inadequacies of our constitutional order in an emergency, demonstrating how ill-equipped our preparedness and response mechanisms are to handle sudden, large-scale crises. Our fragmented political system, with its layers of bureaucracy and competing jurisdictions, proved unable to respond swiftly and effectively. That deficiency raises serious concerns about our ability to manage future emergencies, particularly those precipitated by AI.

Given the profound uncertainty surrounding when and how an AI accident might occur and the potential damage it could cause, it is imperative that AI companies bear a significant responsibility for helping us prepare for such eventualities. The private sector, which stands to benefit enormously from AI advancements, must also contribute to safeguarding society against the risks these technologies pose. One concrete step that AI companies should take is to establish an emergency fund specifically intended for responding to AI-related accidents.

Such a fund would serve as a financial safety net, providing resources to mitigate the effects of AI emergencies. It could be used to support rapid response efforts, fund research into preventative measures, and assist individuals and communities affected by AI-driven disruptions. By contributing to this fund, AI companies would acknowledge their role in creating technologies that, while beneficial, also carry inherent risks. This approach would not only demonstrate corporate responsibility but also help ensure that society is better prepared to respond to AI-related crises.

The establishment of an emergency fund for AI disasters would require a collaborative effort between the private sector and government. Congress could mandate contributions from AI companies based on their revenue or the scale of their AI operations. This would ensure that the financial burden of preparing for AI emergencies is shared equitably and that sufficient resources are available when needed. To safeguard the proper use of the funds, Congress should establish an independent entity tasked with securing contributions and responding to claims for reimbursement.

In conclusion, the rapid advancement of AI presents both incredible opportunities and significant risks. While we cannot predict exactly how AI will evolve or what specific emergencies it may precipitate, we can take proactive steps to prepare for these eventualities. AI companies, as key stakeholders in the development and deployment of these technologies, must play a central role in that effort. By contributing to an emergency fund for AI disasters, they can help ensure that society is equipped to respond when those risks materialize.

AI models are being built. Accidents will come. The question is whether we will be prepared to respond in a legitimate and effective fashion.
