Rainy day fund would help people who lose their jobs thanks to AI

Spectators look at Tesla's Core Technology Optimus humanoid robot at a conference in Shanghai, China, in September.

CFOTO/Future Publishing via Getty Images

Frazier is an assistant professor at the Crump College of Law at St. Thomas University and a Tarbell fellow.

Artificial intelligence will eliminate jobs.

Companies may not need as many workers as AI increases productivity. Others may simply be swapped out for automated systems. Call it what you want — displacement, replacement or elimination — but the outcome is the same: stagnant, struggling communities. The open question is whether we will learn from past mistakes. Will we proactively take steps to support the communities most likely to bear the cost of “innovation”?


We’ve seen what happens when communities experience a sustained loss of meaningful work. Globalization caused more than 70,000 factories to close and sent 5 million manufacturing workers looking for new jobs. Those forced to find work elsewhere rarely found a good substitute. The jobs that remained usually paid less, provided fewer benefits and afforded less security than, say, a union job at a factory.

Economists assumed that those workers would eventually seek greener pastures and relocate to areas with more economic vibrancy. Instead, workers stayed put. It’s hard to leave your pasture when it’s the place you, your family and your community have long called home. This tendency to stay put, though, created a difficult reality. Suddenly, whole communities found their economic well-being on the decline. That’s a recipe for unrest.

The same story played out in my home state, Oregon. New technology and policies rendered the timber industry a dying trade. Residents of towns like Mill City, a timber town through and through, didn’t jointly march to a new area but understandably stayed where their families had established deep roots.

It’s time to stop assuming that people will give up on their communities. Home is much more than just a job. So when AI eliminates jobs, what safeguards will be in place so that people can remain in their communities and find other ways to thrive?

I don’t have a full answer to that question, but there’s at least one safeguard that deserves consideration: a rainy day fund. We don’t know when, where and how rapidly AI will upend a community’s economic well-being. That’s why we need to create a support fund that can help folks who suddenly find themselves with no good options. This would mark an improvement on traditional unemployment insurance because it would be specifically targeted to assist those on the losing end of our AI gamble and would be available to both workers and local governments.

The AI companies responsible for prioritizing their pursuit of artificial general intelligence — AI systems with human-level capabilities — over community stability should front the costs of that fund. Congress can and should tax the companies actively inducing a new wave of displacement.

The fund should be disbursed upon any sizable disruption to a specific industry or sector. Both cities and workers could apply for support to weather economic doldrums and find new ways to thrive. Such support helped us all get through Covid. A similar strategy might help mitigate the worst-case scenarios associated with AI.

The potential downsides of this fund are outweighed by the certain benefits of more resilient communities. Yes, a tax or penalty on AI would hinder the ability of AI companies to develop and deploy AI as quickly as possible. The specific allocation of that revenue to a rainy day fund might also nudge companies away from creating models likely to disrupt various professions. That’s all fine by me. We have survived centuries without AI; there’s no need for the latest and greatest model to arrive as soon as possible, especially given the immense costs of that pace of innovation.

Now is the time for Congress to enact such a proposal. Following the election, we may find Congress to be even more gridlocked and fragmented than before. Elected officials should welcome the chance to tell their constituents about a policy to bolster their economic prospects.

The urgency to address the job displacement caused by AI cannot be overstated. By establishing a rainy day fund, taxing AI companies to support displaced workers and exploring additional policies to maintain community stability, we can mitigate the adverse effects of rapid technological advancement. Congress must prioritize the well-being of communities over the relentless pursuit of AI innovation. Doing so will not only knit a stronger social fabric but also ensure AI develops in line with the public interest.

