Is AI too big to fail?


This is the first entry in “Big Tech and Democracy,” a series designed to assist American citizens in understanding the impact technology is having — and will have — on our democracy. The series will explore the benefits and risks that lie ahead and offer possible solutions.

In the span of two or so years, OpenAI, Nvidia and a handful of other companies essential to the development of artificial intelligence have become economic behemoths. Their valuations and stock prices have soared. Their products have become essential to Fortune 500 companies. Their business plans are the focus of the national security industry. Their collapse would be, well, unacceptable. They are too big to fail.

The good news is we’ve been in similar situations before. The bad news is we’ve yet to really learn our lesson.


In the mid-1970s, a bank known for its conservative growth strategy decided to more aggressively pursue profits. The strategy worked. In just a few years the bank became the largest commercial and industrial lender in the nation. The impressive growth caught the attention of others — competitors looked on with envy, shareholders with appreciation and analysts with bullish optimism. As the balance sheet grew, however, so did the broader economic importance of the bank. It became too big to fail.

Regulators missed the signs of systemic risk. A kick of the bank’s tires gave no reason to panic. But a look under the hood — specifically, at the bank’s loan-to-assets ratio and average return on loans — would have revealed a simple truth: The bank was far too risky. The tactics that fueled its “go-go” years left it overexposed to sectors suffering through tough economic times. Rumors soon spread that the bank was in a financially sketchy spot. It was the Titanic, without the band, to paraphrase an employee.

When the inevitable run on the bank started, regulators had no choice but to spend billions on keeping the bank afloat — saving it from sinking and bringing the rest of the economy with it. Of course, a similar situation played out during the Great Recession — risky behavior by a few bad companies imposed bailout payments on the rest of us.

AI labs are similarly taking gambles that have good odds of making many of us losers. As major labs rush to release their latest models, they are not stopping to ask whether we have the social safety nets ready if things backfire. Nor are they meaningfully contributing to building those necessary safeguards. Instead, we find ourselves in a highly volatile situation. Our stock market seemingly pivots on the earnings of just a few companies — the world came to a near standstill last month as everyone awaited Nvidia’s financial outlook. Our leading businesses and essential government services are quick to adopt the latest AI models despite real uncertainty as to whether they will operate as intended. If any of these labs took a financial tumble, or if any of the models proved significantly flawed, the public would likely again be asked to find a way to save the risk takers.

This outcome may be likely, but it’s not inevitable. The Dodd-Frank Act, passed in response to the Great Recession and intended to prevent another Too Big to Fail situation in the financial sector, has been roundly criticized for its inadequacy. We should learn from its faults in thinking through how to make sure AI goliaths don’t crush all of us Davids. Some sample steps include mandating and enforcing more rigorous testing of AI models before deployment. It would also behoove us to prevent excessive government reliance on any one model — this could be accomplished by requiring public service providers to maintain analog processes for use in emergencies. Finally, we can reduce the economic sway of a few labs by fostering more competition in the space.

Too Big to Fail scenarios have happened on too many occasions. There’s no excuse for allowing AI labs to become so large and so essential that we collectively end up paying for their mistakes.

Frazier is an assistant professor at the Crump College of Law at St. Thomas University and a Tarbell fellow.
