We may face another 'too big to fail' scenario as AI labs go unchecked

NVIDIA headquarters. Our stock market pivots on the performance of a handful of AI-focused companies like Nvidia. (hapabapa/Getty Images)

Frazier is an assistant professor at the Crump College of Law at St. Thomas University and a Tarbell fellow.

In the span of two or so years, OpenAI, Nvidia and a handful of other companies essential to the development of artificial intelligence have become economic behemoths. Their valuations and stock prices have soared. Their products have become essential to Fortune 500 companies. Their business plans are the focus of the national security industry. Their collapse would be, well, unacceptable. They are too big to fail.

The good news is we’ve been in similar situations before. The bad news is we’ve yet to really learn our lesson.


In the mid-1970s, Continental Illinois, a bank known for its conservative growth strategy, decided to pursue profits more aggressively. The strategy worked. In just a few years the bank became the largest commercial and industrial lender in the nation. The impressive growth caught the attention of others: competitors looked on with envy, shareholders with appreciation and analysts with bullish optimism. As the balance sheet grew, however, so did the broader economic importance of the bank. It became too big to fail.

Regulators missed the signs of systemic risk. A kick of the bank's tires gave no reason to panic. But a look under the hood, specifically at the bank's loan-to-assets ratio and average return on loans, would have revealed a simple truth: The bank had taken on far too much risk. The tactics that fueled its go-go years left it overexposed to sectors suffering through tough economic times. Rumors soon spread that the bank was in a financially sketchy spot. It was the Titanic, without the band, to paraphrase an employee.

When the inevitable run on the bank started, regulators had no choice but to spend billions to keep it afloat, saving it from sinking and dragging the rest of the economy down with it. Of course, a similar situation played out during the Great Recession: risky behavior by a few bad companies imposed the cost of bailouts on the rest of us.

AI labs are similarly taking gambles that have good odds of making many of us losers. As major labs rush to release their latest models, they are not stopping to ask if we have the social safety nets ready if things backfire. Nor are they meaningfully contributing to building those necessary safeguards.

Instead, we find ourselves in a highly volatile situation. Our stock market seemingly pivots on the earnings of just a few companies; the world came to a near standstill last month as everyone awaited Nvidia's financial outlook. Our leading businesses and essential government services are quick to adopt the latest AI models despite real uncertainty about whether they will operate as intended. If any of these labs took a financial tumble, or if any of their models proved significantly flawed, the public would likely again be asked to find a way to save the risk takers.

This outcome may be likely, but it's not inevitable. The Dodd-Frank Act, passed in response to the Great Recession and intended to prevent another too-big-to-fail situation in the financial sector, has been roundly criticized as inadequate. We should learn from its faults in thinking through how to make sure AI goliaths don't crush all of us Davids.

Some sample steps include mandating and enforcing more rigorous testing of AI models before deployment. It would also behoove us to prevent excessive government reliance on any one model; this could be accomplished by requiring public service providers to maintain analog processes in the event of emergencies. Finally, we can reduce the economic sway of a few labs by fostering more competition in the space.

Too-big-to-fail scenarios have played out on too many occasions. There's no excuse for allowing AI labs to become so large and so essential that we collectively end up paying for their mistakes.

