We may face another 'too big to fail' scenario as AI labs go unchecked

NVIDIA headquarters. Our stock market pivots on the performance of a handful of AI-focused companies like Nvidia. (hapabapa/Getty Images)

Frazier is an assistant professor at the Crump College of Law at St. Thomas University and a Tarbell fellow.

In the span of two or so years, OpenAI, Nvidia and a handful of other companies essential to the development of artificial intelligence have become economic behemoths. Their valuations and stock prices have soared. Their products have become essential to Fortune 500 companies. Their business plans are the focus of the national security industry. Their collapse would be, well, unacceptable. They are too big to fail.

The good news is we’ve been in similar situations before. The bad news is we’ve yet to really learn our lesson.


In the mid-1970s, a bank known for its conservative growth strategy decided to more aggressively pursue profits. The strategy worked. In just a few years the bank became the largest commercial and industrial lender in the nation. The impressive growth caught the attention of others — competitors looked on with envy, shareholders with appreciation and analysts with bullish optimism. As the balance sheet grew, however, so did the broader economic importance of the bank. It became too big to fail.

Regulators missed the signs of systemic risk. A kick of the bank’s tires gave no reason to panic. But a look under the hood — specifically, at the bank’s loan-to-assets ratio and its average return on loans — would have revealed a simple truth: The bank had taken on far too much risk. The tactics that fueled its go-go years left it overexposed to sectors suffering through tough economic times. Rumors soon spread that the bank was in a financially sketchy spot. It was the Titanic, without the band, to paraphrase one employee.

When the inevitable run on the bank started, regulators had no choice but to spend billions to keep it afloat — to stop it from sinking and taking the rest of the economy down with it. Of course, a similar story played out during the Great Recession, when risky behavior by a handful of companies left the rest of us footing the bill for bailouts.

AI labs are similarly taking gambles that have good odds of making losers of many of us. As major labs rush to release their latest models, they are not stopping to ask whether the social safety nets are ready if things backfire. Nor are they meaningfully contributing to building those safeguards.

Instead, we find ourselves in a highly volatile situation. Our stock market seemingly pivots on the earnings of just a few companies — the world came to a near standstill last month as everyone awaited Nvidia’s financial outlook. Our leading businesses and essential government services are quick to adopt the latest AI models despite real uncertainty about whether they will operate as intended. If any of these labs took a financial tumble, or any of their models proved significantly flawed, the public would likely again be asked to find a way to save the risk-takers.

This outcome may be likely, but it’s not inevitable. The Dodd-Frank Act, passed in response to the Great Recession and intended to prevent another too-big-to-fail situation in the financial sector, has been roundly criticized for its inadequacy. We should learn from its faults as we think through how to make sure AI goliaths don’t crush all of us Davids.

Some sample steps include mandating and enforcing more rigorous testing of AI models before deployment. It would also behoove us to prevent the government from relying too heavily on any one model — for instance, by requiring public service providers to maintain analog processes they can fall back on in an emergency. Finally, we can reduce the economic sway of a few labs by fostering more competition in the space.

Too-big-to-fail scenarios have played out on too many occasions. There’s no excuse for allowing AI labs to become so large and so essential that we collectively end up paying for their mistakes.
