
Is AI too big to fail?

The world came to a near standstill last month as everyone awaited Nvidia’s financial outlook.

Cheng Xin/Getty Images

This is the first entry in “Big Tech and Democracy,” a series designed to assist American citizens in understanding the impact technology is having — and will have — on our democracy. The series will explore the benefits and risks that lie ahead and offer possible solutions.

In the span of two or so years, OpenAI, Nvidia and a handful of other companies essential to the development of artificial intelligence have become economic behemoths. Their valuations and stock prices have soared. Their products have become essential to Fortune 500 companies. Their business plans are the focus of the national security industry. Their collapse would be, well, unacceptable. They are too big to fail.

The good news is we’ve been in similar situations before. The bad news is we’ve yet to really learn our lesson.


In the mid-1970s, a bank known for its conservative growth strategy decided to more aggressively pursue profits. The strategy worked. In just a few years the bank became the largest commercial and industrial lender in the nation. The impressive growth caught the attention of others — competitors looked on with envy, shareholders with appreciation and analysts with bullish optimism. As the balance sheet grew, however, so did the broader economic importance of the bank. It became too big to fail.

Regulators missed the signs of systemic risk. A kick of the bank’s tires gave no reason to panic. But a look under the hood — specifically, at the bank’s loan-to-assets ratio and average return on loans — would have revealed a simple truth: The bank had been far too risky. The tactics that fueled its “go-go” years left the bank overexposed to sectors suffering through tough economic times. Rumors soon spread that the bank was in a financially sketchy spot. It was the Titanic, without the band, to paraphrase an employee.

When the inevitable run on the bank started, regulators had no choice but to spend billions on keeping the bank afloat — saving it from sinking and bringing the rest of the economy with it. Of course, a similar situation played out during the Great Recession — risky behavior by a few bad companies imposed bailout payments on the rest of us.

AI labs are similarly taking gambles that have good odds of making many of us losers. As major labs rush to release their latest models, they are not stopping to ask if we have the social safety nets ready if things backfire. Nor are they meaningfully contributing to building those necessary safeguards. Instead, we find ourselves in a highly volatile situation. Our stock market seemingly pivots on the earnings of just a few companies — the world came to a near standstill last month as everyone awaited Nvidia’s financial outlook. Our leading businesses and essential government services are quick to adopt the latest AI models despite real uncertainty as to whether they will operate as intended. If any of these labs took a financial tumble, or any of the models proved significantly flawed, the public would likely again be asked to find a way to save the risk takers.

This outcome may be likely, but it’s not inevitable. The Dodd-Frank Act, passed in response to the Great Recession and intended to prevent another too-big-to-fail situation in the financial sector, has been roundly criticized for its inadequacy. We should learn from its faults in thinking through how to make sure AI goliaths don’t crush all of us Davids. Some sample steps include mandating and enforcing more rigorous testing of AI models before deployment. It would also behoove us to prevent excessive government reliance on any one model — for instance, by requiring public service providers to maintain analog processes in the event of emergencies. Finally, we can reduce the economic sway of a few labs by fostering more competition in the space.

Too-big-to-fail scenarios have played out on too many occasions. There’s no excuse for allowing AI labs to become so large and so essential that we collectively end up paying for their mistakes.

Frazier is an assistant professor at the Crump College of Law at St. Thomas University and a Tarbell fellow.

They’re calling her an influencer. She’s calling it campaign strategy.

Deja Foxx poses for a portrait at her home in Tucson, Arizona, on June 21, 2025. 

Courtney Pedroza for The 19th

TUCSON, ARIZ. — On a Saturday afternoon, Deja Foxx is staging a TikTok Live in her living room. A phone tripod is set up in front of her kitchen table. The frame is centered on a slouchy sofa against an adobe wall, where a chile ristra hangs on one side.

“All right, everybody, take your seats,” she tells the mix of young volunteers, family members and campaign staff who are gathered to help her. “You have some really great mail to open, and I’m so excited because usually it’s just me and my mom that do this.”

Ten Things the Future Will Say We Got Wrong About AI

Getty Images, Dragos Condrea

As we look back on 1776 after this July 4th holiday, it's a good opportunity to skip forward and predict what our forebears will think of us. When our descendants assess our policies, ideas, and culture, what will they see? What errors, born of myopia, inertia, or misplaced priorities, will they lay at our feet regarding today's revolutionary technology—artificial intelligence? From their vantage point, with AI's potential and perils laid bare, their evaluation will likely determine that we got at least ten things wrong.

One glaring failure will be our delay in embracing obviously superior AI-driven technologies like autonomous vehicles (AVs). Despite the clear safety benefits—tens of thousands of lives saved annually, reduced congestion, enhanced accessibility—we allowed a patchwork of outdated regulations, public apprehension, and corporate squabbling to keep these life-saving machines largely off our roads. The future will see our hesitation as a moral and economic misstep, favoring human error over demonstrated algorithmic superiority.

I Fought To Keep VOA Independent. Now It’s Gone.

A Voice of America sign is displayed outside of their headquarters at the Wilbur J. Cohen Federal Building on June 17, 2025 in Washington, DC.

(Photo by Kevin Carter/Getty Images)

The Trump administration has accomplished something that Hitler, Stalin, Mao, and other dictators desired. It destroyed the Voice of America.

Until mid-March, VOA had been on the air continuously for 83 years. Starting in 1942 with shortwave broadcasts in German to counter Nazi propaganda, America’s external voice had expanded to nearly 50 languages, with a weekly combined audience of more than 350 million people worldwide watching on TV, listening on radio, or viewing its content online or through social media apps.

Just the Facts: Digital Services Tax
Photo by Marvin Meyer on Unsplash

President Donald Trump said Friday, in a Truth Social post, that he has ended trade talks with Canada and will soon announce a new tariff rate for that country.

The decision to end the months-long negotiations came after Canada announced a digital services tax (DST) that Trump called “a direct and blatant attack on our Country.”
