We may face another 'too big to fail' scenario as AI labs go unchecked

Our stock market pivots on the performance of a handful of AI-focused companies like Nvidia.

Frazier is an assistant professor at the Crump College of Law at St. Thomas University and a Tarbell fellow.

In the span of two or so years, OpenAI, Nvidia and a handful of other companies essential to the development of artificial intelligence have become economic behemoths. Their valuations and stock prices have soared. Their products have become essential to Fortune 500 companies. Their business plans are the focus of the national security industry. Their collapse would be, well, unacceptable. They are too big to fail.

The good news is we’ve been in similar situations before. The bad news is we’ve yet to really learn our lesson.


In the mid-1970s, a bank known for its conservative growth strategy decided to more aggressively pursue profits. The strategy worked. In just a few years the bank became the largest commercial and industrial lender in the nation. The impressive growth caught the attention of others — competitors looked on with envy, shareholders with appreciation and analysts with bullish optimism. As the balance sheet grew, however, so did the broader economic importance of the bank. It became too big to fail.

Regulators missed the signs of systemic risk. A kick of the bank’s tires gave no reason to panic. But a look under the hood — specifically, at the bank’s loan-to-assets ratio and average return on loans — would have revealed a simple truth: the bank had taken on far too much risk. The tactics that fueled its go-go years left it overexposed to sectors suffering through tough economic times. Rumors soon spread that the bank was in a financially sketchy spot. It was the Titanic, without the band, to paraphrase one employee.

When the inevitable run on the bank started, regulators had no choice but to spend billions to keep it afloat — preventing it from sinking and dragging the rest of the economy down with it. Of course, a similar situation played out during the Great Recession — risky behavior by a few bad companies imposed bailout payments on the rest of us.

AI labs are similarly taking gambles that have good odds of making many of us losers. As major labs rush to release their latest models, they are not stopping to ask whether we have the social safety nets in place if things backfire. Nor are they meaningfully contributing to building those necessary safeguards.

Instead, we find ourselves in a highly volatile situation. Our stock market seemingly pivots on the earnings of just a few companies — the world came to a near standstill last month as everyone awaited Nvidia’s financial outlook. Our leading businesses and essential government services are quick to adopt the latest AI models despite real uncertainty as to whether they will operate as intended. If any of these labs took a financial tumble, or any of the models proved significantly flawed, the public would likely again be asked to find a way to save the risk-takers.

This outcome may be likely, but it’s not inevitable. The Dodd-Frank Act, passed in response to the Great Recession and intended to prevent another too-big-to-fail scenario in the financial sector, has been roundly criticized for its inadequacy. We should learn from its faults in thinking through how to make sure AI goliaths don’t crush all of us Davids.

Some sample steps include mandating and enforcing more rigorous testing of AI models before deployment. It would also behoove us to prevent excessive reliance on any one model by the government — this could be accomplished by requiring public service providers to maintain analog processes in the event of emergencies. Finally, we can reduce the economic sway of a few labs by fostering more competition in the space.

Too-big-to-fail scenarios have played out on too many occasions. There’s no excuse for allowing AI labs to become so large and so essential that we collectively end up paying for their mistakes.
