We need to address the ‘pacing problem’ before AI gets out of control

Opinion

If we can use our regulatory imaginations, writes Frazier, "then there’s a chance that future surges in technology can be directed to align with the public interest."

Surasak Suwanmake/Getty Images

Frazier is an assistant professor at the Crump College of Law at St. Thomas University. He previously clerked for the Montana Supreme Court.

The "pacing problem" is the most worrying phenomenon you've never heard of but already understand. In short, it refers to technological advances outpacing laws and regulations. It's as easy to observe as a streaker at a football game.

Here's a quick summary: It took 30 years from the introduction of electricity for 10 percent of households to be able to turn on the lights; 25 years for the same percentage of Americans to be able to pick up the phone; about five years for the internet to hit that mark; and, seemingly, about five weeks for ChatGPT to spread around the world.

Ask any high schooler and they’ll tell you that a longer deadline will lead to a better grade. Well, what’s true of juniors and seniors is true of senators and House members – they can develop better policies when they have more time to respond to an emerging technology. The pacing problem, though, robs our elected officials of the time to ponder how best to regulate something like artificial intelligence: As the rate of adoption increases, the window for action shrinks.


A little more than a year out from the release of ChatGPT, it’s already clear that generative AI tools have become entrenched in society. Lawyers are attempting to use them. Students are hoping to rely on them. And, of course, businesses are successfully exploiting them to increase their bottom lines. As a result, any attempt by Congress to regulate AI will be greeted by an ever expanding and well-paid army of advocates who want to make sure AI is regulated only in ways that don’t inhibit their clients’ use of the novel technology.

ChatGPT is the beginning of the Age of AI. Another wave of transformational technologies is inevitable. What’s uncertain is whether we will recognize the need for some regulatory imagination. If we stick with the status quo – governance by a Congress populated by expert fundraisers rather than expert policymakers – then the pacing problem will only get worse. If we instead opt to use our regulatory imaginations, then there’s a chance that future surges in technology can be directed to align with the public interest.

Regulatory imagination is like a pink pony – in theory, easy to spot; in reality, difficult to create. The first step is to encourage our regulators to dream big. One small step toward that goal: Create an innovation team within each agency. These teams would have a mandate to study how the sausage is made and to analyze and share ways to make that process faster, smarter and more responsive to changes in technology.

The second step would be to embrace experimentation. Congress currently operates like someone trying to break the home run record – they only take big swings and they commonly miss. A wiser strategy would be to bunt and see if we can get any runners in scoring position; in other words, Congress should lean into testing novel policy ideas by passing laws with sunset clauses. Laws with expiration dates would increase Congress’ willingness to test new ideas and monitor their effectiveness.

Third, and finally, Congress should work more closely with the leading developers of emerging technologies. Case in point: Americans would benefit from AI labs like OpenAI and Google being more transparent with Congress about what technology they plan to release and when. Surprise announcements may please stakeholders, but companies should instead aim to minimize their odds of disrupting society. This sort of information sharing, even if not made public, could go a long way toward closing the pacing gap.

Technological “progress” does not always move society forward. We’ve got to address the pacing problem if advances in technology are going to serve the common good.


Read More

Who’s Responsible When AI Causes Harm?: Unpacking the Federal AI Liability Framework Debate

This nonpartisan policy brief, written by an ACE fellow, is republished by The Fulcrum as part of our partnership with the Alliance for Civic Engagement and our NextGen initiative — elevating student voices, strengthening civic education, and helping readers better understand democracy and public policy.

Key takeaways

  • The U.S. has no national AI liability law. Instead, a patchwork of state laws has emerged, meaning legal protections depend on where an individual lives.
  • It’s often unclear who is legally responsible when AI causes harm. This gap leaves many people with no clear path to seek help.
  • In March 2026, the White House and Congress introduced major proposals to establish a federal standard, but there is significant disagreement about whether that standard should prioritize protecting innovation or protecting people harmed by AI systems.

Background: A Patchwork of State Laws

Without a national AI law, states have been filling in the gaps on their own. The result is an uneven landscape where a person’s legal protections depend entirely on which state they live in.


Explore how China is overtaking the U.S. in the global innovation race, from electric vehicles to advanced research, and why America’s fragmented science policy, talent loss, and weak industrial strategy threaten its technological leadership.

Getty Images, Willie B. Thomas

America’s Greatest Geopolitical Blind Spot

The global hierarchy of innovation is undergoing a structural shift that Washington is dangerously slow to acknowledge. For decades, the prevailing narrative in the United States was that China was merely the "world’s factory"—a nation capable of mass-producing Western designs but inherently lacking the creative spark to invent its own. This assumption has been shattered. Today, Beijing is no longer playing catch-up; in sectors ranging from electric vehicles and next-generation nuclear power to hypersonic missiles, China is setting the pace.

The central challenge is that China has mastered the entire innovation ecosystem, while the United States has allowed its own to fracture. Innovation is not just about a "eureka" moment in a laboratory; it is a relay race that begins with basic scientific research, moves through the training of specialized talent, and ends with the large-scale commercialization of "hard tech." China is currently winning every leg of that race.


A bold critique of modern democracy and rising authoritarian ideas, exploring how AI-powered swarm digital democracy could redefine participation and governance.

Getty Images, Andriy Onufriyenko

The Only Radical Move Forward: Swarm Digital Democracy

We are increasingly told that democracy has failed and that its time has passed. The evidence proffered is everywhere: gridlock, captured institutions, performative elections, a public that senses, correctly, that its voice rarely translates into real power. Into this vacuum step dystopic movements like the Dark Enlightenment and harder strains of right-wing populism, offering a stark diagnosis and an even starker cure: Abandon the illusion of popular rule and return to forms of authority that are decisive, hierarchical, and unapologetically exclusionary. They present themselves as bold, clear-eyed, rambunctious, alive, and willing to act where others hesitate. And all to save the world from itself.

But this framing depends on a sleight of hand: It assumes that what we have been living under is, in fact, democracy, and that its failures are the failures of democracy itself. That is the first mistake.


Elon Musk’s xAI company is challenging AI regulations in Colorado after losing in California, arguing that limits on artificial intelligence violate free speech. As Connecticut enforces its own AI law, this case could shape the future of AI regulation, corporate accountability, and constitutional rights in the United States.

Getty Images, Alexander Sikov

xAI Pushes Free Speech Theory Into New AI Lawsuits

Elon Musk's AI company, xAI, is on a legal road trip. After losing in California, it filed suit in Colorado asking a court to declare the state's artificial intelligence regulations unconstitutional. The argument is essentially the same one that already failed. Meet the new boss. Same as the old boss.

For Connecticut residents, this is not just the next state in the alphabet that has passed AI legislation. Connecticut was one of the first states in the nation to adopt an AI law, requiring companies to disclose when AI is being used in critical decisions like employment, housing, credit, or healthcare. That law is already drawing scrutiny from the technology industry. What xAI tried to do in California and now in Colorado is a preview of what we may face in Connecticut.
