We need to address the ‘pacing problem’ before AI gets out of control


If we can use our regulatory imaginations, writes Frazier, "then there’s a chance that future surges in technology can be directed to align with the public interest."


Frazier is an assistant professor at the Crump College of Law at St. Thomas University. He previously clerked for the Montana Supreme Court.

The "pacing problem" is the most worrying phenomenon you've never heard of but already understand. In short, it refers to technological advances outpacing laws and regulations. It's as easy to observe as a streaker at a football game.

Here's a quick summary: It took 30 years from the introduction of electricity for 10 percent of households to be able to turn on the lights; 25 years for the same percentage of Americans to be able to pick up the phone; about five years for the internet to hit that mark; and, seemingly, about five weeks for ChatGPT to spread around the world.

Ask any high schooler and they’ll tell you that a longer deadline will lead to a better grade. Well, what’s true of juniors and seniors is true of senators and House members – they can develop better policies when they have more time to respond to an emerging technology. The pacing problem, though, robs our elected officials of the time to ponder how best to regulate something like artificial intelligence: As the rate of adoption increases, the window for action shrinks.


A little more than a year out from the release of ChatGPT, it’s already clear that generative AI tools have become entrenched in society. Lawyers are attempting to use them. Students are hoping to rely on them. And, of course, businesses are successfully exploiting them to increase their bottom lines. As a result, any attempt by Congress to regulate AI will be greeted by an ever-expanding and well-paid army of advocates who want to make sure AI is regulated only in ways that don’t inhibit their clients’ use of the novel technology.

ChatGPT is the beginning of the Age of AI. Another wave of transformational technologies is inevitable. What’s uncertain is whether we will recognize the need for some regulatory imagination. If we stick with the status quo – governance by a Congress operated by expert fundraisers more so than expert policymakers – then the pacing problem will only get worse. If we instead opt to use our regulatory imaginations, then there’s a chance that future surges in technology can be directed to align with the public interest.

Regulatory imagination is like a pink pony – easy to spot in theory, difficult to create in reality. The first step is to encourage our regulators to dream big. One small step toward that goal: Create an innovation team within each agency. These teams would have a mandate to study how the sausage is made, then analyze and share ways to make that process faster, smarter and more responsive to changes in technology.

The second step would be to embrace experimentation. Congress currently operates like someone trying to break the home run record – they only take big swings and they commonly miss. A wiser strategy would be to bunt and see if we can get any runners in scoring position; in other words, Congress should lean into testing novel policy ideas by passing laws with sunset clauses. Laws with expiration dates would increase Congress’ willingness to test new ideas and monitor their effectiveness.

Third, and finally, Congress should work more closely with the leading developers of emerging technologies. Case in point: Americans would benefit from AI labs like OpenAI and Google being more transparent with Congress about what technology they plan to release and when. Surprise announcements may please stakeholders, but companies should instead aim to minimize their odds of disrupting society. This sort of information sharing, even if not made public, could go a long way toward closing the gap at the heart of the pacing problem.

Technological “progress” does not always move society forward. We’ve got to address the pacing problem if advances in technology are going to serve the common good.

Read More


When Rules Can Be Code, They Should Be!

Ninety years ago this month, the Federal Register Act was signed into law in a bid to shine a light on the rules driving President Franklin Roosevelt’s New Deal—using the best tools of the time to make government more transparent and accountable. But what began as a bold step toward clarity has since collapsed under its own weight: over 100,000 pages, a million rules, and a public lost in a regulatory haystack. Today, the Trump administration’s sweeping push to cut red tape—including using AI to hunt obsolete rules—raises a deeper challenge: how do we prevent bureaucracy from rebuilding itself?

What’s needed is a new approach: rewriting the rule book itself as machine-executable code that can be analyzed, implemented, or streamlined at scale. Businesses could simply download and execute the latest regulations on their systems, with no need for costly legal analysis and compliance work. Individuals could use apps or online tools to quickly figure out how rules affect them.
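
To make that idea concrete, here is a minimal sketch, in Python, of what a "machine-executable" rule might look like: one hypothetical disclosure requirement encoded as data, plus a small evaluator a business could run against its own facts. The rule name, thresholds, and fields are invented for illustration and do not reflect any real agency's schema.

# A toy sketch of "rules as code": a hypothetical disclosure rule expressed
# as structured data, plus a small evaluator that applies it automatically.
# The rule, thresholds, and field names are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Business:
    annual_revenue: float
    employee_count: int

DISCLOSURE_RULE = {
    "min_revenue": 5_000_000,   # hypothetical: filing triggered at this revenue
    "min_employees": 50,        # hypothetical: or at this headcount
}

def filing_required(biz: Business, rule: dict = DISCLOSURE_RULE) -> bool:
    """Evaluate the encoded rule against a business's facts."""
    return (biz.annual_revenue >= rule["min_revenue"]
            or biz.employee_count >= rule["min_employees"])

print(filing_required(Business(annual_revenue=7_500_000, employee_count=12)))  # True
print(filing_required(Business(annual_revenue=900_000, employee_count=8)))     # False

Under a scheme like this, updating a threshold would mean publishing new data rather than new prose, which is the kind of at-scale analysis and streamlining the piece envisions.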


Nvidia and AMD’s China Chip Deal Sets Dangerous Precedent in U.S. Industrial Policy

This morning’s announcement that Nvidia and AMD will resume selling AI chips to China on the condition that they surrender 15% of their revenue from those sales to the U.S. government marks a jarring inflection point in American industrial policy.

This is not just a transactional workaround for a particular situation. It is a major philosophical shift in government policy.


Generative AI Can Save Lives: Two Diverging Paths In Medicine

Generative AI is advancing at breakneck speed. Already, it’s outperforming doctors on national medical exams and in making difficult diagnoses. Microsoft recently reported that its latest AI system correctly diagnosed complex medical cases 85.5% of the time, compared to just 20% for physicians. OpenAI’s newly released GPT-5 model goes further still, delivering its most accurate and responsive performance yet on health-related queries.

As GenAI tools double in power annually, two distinct approaches are emerging for how they might help patients.
