We need to address the ‘pacing problem’ before AI gets out of control

Opinion


If we can use our regulatory imaginations, writes Frazier, "then there’s a chance that future surges in technology can be directed to align with the public interest."

Surasak Suwanmake/Getty Images

Frazier is an assistant professor at the Crump College of Law at St. Thomas University. He previously clerked for the Montana Supreme Court.

The "pacing problem" is the most worrying phenomenon you've never heard of but already understand. In short, it refers to technological advances outpacing laws and regulations. It's as easy to observe as a streaker at a football game.

Here's a quick summary: It took 30 years from the introduction of electricity for 10 percent of households to be able to turn on the lights; 25 years for the same percentage of Americans to be able to pick up the phone; about five years for the internet to hit that mark; and, seemingly, about five weeks for ChatGPT to spread around the world.

Ask any high schooler and they’ll tell you that a longer deadline will lead to a better grade. Well, what’s true of juniors and seniors is true of senators and House members – they can develop better policies when they have more time to respond to an emerging technology. The pacing problem, though, robs our elected officials of the time to ponder how best to regulate something like artificial intelligence: As the rate of adoption increases, the window for action shrinks.


A little more than a year out from the release of ChatGPT, it’s already clear that generative AI tools have become entrenched in society. Lawyers are attempting to use them. Students are hoping to rely on them. And, of course, businesses are successfully exploiting them to increase their bottom lines. As a result, any attempt by Congress to regulate AI will be greeted by an ever-expanding and well-paid army of advocates who want to make sure AI is only regulated in a way that doesn’t inhibit their clients’ use of the novel technology.

ChatGPT is the beginning of the Age of AI. Another wave of transformational technologies is inevitable. What’s uncertain is whether we will recognize the need for some regulatory imagination. If we stick with the status quo – governance by a Congress operated by expert fundraisers more so than expert policymakers – then the pacing problem will only get worse. If we instead opt to use our regulatory imaginations, then there’s a chance that future surges in technology can be directed to align with the public interest.

Regulatory imagination is like a pink pony – theoretically, easy to spot; in reality, difficult to create. The first step is to encourage our regulators to dream big. One small step toward that goal: Create an innovation team within each agency. These teams would have a mandate to study how the sausage is made and analyze and share ways to make that process faster, smarter and more responsive to changes in technology.

The second step would be to embrace experimentation. Congress currently operates like someone trying to break the home run record – they only take big swings and they commonly miss. A wiser strategy would be to bunt and see if we can get any runners in scoring position; in other words, Congress should lean into testing novel policy ideas by passing laws with sunset clauses. Laws with expiration dates would increase Congress’ willingness to test new ideas and monitor their effectiveness.

Third, and finally, Congress should work more closely with the leading developers of emerging technologies. Case in point: Americans would benefit from AI labs like OpenAI and Google being more transparent with Congress about what technology they plan to release and when. Surprise announcements may please stakeholders, but companies should instead aim to minimize their odds of disrupting society. This sort of information sharing, even if not made public, could go a long way toward addressing the pacing problem.

Technological “progress” does not always move society forward. We’ve got to address the pacing problem if advances in technology are going to serve the common good.

Read More


Panic-driven legislation—from airline safety to AI bans—often backfires, and evidence must guide policy.

Getty Images, J Studios

Beware of Panic Policies

"As far as human nature is concerned, with panic comes irrationality." This simple statement by Professor Steve Calandrillo and Nolan Anderson has profound implications for public policy. When panic is highest, and demand for reactive policy is greatest, that's exactly when we need our lawmakers to resist the temptation to move fast and ban things. Yet, many state legislators are ignoring this advice amid public outcries about the allegedly widespread and destructive uses of AI. Thankfully, Calandrillo and Anderson have identified a few examples of what I'll call "panic policies" that make clear that proposals forged by frenzy tend not to reflect good public policy.

Let's turn first to a proposal in November of 2001 from the American Academy of Pediatrics (AAP). For obvious reasons, airline safety was subject to immense public scrutiny at this time. The AAP responded with what may sound like a good idea: require all infants to have their own seat and, by extension, their own seat belt on planes. The existing policy permitted parents to simply put their kid – so long as they were under two – on their lap. Essentially, babies flew for free.

The Federal Aviation Administration (FAA) permitted this based on a pretty simple analysis: the risks to young kids without seat belts on planes were far less than the risks they would face if they were instead traveling by car. Put differently, if parents faced higher prices to travel by air, they'd turn to the road as the best way to get from A to B. As we all know (perhaps with the exception of the AAP at the time), airline travel is tremendously safer than travel by car. Nevertheless, the AAP forged ahead with its proposal. In fact, it did so despite admitting it was unsure whether the higher mortality risk for children under two in plane crashes was due to the lack of a seat belt or to the fact that infants are simply fragile.

Will Generative AI Robots Replace Surgeons?

Generative AI and surgical robotics are advancing toward autonomous surgery, raising new questions about safety, regulation, payment models, and trust.

Getty Images, Luis Alvarez


In medicine’s history, the best technologies didn’t just improve clinical practice. They turned traditional medicine on its head.

For example, advances like CT, MRI, and ultrasound machines did more than merely improve diagnostic accuracy. They diminished the importance of the physical exam and the physicians who excelled at it.

Digital Footprints Are Affecting This New Generation of Politicians, but Do Voters Care?


Credit: Katareena Roska


WASHINGTON — In 2022, Jay Jones sent text messages to a former colleague about a senior state Republican in Virginia getting “two bullets to the head.”

When the texts were shared by his colleague a month before the Virginia general election, Jones, the Democratic candidate for attorney general, was slammed for the violent rhetoric. Winsome Earle-Sears, the Republican candidate for governor, called for Jones to withdraw from the race.

As AI reshapes jobs and politics, America faces a choice: resist automation or embrace innovation. The path to prosperity lies in AI literacy and adaptability.
Getty Images, Douglas Rissing

America’s Unnamed Crisis

I first encountered Leszek Kołakowski, the Polish political thinker, as an undergraduate. It was he who warned of “an all-encompassing crisis” that societies can feel but cannot clearly name. His insight reads less like a relic of the late 1970s and more like a dispatch from our own political moment. We aren’t living through one breakdown, but a cascade of them—political, social, and technological—each amplifying the others. The result is a country where people feel burnt out, anxious, and increasingly unsure of where authority or stability can be found.

This crisis doesn’t have a single architect. Liberals can’t blame only Trump, and conservatives can’t pin everything on "wokeness." What we face is a convergence of powerful forces: decades of institutional drift, fractures in civic life, and technologies that reward emotions over understanding. These pressures compound one another, creating a sense of disorientation that older political labels fail to describe with the same accuracy as before.
