
We need to address the ‘pacing problem’ before AI gets out of control


If we can use our regulatory imaginations, writes Frazier, "then there's a chance that future surges in technology can be directed to align with the public interest."

Surasak Suwanmake/Getty Images

Frazier is an assistant professor at the Crump College of Law at St. Thomas University. He previously clerked for the Montana Supreme Court.

The "pacing problem" is the most worrying phenomenon you've never heard of but already understand. In short, it refers to technological advances outpacing laws and regulations. It's as easy to observe as a streaker at a football game.

Here's a quick summary: It took 30 years from the introduction of electricity for 10 percent of households to be able to turn on the lights; 25 years for the same percentage of Americans to be able to pick up the phone; about five years for the internet to hit that mark; and, seemingly, about five weeks for ChatGPT to spread around the world.

Ask any high schooler and they’ll tell you that a longer deadline will lead to a better grade. Well, what’s true of juniors and seniors is true of senators and House members – they can develop better policies when they have more time to respond to an emerging technology. The pacing problem, though, robs our elected officials of the time to ponder how best to regulate something like artificial intelligence: As the rate of adoption increases, the window for action shrinks.

A little more than a year out from the release of ChatGPT, it’s already clear that generative AI tools have become entrenched in society. Lawyers are attempting to use them. Students are hoping to rely on them. And, of course, businesses are successfully exploiting them to increase their bottom lines. As a result, any attempt by Congress to regulate AI will be greeted by an ever-expanding and well-paid army of advocates who want to make sure AI is regulated only in ways that don’t inhibit their clients’ use of the novel technology.

ChatGPT is the beginning of the Age of AI. Another wave of transformational technologies is inevitable. What’s uncertain is whether we will recognize the need for some regulatory imagination. If we stick with the status quo – governance by a Congress operated by expert fundraisers more so than expert policymakers – then the pacing problem will only get worse. If we instead opt to use our regulatory imaginations, then there’s a chance that future surges in technology can be directed to align with the public interest.

Regulatory imagination is like a pink pony – theoretically, easy to spot; in reality, difficult to create. The first step is to encourage our regulators to dream big. One small step toward that goal: Create an innovation team within each agency. These teams would have a mandate to study how the sausage is made and analyze and share ways to make that process faster, smarter and more responsive to changes in technology.

The second step would be to embrace experimentation. Congress currently operates like someone trying to break the home run record – they only take big swings and they commonly miss. A wiser strategy would be to bunt and see if we can get any runners in scoring position; in other words, Congress should lean into testing novel policy ideas by passing laws with sunset clauses. Laws with expiration dates would increase Congress’ willingness to test new ideas and monitor their effectiveness.

Third, and finally, Congress should work more closely with the leading developers of emerging technologies. Case in point: Americans would benefit from AI labs like OpenAI and Google being more transparent with Congress about what technology they plan to release and when. Surprise announcements may please stakeholders, but companies should instead aim to minimize their odds of disrupting society. This sort of information sharing, even if not made public, could go a long way toward easing the pacing problem.

Technological “progress” does not always move society forward. We’ve got to address the pacing problem if advances in technology are going to serve the common good.
