We’ve Been "Preparing" for the Future Since 1991—It Hasn't Worked

Opinion

As AI transforms the labor market, the U.S. faces a familiar challenge: preparing workers for new skills. A look at a 1991 Labor Department report reveals striking parallels.

“Today, the demands on business and workers are different. Firms must meet world-class standards, and so must workers. Employers seek adaptability and the ability to learn and work in teams.”

Sound familiar?


It’s the sort of guidance you’ll find on X, in studies issued by nonprofits, and, as I recently dug up, in a Department of Labor report published in 1991. The familiarity is striking, and it is not accidental: periods of economic transition tend to produce the same anxieties, framed in remarkably similar language.

The Labor Secretary spun up a commission to study “the demands of the workplace and whether young people were meeting those demands.” This was an important question at that moment for a couple of reasons. First, the economic prospects of the next generation of Americans did not look bright. Unemployment among young adults stood at 9.6 percent at the start of 1991 and climbed above 10 percent within a few months.

Second, there was a concern that the economy was transforming faster than educational curricula could adapt. “[M]ore than half of our young people leave school without the knowledge or foundation required to find and hold a good job,” observed the Secretary. Against that backdrop, the gap between schooling and work felt urgent rather than abstract. The Secretary’s Commission on Achieving Necessary Skills (SCANS) was thus formed and mandated to talk with educators, private sector stakeholders, and government officials to identify a path forward.

We find ourselves in a similar place today. As of December 2025, the unemployment rate among young adults stood at 8.2 percent. Pundits, researchers, and politicians fear our educational and vocational infrastructure is ill-suited for the labor market shifts being driven by AI. The technology may be new, but the underlying worry is not: institutions are lagging behind economic reality.

So far, our response seems to have been the same, too.

Then, there was a lot of talking, information gathering, and stakeholder engagement. These are all practical steps, in moderation, and they often feel like progress. “We have talked to [employers] in their stores, shops, government offices, and manufacturing facilities,” explained the Secretary. “Their message to us was the same across the country and in every kind of job: good jobs depend on people who can put knowledge to work.”

That groundwork was followed by something else that will also feel familiar to modern readers: a flood of broad statements about the skills Americans would need to thrive in a new technological era.

Take it from the Secretary:

New workers must be creative and responsible problem solvers and have the skills and attitudes on which employers can build. Traditional jobs are changing and new jobs are created everyday. High paying but unskilled jobs are disappearing. Employers and employees share the belief that all workplaces must ‘work smarter.’

From there, the conversation moved quickly from diagnosis to prescription—though not always with much specificity.

Then, there were a lot of generic policy recommendations.

The Secretary summarized the three takeaways from the SCANS report:

(1) “All American high school students must develop a new set of competencies and foundation skills if they are to enjoy a productive, full, and satisfying life.” This recommendation even called for equipping students with more “know-how.” Apparently, only one-half of young people had such “know-how.” Of course, a similar shortage of “know-how” is drawing headlines and shaping congressional debates today.

(2) “The qualities of high performance that today characterize our most competitive companies must become the standard for the vast majority of our companies, large and small, local and global.” Specifically, workers must become “comfortable with technology and complex systems, skilled as members of teams, and [passionate] for continuous learning.”

(3) “The nation’s schools must be transformed into high-performance organizations in their own right.” Notably, efforts at transformation were already underway but “a decade of reform efforts” had amounted to “little improvement.”

What there wasn’t a lot of was political prioritization and commitment: the kind of decades-long focus it would take to actually “transform the nation’s schools into high-performance organizations.” The Secretary’s report did not lead to a sustained overhaul of the nation’s educational infrastructure. School as of 1991 looks more or less like school as of 2026. Ambition, it turns out, is easy to articulate and hard to finance, defend, and sustain. Transforming just about anything, let alone something as resistant to change as our educational system, requires enormous political and financial resources expended over several years.

That’s a lesson we must heed today. Just about every actor in our political system is incentivized to think on two-year time horizons (if that). These conditions are not conducive to initiating and sticking with transformational projects. Such changes are only possible if the public champions these efforts, providing political cover to those who are willing to incur short-term losses for long-term gains.

So, will we update our educational and vocational infrastructure for the Age of AI?

It depends. Specifically, it depends on whether we can collectively muster the focus and persistence that successful political projects require, and whether we are willing to treat this challenge as more than another familiar talking point, the latest in a long line of reports that warned us, correctly, and were then quietly shelved.


Kevin Frazier is a Senior Fellow at the Abundance Institute, directs the AI Innovation and Law Program at the University of Texas School of Law, and is an Affiliated Research Fellow at the Cato Institute.

