We’re All Entrepreneurs Now: Learning, Pivoting, and Thriving in the Age of AI

Opinion


As AI reshapes the labor market, workers must think like entrepreneurs. Explore skills gaps, apprenticeships, and policy reforms shaping the future of work.

Getty Images, Maria Korneeva

What do a recent grad, a disenchanted employee, and a parent returning to the workforce all have in common? They’re each trying to determine which skills are in demand and how they can convince employers that they are competent in those fields. This is easier said than done.

Recent grads point to transcripts lined with As to persuade firms that they can add value. Firms, well aware of grade inflation, may scoff.


Disenchanted employees need to spend time training for the jobs of the future, perhaps by working toward a badge or credential. Firms are rightfully skeptical of those, too. After all, there are hundreds of thousands of certificates these days—it’s unclear which are meaningful.

Parents try to convince firms that they remain as skillful as ever by highlighting their earlier work. Here, again, firms may have some questions. A hiring manager may not shake the nagging feeling that extended time off the job has caused their skills to atrophy.

In a healthy, efficient labor market, it’d be possible for these workers to signal their skills and to find firms demanding such work. The aforementioned barriers all stand in the way of such a market. The introduction of AI makes this labor matching even more difficult. Firms don’t know which skills to seek out because it’s unclear what work will be completed by humans, human-AI teams, or just AI. Workers, too, are at a loss—hoping that the skills they seek to gain align with those demanded by firms over the long run.

In this market failure—when informational asymmetries prevent workers and firms from finding one another as cheaply and quickly as possible—it’s tempting to call on the government to step in. The thinking goes that the government can predict which skills will define the future and can set up programs for upskilling and retraining. This logic falters on the record of the many government officials who urged students to lean into computer science. While some firms may still demand individuals with such skills, early returns from the Age of AI suggest that demand is dropping.

What’s a student to do? How can someone finally leave their firm and find a better role? How can a mom or dad get back into the office and stay there?

The answer is simple and, perhaps, daunting: we’re all entrepreneurs now. We all must be attentive to market trends, adaptive to meaningful shifts in labor demand, and willing to work in novel and, at times, unpredictable environments. In short, the career ladder may be broken, but it’s been replaced by a career flywheel—studying when necessary, shadowing as a trainee, and working in flexible arrangements.

No one—including AI experts and government economists—can detail the specific set of skills that will result in high-paying work that supports families and sustains the American Dream. Everyone must be willing to take risks—diversifying, deepening, and shifting their skill sets to be as valuable in the labor market as possible.

Politicians ought not try to forecast those skills but rather sustain an entrepreneurial approach to skill development. Three proposals can help channel the necessary entrepreneurial energy without being too prescriptive.

First, emulate South Carolina’s success by encouraging firms to offer more apprenticeships. South Carolina has quietly built one of the most effective apprenticeship ecosystems in the country. Through its registered apprenticeship initiative, the state helps firms offset training costs, coordinate curriculum with community colleges, and design programs that respond to real production needs rather than abstract projections. The result is a pipeline that places people into paid roles while they learn—reducing the risk for workers and giving firms a chance to assess talent in real time.

Second, encourage the creation of skills-based evaluations by high schools and higher education institutions. Workers can better market their services—and firms can hire with more confidence—when skills are legible, portable, and comparable. Today’s degrees obscure more than they reveal. Grade inflation compresses distinctions, transcripts say little about applied competence, and employers are left guessing.

The Department of Education—and state equivalents—can help by issuing guidance that promotes competency-based transcripts, standardized skill taxonomies, and verified portfolios that document what students can actually do. It can also serve as an information clearinghouse, publishing data on which institutions and programs reliably produce particular skills and outcomes.

Third, reform—or abandon—New Deal employment laws with rigid classifications that hinder the ability of workers and firms alike to engage in creative, flexible, and mutually beneficial arrangements. The Fair Labor Standards Act, built for an economy of factory floors and fixed schedules, struggles to accommodate project-based work, part-time experimentation, and hybrid human-AI roles.

Its binary distinctions between employee and contractor, for one, discourage firms from offering flexible pathways and push workers into all-or-nothing choices. Modernizing these rules—by allowing more fractional work, clearer safe harbors, and updated definitions of hours and supervision—would expand opportunity without sacrificing baseline protections.

The future of work will not be handed down in a syllabus or codified in a regulation. It will be discovered—through trial, error, and adaptation—by people willing to build, learn, and pivot. Policy should not pretend to know which skills will win. It should instead clear the runway so more Americans can take off, test their wings, and land somewhere better.


Kevin Frazier is a Senior Fellow at the Abundance Institute, directs the AI Innovation and Law Program at the University of Texas School of Law, and is an Affiliated Research Fellow at the Cato Institute.

