Education is Key to Winning the AI Revolution

Opinion


Two young students engaging in STEM studies. (Getty Images, Kmatta)

As the Department of Education faces rounds of layoffs and threats of dissolution, prompted by the Department of Government Efficiency (DOGE), it is urgent to rethink and rededicate efforts to strengthen, broaden, and enhance STEM education from early childhood through post-secondary programs.

To realize the promise of an AI-driven future, technology and education leaders must close the persistent gap between the supply of and demand for highly skilled technical workers in the U.S.


This follows a burst of recent activity: Elon Musk announcing the launch of the latest version of his company xAI's Grok model, South Korea banning downloads of DeepSeek, and President Donald Trump's promise of the $500 billion Stargate Project to create thousands of U.S. jobs. The urgent importance of AI leadership for this country is undeniable.

While some experts focus on the potential job losses associated with the integration of AI tools, it is encouraging that the promise of Stargate and similar projects recognizes that people will be the engine of the new economy. Realizing that promise, however, requires urgently building the human infrastructure to support this future work.

The Bureau of Labor Statistics reports a projected job growth in the U.S. for information security analysts of 33 percent from 2023 to 2033, with nearly 181,000 jobs in this field in 2023. In 2024, there were reportedly 457,433 openings “requesting cybersecurity-related skills,” CyberSeek reports, with 83 qualified workers for every 100 jobs. These job numbers are indicative of the larger tech workforce.

During his first term, Trump established the Presidential Cybersecurity Education Award in 2019 under his Executive Order on America’s Cybersecurity Workforce. The U.S. Department of Education administers this award that honors the work of primary and secondary educators who are preparing students to effectively navigate a cyber-enabled world.

Even as the administration talks of dismantling and distributing federal education dollars under the Department of Education to state houses, it is necessary to maintain a unified standard for STEM education. American competitiveness requires that all students who will comprise the workforce and will lead the nation forward have the strategic skills and competency to innovate in the future. It is not sufficient to simply leave the future to chance.

Rather, the Department of Education needs to remain in place to establish the framework for digital sciences and technological advancement, delivering a unified message and guidance on AI that makes cybersecurity, and technology broadly, a national priority.

Federal and state policymakers, educators, advocates, and tech leaders must guard against the propensity for individual states to set different standards that may unduly disadvantage some students. STEM education from primary through higher education must have national policies to make sure there is a level of consistency across states.

In 2023, the White House released the National Cyber Workforce and Education Strategy, outlining objectives, steps, and outcomes for resources, training, recruiting, retention, and advancement of the U.S. cyber economy. Updated last year, the strategy calls for lifelong investment in cyber skills, leading to a citizenry equipped with digital literacy and computational skills. This is the ideal approach, and it needs to be carried through.

Workforce developers must also take full advantage of programs to upskill and reskill existing employees as they leverage internal labor markets to fulfill workforce needs.

Recent workforce studies point to a lack of supply. However, some experts question the nature of the need: there is an oversupply of candidates for some roles, an undersupply for others, and a disconnect between the expectations of employers and candidates. Employers must ask whether the talent pool is genuinely weak or whether they are seeking over-credentialed candidates, setting unrealistic requirements that new employees cannot easily fulfill.

The barriers to a robust talent pool for a competent cybersecurity workforce include insufficient resources in education from primary to secondary to higher education, potential restrictions on H-1B visas, and new policies on diverse candidate hiring.

Cybersecurity is a rapidly growing field, with the global market valued at $190.4 billion and expected to reach $248.5 billion by 2028, research shows. Despite decades of work to produce a workforce of sufficient quality and quantity, our own research shows that positions continue to go unfilled.

To succeed in the evolving cybersecurity workforce, and in the broader tech workforce, individuals need to construct arguments, conduct research, analyze data, experiment, think critically, and employ scientific reasoning so they can adapt as the skills they need change.

An innovative and creative future tech workforce depends on a community of critical thinkers with varying points of view, experiences, backgrounds, and voices. When expertise and intellectual knowledge come under assault because of identities of race, gender, or ability, the value assigned to individuals becomes less about what they know and more about who they represent.

As executive director of the Shahal M. Khan Cyber and Economic Security Institute at American University, I see firsthand the responsibility of training the future tech workforce with a fair and just path of entry, growth, and advancement. This mission goes beyond politics and transcends the term limits of any administration.

The U.S. is certainly among the top global leaders in the practice of cybersecurity and digital innovation in terms of education, policy development, and implementation. America is expected to generate the most revenue globally in cybersecurity by the end of 2025, with a sum of $88.25 billion.

U.S. tech employment is projected to grow from 6 million jobs in 2024 to 7.1 million jobs in 2034, according to the Computing Technology Industry Association's 2024 State of the Tech Workforce.

With new projects emerging, the possibilities seem limitless. The time to educate for the future is now.

Diana L. Burley, PhD, is Vice Provost for Research and Innovation, Professor of Public Administration, and Executive Director of the Khan Institute for Cyber and Economic Security at American University.
