Opinion

Main Street AI: AI for the People

An illustration of AI chat boxes. (Getty Images, Andriy Onufriyenko)

When Vice President J.D. Vance addressed the Paris AI Summit, he unknowingly made a strong case for public artificial intelligence (AI) infrastructure. His vision—of AI that empowers workers rather than displaces them, enables small businesses to compete with tech giants on a level playing field, and delivers benefits to all Americans—cannot be achieved through private industry alone. What's needed is nothing less than an AI equivalent of the interstate highway system: a nationwide network of computational resources, shared data, and technical expertise that democratizes access to this transformative technology.

The challenge is clear. The National AI Opinion Monitor reveals a stark digital divide in AI adoption: higher-income urban professionals increasingly leverage AI tools to enhance their productivity, while rural and lower-income Americans remain largely locked out of the AI economy. Without intervention, AI threatens to become another force multiplier for existing inequalities.


The solution lies in a federal-state partnership that brings AI capabilities to Main Street. Here's how "Main Street AI" could work:

The federal government would establish a $100 billion matching grant program over five years for states to build local AI capacity. States would qualify for funding by meeting specific criteria:

First, they must establish an AI infrastructure authority with a governing board that includes representatives from small businesses, labor organizations, educational institutions, and community groups. This ensures local stakeholders have a voice in determining how AI resources are deployed.

Second, states must commit to a minimum 30% match of federal funds and demonstrate a plan for the long-term sustainability of their respective AI organizations. The federal contribution would be structured on a sliding scale, with higher matching rates for rural states and those with lower per capita incomes; a rough numerical sketch of how such a scale might work follows the third criterion below.

Third, states must develop comprehensive plans for four core components: computational infrastructure, a data commons, workforce development, and energy resources. Given the substantial resources required to build each of these components, states could enter into regional compacts with their neighbors.
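
The proposal does not fix the sliding scale's exact numbers. Purely as an illustration of the second criterion, here is a minimal Python sketch of how a match formula might work; the income threshold, rural-share threshold, and rates are invented placeholders, not part of any actual program:

# Illustrative only: the proposal does not define the sliding scale.
# Thresholds and rates below are hypothetical placeholders.

def federal_match_rate(per_capita_income: float, rural_share: float) -> float:
    """Return the federal share of project costs; the state covers the rest,
    consistent with the minimum 30% state match described above."""
    rate = 0.60                       # baseline federal share
    if per_capita_income < 55_000:    # lower-income states get a boost
        rate += 0.05
    if rural_share > 0.40:            # predominantly rural states get a boost
        rate += 0.05
    return min(rate, 0.70)            # cap so states always contribute at least 30%

# Example: a rural, lower-income state qualifies for the maximum federal share.
print(federal_match_rate(per_capita_income=50_000, rural_share=0.55))  # 0.7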

The computational infrastructure component would create regional AI computing centers, typically housed at state universities or community colleges. These centers would provide cloud computing resources at subsidized rates to qualifying small businesses, researchers, and public agencies. Think of it as an AI library system, where local enterprises can "check out" computing power to develop and run their own AI applications.
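
To make the "library" analogy concrete, here is a minimal Python sketch of how a regional center might track subsidized compute that qualifying users check out from a shared pool; the class, its fields, and the GPU-hour accounting are hypothetical illustrations, not a description of any existing system:

# Hypothetical sketch of the "AI library" idea: qualifying users reserve
# blocks of subsidized GPU hours from a regional center's shared pool.
from dataclasses import dataclass, field

@dataclass
class RegionalComputeCenter:
    total_gpu_hours: int
    allocations: dict = field(default_factory=dict)  # user -> hours reserved

    def available(self) -> int:
        return self.total_gpu_hours - sum(self.allocations.values())

    def check_out(self, user: str, hours: int) -> bool:
        """Reserve subsidized compute for a qualifying business or researcher."""
        if hours <= self.available():
            self.allocations[user] = self.allocations.get(user, 0) + hours
            return True
        return False  # pool exhausted; the request waits for the next cycle

center = RegionalComputeCenter(total_gpu_hours=10_000)
center.check_out("dairy-cooperative", 500)
print(center.available())  # 9500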

The data commons would establish secure repositories of high-quality, annotated datasets relevant to local industries and challenges. A farming state might prioritize agricultural data for precision farming applications, while a coastal state might focus on climate and weather data for resilience planning. Residents would share this information with the understanding that resulting AI tools would be tailored to their needs and that the state would act as a responsible steward of their data.
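
As a rough illustration of what a catalog entry in such a data commons might record, consider the Python sketch below; the fields, dataset name, and access tiers are invented for the example, not an actual schema:

# Hypothetical sketch of a data-commons catalog entry; every field here is
# invented to illustrate the stewardship terms described above.
from dataclasses import dataclass

@dataclass
class DatasetEntry:
    name: str
    description: str
    steward: str            # the state authority accountable for the data
    contributors: str       # who supplied the underlying records
    access_tier: str        # e.g. "public", "qualified-applicants", "restricted"
    allowed_uses: list      # purposes contributors agreed to

corn_yields = DatasetEntry(
    name="county-corn-yields-2005-2024",
    description="Annotated per-county yield and soil-moisture records",
    steward="State AI Infrastructure Authority",
    contributors="Participating farm cooperatives",
    access_tier="qualified-applicants",
    allowed_uses=["precision-agriculture models", "drought-resilience planning"],
)
print(corn_yields.name, "-", corn_yields.access_tier)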

Workforce development programs would combine traditional computer science education with practical AI training. Community colleges would offer AI certification programs designed in partnership with local employers. Mobile training units would bring AI literacy programs to rural communities, ensuring that technological advancement doesn't leave anyone behind.

The energy component would incentivize the development of renewable and reliable power sources to support AI computing needs, addressing both environmental concerns and the substantial power requirements of AI systems.

Consider how this might work in practice. Take Wisconsin, where dairy farmers struggle to compete with industrial-scale operations. Through the state's AI infrastructure authority, a cooperative of small dairy farmers could access computing resources to develop AI systems for herd health monitoring and milk production optimization. The local data commons would provide historical agricultural data to train these systems, while workforce programs would train farmers and their employees to use and maintain the technology.
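
What might the cooperative's herd-monitoring tool actually compute? A deliberately simple Python sketch follows; the data fields, yield numbers, and threshold rule are illustrative assumptions, not a real diagnostic system:

# Hypothetical sketch: flag cows whose latest daily milk yield drops well
# below their own recent average, an early warning worth a veterinary check.
# The data, field names, and threshold are illustrative, not a real system.
from statistics import mean

def flag_for_checkup(daily_yields, drop=0.8):
    """Return cow IDs whose latest yield fell below drop x their trailing average."""
    flagged = []
    for cow_id, yields in daily_yields.items():
        if len(yields) < 2:
            continue
        baseline = mean(yields[:-1])     # average of all but the latest day
        if yields[-1] < drop * baseline:
            flagged.append(cow_id)
    return flagged

herd = {
    "cow-101": [28.0, 29.5, 27.8, 21.0],  # sharp drop, gets flagged
    "cow-102": [31.2, 30.8, 31.0, 30.5],  # stable, not flagged
}
print(flag_for_checkup(herd))  # ['cow-101']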

These aren't mere hypotheticals. Several states have already begun experimenting with similar initiatives on a smaller scale. In North Carolina, the North Carolina Biotechnology Center has established a pilot program providing AI resources to local biotechnology startups. In Georgia, Illinois, New York, Ohio, and Colorado, select community colleges will develop novel programs for students to learn critical AI skills thanks to Complete College America, a nonprofit focused on increasing postsecondary attainment across the U.S. In Oklahoma, 10,000 residents will go through an AI essentials course at no cost thanks to the state's support.

The federal program would accelerate, scale, and expand these efforts while ensuring that benefits reach beyond current tech hubs. By requiring states to meet specific criteria for funding, it would create accountability while allowing for local adaptation. The matching requirement would ensure state buy-in while the sliding scale would help level the playing field between wealthy and poor states.

This approach directly addresses the concerns Vice President Vance raised in Paris. It creates a pro-worker growth path by emphasizing augmentation over automation. It levels the playing field by giving small businesses access to resources currently monopolized by tech giants. It ensures all Americans benefit by embedding AI development within local communities and economies.

Critics might argue that this represents unnecessary government intervention in a thriving private market. But history shows that transformative technologies often require public investment to reach their full potential. The interstate highway system didn't eliminate private transportation companies—it created new opportunities for them while ensuring universal access to automotive transportation. Similarly, a public AI infrastructure wouldn't compete with private AI companies but would instead expand the market for AI applications while ensuring broader participation in the AI economy.

The question isn't whether America needs a public AI infrastructure—it's whether we'll build one before the opportunity for widespread AI development slips away. Vice President Vance has articulated the right goals. Now it's time for concrete action to achieve them.


Kevin Frazier is an Adjunct Professor at Delaware Law and an Emerging Technology Scholar at St. Thomas University College of Law.
