An AI Spark Worth Spreading


In the rapidly evolving landscape of artificial intelligence, policymakers face a delicate balancing act: fostering innovation while addressing legitimate concerns about AI's potential impacts. Representative Michael Keaton’s proposed HB 1833, also known as the Spark Act, represents a refreshing approach to this challenge—a bill that Washington legislators would be right to pass and other states would be wise to consider.

As the AI Innovation and Law Fellow at the University of Texas at Austin School of Law, I find the Spark Act particularly promising. By establishing a grant program through the Department of Commerce to promote innovative uses of AI, Washington's legislators have a chance to act on a fundamental truth: technological diffusion is essential to a dynamic economy, widespread access to opportunity, and the inspiration of future innovation.


The history of technological advancement in America reveals a consistent pattern. When new technologies remain concentrated in the hands of a few, their economic and social benefits remain similarly concentrated. On the other hand, when technological tools become widely available—as happened with personal computers in the 1980s and internet access from the 1990s onward (though too many remain on the wrong side of the digital divide)—we witness explosive growth in unexpected innovations and broader economic participation.

HB 1833 wisely prioritizes several key elements that deserve particular commendation. The bill's emphasis on ethical AI use, risk analysis, small business participation, and statewide impact reflects a nuanced understanding of how to foster responsible innovation. By requiring applicants to share their technology with the state and demonstrate a clear public benefit, the program ensures that taxpayer investments yield broader societal returns.

The involvement of Washington's AI task force in identifying state priorities further strengthens the approach. This collaborative model between government, industry, and presumably academia creates a framework for ongoing dialogue about AI development—a far more productive approach than imposing rigid restrictions based on speculative concerns.

While regulatory frameworks for AI are necessary and inevitable, premature or excessive regulation risks several negative consequences. First, burdensome compliance costs disproportionately impact startups and smaller labs, potentially cementing the dominance of tech giants who can easily absorb these expenses. This would ironically undermine the competitive marketplace that effective regulation aims to protect.

Second, regulatory approaches that begin from a place of suspicion rather than a balanced assessment may perpetuate unfounded negative perceptions of AI. Public discourse already tends toward dystopian narratives that overshadow AI's transformative potential in healthcare, environmental protection, education, and accessibility. Policy should be informed by a complete picture—acknowledging risks while recognizing benefits.

Washington's approach appears to recognize what history has repeatedly demonstrated: innovation rarely follows predictable paths. The personal computer, the internet, and smartphones all produced applications and implications that their early developers could never have anticipated. By creating space for experimentation while establishing guardrails around ethical use and risk assessment, the Spark Act creates a framework for responsible innovation.

Other states considering AI policy would do well to study Washington's example. Rather than racing to implement restrictive regulations that may quickly become obsolete or counterproductive, states can establish programs that promote innovation while gathering the practical experience necessary to inform more targeted regulation where it is truly needed.

The technological transformation unfolding before us holds tremendous promise for addressing long-standing societal challenges—but only if we resist the urge to stifle it before it has the chance to develop. Washington's legislators deserve recognition for charting a path that neither ignores legitimate concerns nor sacrifices the potential benefits of AI advancement.

In the coming years, the states that thrive economically will likely be those that find this balance—creating frameworks that promote responsible AI innovation while ensuring its benefits are widely shared. The Spark Act represents a promising step in that direction, one that merits both our attention and our support. The Senate should follow the House's lead in passing this important piece of legislation.

Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and author of the Appleseed AI Substack.
