AI leaves us no choice but to learn from the past

Opinion


Frazier is an assistant professor at the Crump College of Law at St. Thomas University. He previously clerked for the Montana Supreme Court.

Generations from now, historians will wonder why we chose to leave certain communities behind while allowing technology to race ahead. They'll point out that we had plenty of examples to learn from and, yet, prioritized "progress" over people.

The lessons we could have learned are tragically obvious. In the 1850s and '60s, the telegraph took off and people assumed it would serve a great civic purpose. Instead, Western Union captured the market, jacked up message rates and restricted the use of this powerful technology to the already powerful. Next, in the 1990s and 2000s, the internet inspired us all to dream of a more connected future. Instead, a digital divide has formed, leaving certain communities and individuals without meaningful access to an increasingly essential technology. Others could surely list similar examples.


Clear steps could have mitigated those outcomes. Generations of postmasters general called for increasing access to the telegraph network; Congress said it was too expensive. Likewise, for decades advocates have been calling on the government to invest in the infrastructure necessary to bring reliable, high-speed internet to every home; again, equal access to opportunity was deemed too costly.

So far, the introduction of artificial intelligence seems to fit this pattern: Despite its potential to benefit billions, it has been harnessed by those already benefiting from the last technological advance. And, while it's true that some AI use cases may have tremendous benefits for all, those benefits seem likely to flow first to those already financially secure and technologically savvy.

Reversing the historical trend of technological progress causing inequality to expand and become more entrenched isn't going to be cheap, but it's imperative if we want AI to live up to its potential.

First things first, we have to make sure all Americans have access to the internet. COVID-19 reminded us of the digital divide and, for a brief moment, led to massive government spending that helped increase access to educational, cultural and professional opportunities. That funding appears to be going the way of the dodo bird. President Joe Biden ought to insist on internet access being a core part of our national AI strategy. Unless access becomes a priority, we're bound to repeat a problematic past.

Second, the government should — at a minimum — nudge and — more appropriately — subsidize the development of AI models specifically addressing the needs and challenges facing communities that have traditionally been on the losing end of similar advances.

Third, AI labs should release annual societal impact statements. Such reports would give policymakers and the public a chance to evaluate whether the pros of AI advances really outweigh the cons.

All of this will cost money, require time and (likely) slow the rate of AI development and deployment. Nevertheless, it's an investment in our community and our collective potential. If any of the three steps above were pursued, my hunch is that history would celebrate a shift in our priorities from profit and "progress" to people and patience.

What’s clear is that we cannot afford to stick with the traditional playbook. Technology must always be viewed as a tool — one we can deploy, delay and ... gasp ... decide to forgo. That’s right — AI is not the solution to everything and AI should not be allowed to upend every aspect of our individual and collective affairs.

Learning from the past is dang hard. But there’s still time for us to redirect the future by reorienting our approach to AI in the present. For too long certain Americans have been digitally forgotten; AI has given us a chance to remind ourselves of the importance of aligning technology with the public interest.

