An AI future worth building

Opinion


Coral is vice president of technology and democracy programs and head of the Open Technology Institute at New America. She is a public voices fellow with The OpEd Project.

2023 was the year of artificial intelligence, but much of the discussion centered on extremes – the possibility of extinction versus the opportunity to exceed human capacity. Reshma Saujani, the founder of Girls Who Code, suggests that we don’t have to choose between ethical AI and innovative AI, and that if we focus solely on fear, that just might be the AI future we get. So how do we foster an AI future worth building?

In some ways, innovations like ChatGPT represent uncharted territory in the realm of technology. Having worked at the intersection of government and public interest technology for nearly 20 years, I know that AI is not new, and the past year’s intense focus mirrors previous digital tech waves. But I would offer that as we think about how AI evolves, there are three important lessons from the past that we should consider in order to properly harness the benefits of this technology for the public good.


The first lesson serves as a clear warning: Timelines are often detached from the technology's true readiness. Just as with autonomous vehicles and commercial Big Data initiatives, industry-set transformation timelines are often prematurely optimistic, driven by investors' desire to scale. That pressure drives rapid deployment without adequate social deliberation and scrutiny, jeopardizing safety. We’ve seen the impacts on the road and in cities, and with AI we’re seeing the exponential growth of nonconsensual images and deepfakes online.

Second, these technologies have lacked go-to-market strategies, and that absence has undercut their ability to scale. They eventually stalled in funding and development, in part, I would argue, because they lacked a clear public value. While we can marvel at the idea of being picked up by an autonomous car or navigating a “smart city,” all of these technologies need paying customers. Government procurement cycles failed to transform cities into data-driven metropolises of the future, and AVs are too expensive for the average driver. OpenAI has only just released a business version of ChatGPT, and its pricing is not public. The monetization strategies for these tools are still in development.

During my tenure at the Knight Foundation, we invested $5.25 million to support public engagement in cities where autonomous vehicles were being deployed, to understand sentiment and engage communities on the technology. Demonstrations and community engagement were essential to addressing the public’s skepticism and sparking curiosity. What was eye-opening to me was that no matter how complex the technology, communities could envision beneficial use cases and public value. But their vision differed from technologists’ and investors’ priorities, as in the case of autonomous delivery technologies. Bridging this gap can speed up adoption.

Lastly, widespread adoption of AI is unlikely without the proper infrastructure. A recently released peer-reviewed analysis showed that by 2027, AI servers may use as much electricity annually as Argentina. Such a massive amount of energy will undoubtedly raise concerns about AI’s impact on the environment, but it also calls into question our capacity to meet the moment. Additionally, AI requires fast internet. The United States has only just begun to roll out $42.5 billion in funding to expand high-speed internet access and finally close the digital divide. If we care about equity, we must ensure that everyone has access to the fast internet they need to benefit from AI.

To be sure, every technological advance is different, so we cannot expect past waves like Smart Cities or autonomous vehicles to predict exactly how AI will evolve. But looking to history is important, because it often repeats itself, and many of the issues those earlier technologies encountered will come into play with AI, too.

To scale AI responsibly, fast, affordable internet is crucial, yet almost 20 percent of Americans are currently left out. Congress can act by renewing programs for affordable internet access and ensuring that Bipartisan Infrastructure Law investments align with an AI future. The public value of AI will not come from investor interests alone. While most Americans are aware of ChatGPT, only one in five have actually used it. We need proactive engagement from all stakeholders – governments, civil society and private enterprises – to shape an AI future that brings tangible benefits to all. True public engagement, especially with marginalized communities, will be key to ensuring that the full extent of unintended consequences is explored. No one can speak to the impact of AI on a particular community better than the people affected, and we have to get better at engaging on the ground.

Some of the greatest value of AI lies in applications and services that can augment skills, productivity and innovation for the public good. Not only digital access but also digital readiness is essential to harness these benefits. Congress can mandate that federal agencies invest in initiatives supporting digital readiness, particularly for youth, workers and people with accessibility challenges.

But there is no need to rush.

By taking a cue from historical tech advances, like Smart Cities and autonomous vehicles, we can usher in an AI revolution that evolves equitably and sets a precedent for technological progress done right. Only then can we truly unlock the transformative power of AI and create a brighter, more inclusive future for all.
