An AI future worth building


Coral is vice president of technology and democracy programs and head of the Open Technology Institute at New America. She is a public voices fellow with The OpEd Project.

2023 was the year of artificial intelligence, but much of the discussion centered on extremes – the possibility of extinction versus the opportunity to exceed human capacity. Reshma Saujani, the founder of Girls Who Code, suggests that we don’t have to choose between ethical AI and innovative AI, and that if we focus solely on fear, that just might be the AI future we get. So how do we foster an AI future worth building?

In some ways, innovations like ChatGPT represent uncharted territory in the realm of technology. But having worked at the intersection of government and public interest technology for nearly 20 years, I know that AI is not new, and the past year’s intense focus mirrors previous waves of digital technology. As we think about how AI evolves, I would offer three important lessons from the past that we should consider in order to properly harness the benefits of this technology for the public good.


The first lesson serves as a clear warning: timelines are often detached from a technology's true readiness. As with autonomous vehicles and commercial Big Data initiatives, industry-set transformation timelines are often prematurely optimistic, driven by investors' desire to scale. That pressure drives rapid deployment without adequate social deliberation and scrutiny, jeopardizing safety. We’ve seen the impacts on the road and in cities, and with AI we’re seeing exponential growth of nonconsensual images and deepfakes online.


Second, these technologies have lacked go-to-market strategies, which has undercut their ability to scale. They eventually stalled in funding and development, in part, I would argue, because they lacked a clear public value. While we can marvel at the idea of being picked up by an autonomous car or navigating a “smart city,” all of these technologies need paying customers. Government procurement cycles failed to transform cities into the data-driven metropolises of the future, and AVs remain too expensive for the average driver. OpenAI has only just released a business version of ChatGPT, and its pricing is not public. The monetization strategies for these tools are still in development.

During my tenure at the Knight Foundation, we invested $5.25 million to understand public sentiment and engage communities in cities where autonomous vehicles were being deployed. Demonstrations and community engagement were essential to addressing the public’s skepticism and sparking curiosity. What was eye-opening to me was that, no matter how complex the technology, communities could envision beneficial use cases and public value. But their vision often differed from the priorities of technologists and investors, as in the case of autonomous delivery technologies. Bridging this gap can speed up adoption.

Lastly, widespread adoption of AI is unlikely without the proper infrastructure. A recently released peer-reviewed analysis showed that by 2027, AI servers may use as much electricity annually as Argentina. Such a massive amount of energy will undoubtedly raise concerns about AI’s impact on the environment, but it also calls into question our capacity to meet the moment. Additionally, AI requires fast internet. The United States has only just begun to roll out $42.5 billion in funding to expand high-speed internet access so that we can finally close the digital divide. If we care about equity, we must ensure that everyone has access to the fast internet they need to benefit from AI.

To be sure, every technological advance is different, so we cannot expect historical examples, like Smart Cities or autonomous vehicles, to fully predict how AI will evolve. But looking to history is important, because it often repeats itself, and many of the issues encountered by earlier technologies will come into play with AI, too.

To scale AI responsibly, fast, affordable internet is crucial, but almost 20 percent of Americans are currently left out. Congress can take action by renewing programs for affordable internet access and ensuring that Bipartisan Infrastructure Law investments align with an AI future. AI’s public value can also be enhanced by not relying solely on investor interests. While most Americans are aware of ChatGPT, only one in five have actually used it. We need proactive engagement from all stakeholders – including governments, civil society and private enterprises – to shape the AI future in ways that bring tangible benefits to all. True public engagement, especially with marginalized communities, will be key to ensuring that the full extent of unintended consequences is explored. No group can speak to AI’s impact on a particular community better than the people affected, and we have to get better at engaging on the ground.

Some of the greatest value of AI lies in applications and services that can augment skills, productivity and innovation for the public good. Not only digital access but also digital readiness is essential to harness these benefits. Congress can mandate that federal agencies invest in initiatives supporting digital readiness, particularly for youth, workers and those with accessibility challenges.

But there is no need to rush.

By taking a cue from historical tech advances, like Smart Cities and autonomous vehicles, we can usher in an AI revolution that evolves equitably and sets a precedent for technological progress done right. Only then can we truly unlock the transformative power of AI and create a brighter, more inclusive future for all.
