AI shouldn’t scare us – but fearmongering should

OpenAI logo on a screen
NurPhoto/Getty Images

Lee is a public interest technologist and researcher in the Boston area, and public voices fellow with The OpEd Project.

The company behind ChatGPT, OpenAI, recently started investigating claims that its artificial intelligence platform is getting lazier. Such shortcomings are a far cry from the firing and rehiring saga of OpenAI’s CEO, Sam Altman, last month. Pundits speculated that Altman’s initial ousting was due to a project called Q*, which – unlike ChatGPT – was able to solve grade-school arithmetic. Q* was seen as a step towards artificial general intelligence (AGI) and therefore a possible existential threat to humanity. I disagree.


As a technologist who has published research employing Q-learning and worked under one of its pioneers, I was dumbfounded to scroll through dozens of these outrageous takes. Q-learning, a decades-old algorithm from the branch of AI known as reinforcement learning (RL), is not new and is certainly not going to lead to the total destruction of humankind. Saying so is disingenuous and dangerous. Q*’s ability to solve elementary school equations says more about ChatGPT’s inability to do so than about any supposedly fearsome capabilities – which are on par with a calculator’s. Like the proverbial monster under the bed, humanity’s real threat is not AI – it’s the fearmongering around it.
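For readers curious what that decades-old algorithm actually looks like, here is a minimal, purely illustrative sketch of tabular Q-learning – the states, rewards, and constants below are invented for the example, not drawn from any real system:

```python
# Tabular Q-learning in a nutshell: one table of values, one update rule.
# Everything here (the toy 2-state, 2-action world, the constants) is hypothetical.

ALPHA = 0.1  # learning rate: how much each experience shifts the estimate
GAMMA = 0.9  # discount factor: how much future reward matters

# Q-table: Q[state][action] = estimated long-run value, initialized to zero
Q = {s: {a: 0.0 for a in (0, 1)} for s in (0, 1)}

def q_update(state, action, reward, next_state):
    """The core Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    """
    best_next = max(Q[next_state].values())
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

# A couple of updates on made-up transitions
q_update(0, 1, reward=1.0, next_state=1)  # Q[0][1] becomes 0.1
q_update(1, 0, reward=0.0, next_state=0)  # Q[1][0] becomes 0.009
```

That single update line is the whole trick: an estimate nudged toward observed reward, repeated many times. Powerful when engineered well, but an equation written by humans – not a nascent mind.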


The supposed existential threat of AI rests on the assumption that AI systems will become conscious and superintelligent – that is, that AI will become AGI. A fringe theory then claims that a conscious, superintelligent AGI could, whether through malevolence or by accident, kill us all. Proponents of this extreme view, adherents of a radical extension of utilitarianism known as longtermism, claim our ultimate imperative is therefore to head off “extinction threats” like AGI before they can annihilate humanity. If this sounds like a stretch of the imagination, it is.

This AI doomerism, espoused by people like OpenAI’s now former interim CEO, Emmett Shear, assumes that AGI is even a likely scenario. But as someone who has conducted research on cognition for over a decade, I’m not worried AI will become sentient. And AI experts, including one of the field’s pioneers, agree. An unbridgeable chasm remains between human-like performance and human-like understanding. Even if an AI system appears to produce human-like behavior, copying is not comprehension – a speaking parrot is still a parrot. Further, there are still many tasks requiring abstraction where even state-of-the-art AI models fall well short of human performance, and many aspects of human cognition, like consciousness, remain ineffable.

Heeding false alarms over killer AGI has real-world, present-day consequences. It skews otherwise valuable research priorities, deflects accountability for present harms, and distracts legislators from pushing for real solutions. Billions of dollars, university departments and whole companies have now pivoted to “AI safety.” By focusing on hypothetical threats, we neglect real ones like climate change, ironically likely sped up by the massive amounts of water used by servers running AI models. We ignore the ways marginalized communities are harmed right now by AI systems like automated hiring and predictive policing. We forget about ways to address these harms, like passing legislation to regulate tech companies and AI. And we entrench the power of the tech industry by embracing its chosen solution and excusing it from culpability for these harms.

When it comes to the mysterious Q*, I’m sure the addition of Q-learning will improve ChatGPT’s performance. After all, an ongoing – and thankfully less over-hyped – line of research already uses RL to improve large language models like ChatGPT: reinforcement learning from human feedback (RLHF). And a decade ago, RL helped train AI systems to play Atari games and beat the world champion of Go. These accomplishments were impressive, but they are engineering feats. At the end of the day, it’s precisely the current impacts of human-engineered systems that we need to worry about. The threats are not in the future; they’re in the now.

In “The Wizard of Oz,” the protagonists are awed by the powerful Oz, an intimidating mystical figure that towers over them physically and metaphorically throughout their journey. Much later, the ruse is revealed: The much-feared wizard was simply a small, old man operating a set of cranks and levers.

Don’t let the doomers distract you. Q-learning, as with the rest of AI, is not a fearful, mystical being – it’s just an equation set in code, written by humans. Tech CEOs would like you to buy into their faulty math and not the real implications of their current AI products. But their logic doesn’t add up. Instead, we urgently need to tackle real problems by regulating the tech industry, protecting people from AI technologies like facial recognition and providing meaningful redress from AI harms. That is what we really owe the future.
