AI shouldn’t scare us – but fearmongering should

Opinion

Image: OpenAI logo on a screen (NurPhoto/Getty Images)

Lee is a public interest technologist and researcher in the Boston area, and a Public Voices Fellow with The OpEd Project.

The company behind ChatGPT, OpenAI, recently started investigating claims that its artificial intelligence platform is getting lazier. Such shortcomings are a far cry from the firing and rehiring saga of OpenAI’s CEO, Sam Altman, last month. Pundits speculated that Altman’s initial ousting was due to a project called Q*, which – unlike ChatGPT – was able to solve grade-school arithmetic. Q* was seen as a step towards artificial general intelligence (AGI) and therefore a possible existential threat to humanity. I disagree.


As a technologist who has published research employing Q-learning and worked under one of its pioneers, I was dumbfounded to scroll through dozens of these outrageous takes. Q-learning, a decades-old algorithm belonging to a branch of AI known as “reinforcement learning” (RL), is not new and is certainly not going to lead to the total destruction of humankind. Saying so is disingenuous and dangerous. Q*’s ability to solve elementary school equations says more about ChatGPT’s inability to do so than about any supposedly fearsome capabilities – which are on par with a calculator’s. Like the proverbial monster under the bed, humanity’s real threat is not AI – it’s the fearmongering around it.
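To make concrete just how un-mystical this decades-old algorithm is, here is a minimal sketch of tabular Q-learning on a toy corridor environment of my own invention (nothing here comes from OpenAI or Q*). The entire "algorithm" is the single arithmetic update applied to the Q-table inside the loop.

```python
# A minimal sketch of tabular Q-learning on a hypothetical 5-state corridor:
# the agent earns a reward of 1 for reaching the rightmost state, then resets.
import random

n_states, n_actions = 5, 2            # states 0..4; actions: 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(state, action):
    """Move left or right; reaching the last state pays reward 1 and resets."""
    next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    if next_state == n_states - 1:
        return 0, 1.0                 # reward collected, reset to the start
    return next_state, 0.0

state = 0
for _ in range(10_000):
    # Epsilon-greedy action selection: mostly exploit, occasionally explore.
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # The entire "algorithm": one line of arithmetic (Watkins, 1989).
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state

print(Q)  # "right" ends up valued higher than "left" in every state
```

That update rule, iterated many times, is all there is to it: no consciousness, no intent, just a table of numbers converging toward the expected value of each action.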

The supposed existential threat of AI is rooted in the assumption that AI systems will become conscious and superintelligent – i.e., that AI will become AGI. A fringe theory then claims a conscious, superintelligent AGI could, either through malevolence or by accident, kill us all. Proponents of this view, who subscribe to an extreme extension of utilitarianism known as longtermism, claim our ultimate imperative is thus to head off “extinction threats” like AGI before they can annihilate humanity. If this sounds like a stretch of the imagination, it is.

This AI doomerism, espoused by people like OpenAI’s now-former interim CEO, Emmett Shear, assumes that AGI is even a likely scenario. But as someone who has conducted research on cognition for over a decade, I’m not worried that AI will become sentient. And AI experts, including one of the field’s pioneers, agree. A chasm remains between human-like performance and human-like understanding, and it cannot be bridged. Even if an AI system appears to produce human-like behavior, copying is not comprehension – a speaking parrot is still a parrot. Further, there are still many tasks requiring abstraction where even state-of-the-art AI models fall well short of human performance, and many aspects of human cognition, like consciousness, that remain ineffable.

Heeding false alarms over killer AGI has real-world, present-day consequences. It diverts otherwise valuable research priorities, deflects accountability for present harms, and distracts legislators from pushing for real solutions. Billions of dollars, university departments and whole companies have now pivoted to “AI safety.” By focusing on hypothetical threats, we neglect real threats like climate change, which is likely accelerated, ironically, by the massive amounts of energy and water consumed by the servers running AI models. We ignore the ways marginalized communities are currently harmed by AI systems like automated hiring and predictive policing. We forget about ways to address these harms, like passing legislation to regulate tech companies and AI. And we entrench the power of the tech industry by focusing on its chosen solution and excusing it from culpability for these harms.

When it comes to the mysterious Q*, I’m sure the addition of Q-learning will improve ChatGPT’s performance. After all, an ongoing and thankfully less over-hyped line of research called reinforcement learning from human feedback (RLHF) already uses RL to improve large language models like ChatGPT. And a decade ago, RL helped train AI systems to play Atari games and beat the world champion of Go. Those accomplishments were impressive, but they were feats of engineering. At the end of the day, it’s precisely the current impacts of human-engineered systems that we need to worry about. The threats are not in the future; they’re in the now.
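For readers curious what “using RL to improve a language model” can look like, here is a deliberately toy, hypothetical sketch of the core RLHF idea: a reward model standing in for human preferences scores candidate responses. Every function below is a stand-in I invented for illustration; real RLHF pipelines train the model’s weights with policy-gradient methods such as PPO rather than simply picking the best sample.

```python
# A highly simplified, hypothetical sketch of the RLHF idea. Both functions
# below are invented stand-ins, not any real model or API.
def reward_model(prompt: str, response: str) -> float:
    """Stand-in for a model trained on human preference comparisons."""
    return float(len(response) < 100 and "sorry" not in response)  # toy heuristic

def sample_responses(prompt: str, n: int) -> list[str]:
    """Stand-in for sampling n candidate responses from a language model."""
    return [f"candidate answer {i} to: {prompt}" for i in range(n)]

prompt = "What is 7 * 8?"
candidates = sample_responses(prompt, 4)
scores = [reward_model(prompt, r) for r in candidates]
best = candidates[scores.index(max(scores))]
# In actual RLHF, these scores would drive a gradient update of the model's
# weights, rather than a simple argmax over a handful of samples.
print(best)
```

Again, nothing mystical: a learned scoring function nudging a text generator toward outputs people prefer.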

In “The Wizard of Oz,” the protagonists are awed by the powerful Oz, an intimidating mystical figure that towers over them physically and metaphorically throughout their journey. Much later, the ruse is revealed: The much-feared wizard was simply a small, old man operating a set of cranks and levers.

Don’t let the doomers distract you. Q-learning, like the rest of AI, is not a fearsome, mystical being – it’s just an equation set in code, written by humans. Tech CEOs would like you to buy into their faulty math rather than reckon with the real implications of their current AI products. But their logic doesn’t add up. Instead, we urgently need to tackle real problems by regulating the tech industry, protecting people from AI technologies like facial recognition and providing meaningful redress for AI harms. That is what we really owe the future.
