AI shouldn’t scare us – but fearmongering should


Lee is a public interest technologist and researcher in the Boston area, and a Public Voices Fellow with The OpEd Project.

The company behind ChatGPT, OpenAI, recently started investigating claims that its artificial intelligence platform is getting lazier. Such shortcomings are a far cry from the firing and rehiring saga of OpenAI’s CEO, Sam Altman, last month. Pundits speculated that Altman’s initial ousting was due to a project called Q*, which – unlike ChatGPT – was able to solve grade-school arithmetic. Q* was seen as a step towards artificial general intelligence (AGI) and therefore a possible existential threat to humanity. I disagree.


As a technologist who has published research employing Q-learning and worked under one of its pioneers, I was dumbfounded to scroll through dozens of these outrageous takes. Q-learning, a decades-old algorithm from the branch of AI known as “reinforcement learning” (RL), is not new and is certainly not going to lead to the total destruction of humankind. Saying so is disingenuous and dangerous. Q*’s ability to solve elementary school equations says more about ChatGPT’s inability to do so than about any supposedly fearsome capabilities – which are on par with a calculator’s. Like the proverbial monster under the bed, humanity’s real threat is not AI – it’s the fearmongering around it.
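To demystify the point: the core of Q-learning really is a single update equation, implementable in a few lines of ordinary code. Here is a minimal sketch on a toy problem – the three-state environment, reward scheme, and all parameter values are invented purely for illustration, not drawn from any OpenAI system:

```python
# Tabular Q-learning on a toy 3-state chain: the agent starts at state 0
# and earns reward 1 for reaching (or staying at) state 2.
# The entire "algorithm" is the one-line update marked below.
import random

n_states, n_actions = 3, 2        # actions: 0 = move left, 1 = move right
alpha, gamma, eps = 0.5, 0.9, 0.1 # learning rate, discount, exploration rate
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(s, a):
    """Move left or right along the chain; reward 1 for being at state 2."""
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

random.seed(0)
for _ in range(500):              # episodes
    s = 0
    for _ in range(10):           # steps per episode
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        a = random.randrange(n_actions) if random.random() < eps \
            else max(range(n_actions), key=lambda x: Q[s][x])
        s2, r = step(s, a)
        # The Q-learning update -- this single line is the whole method:
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy: which action looks best from each state.
policy = [max(range(n_actions), key=lambda x: Q[s][x]) for s in range(n_states)]
```

After training, the greedy policy moves right from every state – a correct but thoroughly unremarkable result, which is the point: this is ordinary engineering, not an incipient superintelligence.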

The supposed existential threat of AI is rooted in the assumption that AI systems will become conscious and superintelligent – i.e., that AI will become AGI. A fringe theory then claims a conscious, superintelligent AGI could, either through malevolence or by accident, kill us all. Proponents of this view, who subscribe to an extreme extension of utilitarianism known as longtermism, claim our ultimate imperative is thus to neutralize “extinction threats” like AGI in order to stave off the total annihilation of humanity. If this sounds like a stretch of the imagination, it is.

This AI doomerism, espoused by people like OpenAI’s now former interim CEO, Emmett Shear, assumes that AGI is even a likely scenario. But as someone who has conducted research on cognition for over a decade, I’m not worried AI will become sentient. And AI experts, including one of the field’s pioneers, agree. An unbridgeable chasm remains between human-like performance and human-like understanding. Even if an AI system appears to produce human-like behavior, copying is not comprehension – a speaking parrot is still a parrot. Further, there are still many tasks requiring abstraction where even state-of-the-art AI models fall well short of human performance, and many aspects of human cognition that remain ineffable, like consciousness.

Heeding false alarms over killer AGI has real-world, present-day consequences. It distorts otherwise valuable research priorities, deflects accountability for present harms, and distracts legislators from pushing for real solutions. Billions of dollars, university departments and whole companies have now pivoted to “AI safety.” By focusing on hypothetical threats, we neglect real ones like climate change – ironically, likely accelerated by the massive amounts of water consumed by the servers running AI models. We ignore the ways marginalized communities are already harmed by AI systems like automated hiring and predictive policing. We forget about ways to address these harms, like passing legislation to regulate tech companies and AI. And we entrench the power of the tech industry by focusing on its chosen solution and excusing it from culpability for these harms.

When it comes to the mysterious Q*, I’m sure the addition of Q-learning will improve ChatGPT’s performance. After all, an ongoing – and thankfully less over-hyped – line of research called reinforcement learning from human feedback (RLHF) already uses RL to improve large language models like ChatGPT. And a decade ago, RL helped train AI systems to play Atari games and beat the world champion of Go. These accomplishments were impressive, but they were engineering feats. At the end of the day, it’s precisely the current impacts of human-engineered systems that we need to worry about. The threats are not in the future; they’re in the now.

In “The Wizard of Oz,” the protagonists are awed by the powerful Oz, an intimidating mystical figure that towers over them physically and metaphorically throughout their journey. Much later, the ruse is revealed: The much-feared wizard was simply a small, old man operating a set of cranks and levers.

Don’t let the doomers distract you. Q-learning, like the rest of AI, is not a fearsome, mystical being – it’s just an equation set in code, written by humans. Tech CEOs would like you to buy into their faulty math rather than confront the real implications of their current AI products. But their logic doesn’t add up. Instead, we urgently need to tackle real problems by regulating the tech industry, protecting people from AI technologies like facial recognition and providing meaningful redress for AI harms. That is what we really owe the future.
