AI shouldn’t scare us – but fearmongering should

OpenAI logo on a screen
NurPhoto/Getty Images

Lee is a public interest technologist and researcher in the Boston area and a Public Voices Fellow with The OpEd Project.

The company behind ChatGPT, OpenAI, recently started investigating claims that its artificial intelligence platform is getting lazier. Such shortcomings are a far cry from the firing and rehiring saga of OpenAI’s CEO, Sam Altman, last month. Pundits speculated that Altman’s initial ousting was due to a project called Q*, which – unlike ChatGPT – was able to solve grade-school arithmetic. Q* was seen as a step towards artificial general intelligence (AGI) and therefore a possible existential threat to humanity. I disagree.

As a technologist who has published research employing Q-learning and worked under one of its pioneers, I was dumbfounded to scroll through dozens of these outrageous takes. Q-learning, a decades-old algorithm belonging to a branch of AI known as “reinforcement learning” (RL), is not new and is certainly not going to lead to the total destruction of humankind. Saying so is disingenuous and dangerous. The ability of Q* to solve elementary-school equations says more about ChatGPT’s inability to do so than about Q*’s supposedly fearsome capabilities – which are on par with a calculator’s. Like the proverbial monster under the bed, humanity’s real threat is not AI – it’s the fearmongering around it.
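
To see just how mundane the algorithm is, here is a minimal sketch of tabular Q-learning. The toy five-state “chain” environment, reward and hyperparameters are my own invention for illustration – nothing here comes from OpenAI’s Q*:

```python
# A minimal sketch of tabular Q-learning on a hypothetical five-state chain.
# The environment, reward and hyperparameters are illustrative assumptions.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration
ACTIONS = (1, -1)                        # step right or left along the chain
GOAL = 4                                 # reaching state 4 pays reward 1.0

Q = defaultdict(float)                   # Q[(state, action)] -> value estimate

def step(state, action):
    """Move along the chain, clipped to [0, GOAL]; reward 1.0 at the goal."""
    next_state = max(0, min(GOAL, state + action))
    return next_state, (1.0 if next_state == GOAL else 0.0)

for _ in range(200):                     # episodes
    state = 0
    for _ in range(100):                 # step cap so every episode ends
        # Epsilon-greedy: mostly follow current estimates, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        # The entire "mystical" algorithm is this one update:
        # Q(s,a) += alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state
        if state == GOAL:
            break

print(max(Q[(0, a)] for a in ACTIONS))   # learned value of the start state
```

The agent does nothing more than nudge a table of numbers toward the rewards it observes. Useful, decades old and about as menacing as a spreadsheet.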

The supposed existential threat of AI is rooted in the assumption that AI systems will become conscious and superintelligent – i.e., that AI will become AGI. A fringe theory then claims a conscious, superintelligent AGI could, either through malevolence or by accident, kill us all. Proponents of this view, who subscribe to an extreme extension of utilitarianism known as longtermism, claim our ultimate imperative is therefore to head off “extinction threats” like AGI and so prevent the total annihilation of humanity. If this sounds like a stretch of the imagination, it is.

This AI doomerism, espoused by people like OpenAI’s now-former interim CEO, Emmett Shear, assumes that AGI is even a likely scenario. But as someone who has conducted research on cognition for over a decade, I’m not worried AI will become sentient. And AI experts, including one of the field’s pioneers, agree. An unbridgeable chasm remains between human-like performance and human-like understanding. Even if an AI system appears to produce human-like behavior, copying is not comprehension – a speaking parrot is still a parrot. Further, many tasks requiring abstraction still leave even state-of-the-art AI models well short of human performance, and many aspects of human cognition, like consciousness, remain ineffable.

Heeding false alarms over killer AGI has real-world, present-day consequences. It distorts otherwise valuable research priorities, deflects accountability for present harms and distracts legislators from pushing for real solutions. Billions of dollars, university departments and whole companies have now pivoted to “AI safety.” By focusing on hypothetical threats, we neglect real ones like climate change – ironically, likely accelerated by the massive amounts of energy and water consumed by the servers running AI models. We ignore the ways marginalized communities are harmed right now by AI systems like automated hiring and predictive policing. We forget about ways to address these harms, like passing legislation to regulate tech companies and AI. And we entrench the power of the tech industry by focusing on its chosen solution and excusing it from culpability for those harms.

When it comes to the mysterious Q*, I’m sure the addition of Q-learning will improve ChatGPT’s performance. After all, a thankfully less over-hyped line of research already uses RL to improve large language models like ChatGPT: reinforcement learning from human feedback (RLHF). And a decade ago, RL helped train AI systems to play Atari games and beat the world champion of Go. Those accomplishments were impressive, but they were engineering feats. At the end of the day, it’s precisely the current impacts of human-engineered systems we need to worry about. The threats are not in the future – they’re in the now.
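
For readers curious what that research looks like, below is a minimal, hypothetical sketch of the preference step at the core of RLHF reward modeling: a reward model is trained so that the response a human rater preferred scores higher than the one they rejected. The function and the example scores are my own illustration of the standard Bradley-Terry formulation, not OpenAI’s implementation:

```python
# A hypothetical sketch of the RLHF preference loss (Bradley-Terry form).
# The function name and the scores below are illustrative assumptions.
import math

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """-log(sigmoid(score_chosen - score_rejected)).
    Small when the reward model rates the human-preferred response higher."""
    return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

# A reward model scores two candidate answers; training lowers this loss,
# pushing the model to agree with human raters.
print(preference_loss(2.3, 0.7))  # ~0.18: model already agrees with the rater
print(preference_loss(0.7, 2.3))  # ~1.78: model disagrees, strong correction
```

Like Q-learning itself, it is a small, human-written equation – powerful in aggregate, but not magic.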

In “The Wizard of Oz,” the protagonists are awed by the powerful Oz, an intimidating mystical figure that towers over them physically and metaphorically throughout their journey. Much later, the ruse is revealed: The much-feared wizard was simply a small, old man operating a set of cranks and levers.

Don’t let the doomers distract you. Q-learning, like the rest of AI, is not a fearful, mystical being – it’s just an equation set in code, written by humans, as the sketch above shows. Tech CEOs would like you to buy into their faulty math and ignore the real implications of their current AI products. But their logic doesn’t add up. Instead, we urgently need to tackle real problems by regulating the tech industry, protecting people from AI technologies like facial recognition and providing meaningful redress for AI harms. That is what we really owe the future.
