AI shouldn’t scare us – but fearmongering should


Lee is a public interest technologist and researcher in the Boston area, and public voices fellow with The OpEd Project.

The company behind ChatGPT, OpenAI, recently started investigating claims that its artificial intelligence platform is getting lazier. Such shortcomings are a far cry from the firing and rehiring saga of OpenAI’s CEO, Sam Altman, last month. Pundits speculated that Altman’s initial ousting was due to a project called Q*, which – unlike ChatGPT – was able to solve grade-school arithmetic. Q* was seen as a step towards artificial general intelligence (AGI) and therefore a possible existential threat to humanity. I disagree.


As a technologist who has published research employing Q-learning and worked under one of its pioneers, I was dumbfounded to scroll through dozens of these outrageous takes. Q-learning, a decades-old algorithm belonging to a branch of AI known as “reinforcement learning” (RL), is not new and is certainly not going to lead to the total destruction of humankind. Saying so is disingenuous and dangerous. The ability of Q* to solve elementary school equations says more about ChatGPT’s inability to do so than about Q*’s supposedly fearsome capabilities – which are on par with a calculator. Like the proverbial monster under the bed, humanity’s real threat is not AI – it’s the fearmongering around it.

The supposed existential threat of AI is rooted in the assumption that AI systems will become conscious and superintelligent – i.e., that AI will become AGI. A fringe theory then claims a conscious, superintelligent AGI could, either through malevolence or by accident, kill us all. Proponents of this view, who subscribe to an extreme extension of utilitarianism known as longtermism, claim our ultimate imperative is to prevent “extinction threats” like AGI and thereby save humanity from annihilation. If this sounds like a stretch of the imagination, it is.

This AI doomerism, espoused by people like OpenAI’s now former interim CEO, Emmett Shear, assumes that AGI is even a likely scenario. But as someone who has conducted research on cognition for over a decade, I’m not worried AI will become sentient. And AI experts, including one of the pioneers, agree. A chasm remains that cannot be bridged between human-like performance and human-like understanding. Even if an AI system appears to produce human-like behavior, copying is not comprehension – a speaking parrot is still a parrot. Further, there are still many tasks requiring abstraction where even state-of-the-art AI models fall well short of human performance, and many aspects of human cognition that remain ineffable, like consciousness.

Heeding false alarms over killer AGI has real-world, present-day consequences. It shifts otherwise valuable research priorities, avoids accountability for present harms, and distracts legislators from pushing for real solutions. Billions of dollars, university departments and whole companies have now pivoted to “AI safety.” By focusing on hypothetical threats, we forgo real threats like climate change, ironically likely sped up by the massive amounts of water used by servers running AI models. We ignore the ways marginalized communities are currently harmed by AI systems like automated hiring and predictive policing. We forget about ways to address these harms, like passing legislation to regulate tech companies and AI. And we entrench the power of the tech industry by focusing on its chosen solution and excusing it from culpability for these harms.

When it comes to the mysterious Q*, I’m sure the addition of Q-learning will improve ChatGPT’s performance. After all, an ongoing – and thankfully less over-hyped – line of research already uses RL to improve large language models like ChatGPT: reinforcement learning from human feedback (RLHF). And a decade ago, RL helped train AI systems to play Atari games and beat the world champion of Go. These accomplishments were impressive, but they were engineering feats. At the end of the day, it’s precisely the current impacts of human-engineered systems that we need to worry about. The threats are not in the future, they’re in the now.

In “The Wizard of Oz,” the protagonists are awed by the powerful Oz, an intimidating mystical figure that towers over them physically and metaphorically throughout their journey. Much later, the ruse is revealed: The much-feared wizard was simply a small, old man operating a set of cranks and levers.

Don’t let the doomers distract you. Q-learning, as with the rest of AI, is not a fearful, mystical being – it’s just an equation set in code, written by humans. Tech CEOs would like you to buy into their faulty math and not the real implications of their current AI products. But their logic doesn’t add up. Instead, we urgently need to tackle real problems by regulating the tech industry, protecting people from AI technologies like facial recognition and providing meaningful redress from AI harms. That is what we really owe the future.
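To make that point concrete: the entire mechanism of tabular Q-learning fits in a few lines. The sketch below is a minimal illustration of the textbook update rule only – the toy states, actions, rewards, and hyperparameter values are my own assumptions for demonstration and have nothing to do with Q* or any OpenAI system.

```python
# Tabular Q-learning in miniature: one update equation, applied repeatedly.
ACTIONS = ["left", "right"]

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One Bellman update: nudge the estimate Q(s, a) toward
    reward + gamma * (best estimated value of the next state)."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# Toy scenario (hypothetical): from state 0, moving "right" reaches
# state 1 and earns a reward of 1.0. Repeated updates converge toward
# that value -- no mysticism, just arithmetic.
q_table = {}
for _ in range(100):
    q_update(q_table, 0, "right", 1.0, 1)

print(round(q_table[(0, "right")], 2))  # approaches 1.0
```

That loop – an exponentially weighted average pulling an estimate toward observed reward – is the “decades-old algorithm” in question.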
