AI shouldn’t scare us – but fearmongering should

Lee is a public interest technologist and researcher in the Boston area and a Public Voices Fellow with The OpEd Project.

The company behind ChatGPT, OpenAI, recently started investigating claims that its artificial intelligence platform is getting lazier. Such shortcomings are a far cry from the firing and rehiring saga of OpenAI’s CEO, Sam Altman, last month. Pundits speculated that Altman’s initial ousting was due to a project called Q*, which – unlike ChatGPT – was able to solve grade-school arithmetic. Q* was seen as a step towards artificial general intelligence (AGI) and therefore a possible existential threat to humanity. I disagree.


As a technologist who has published research employing Q-learning and worked under one of its pioneers, I was dumbfounded to scroll through dozens of these outrageous takes. Q-learning, a decades-old algorithm from a branch of AI known as reinforcement learning (RL), is not new and is certainly not going to lead to the total destruction of humankind. Saying so is disingenuous and dangerous. The ability of Q* to solve elementary school equations says more about ChatGPT’s inability to do so than about Q*’s supposedly fearsome capabilities – which are on par with a calculator’s. Like the proverbial monster under the bed, humanity’s real threat is not AI – it’s the fearmongering around it.
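
To see just how unmysterious the method is, consider the core of tabular Q-learning, sketched below in Python. This is a minimal toy illustration with names of my own choosing – not code from OpenAI or any production system – but it captures the whole algorithm: one line of arithmetic applied to a lookup table.

    from collections import defaultdict

    alpha, gamma = 0.1, 0.99   # learning rate and discount factor
    Q = defaultdict(float)     # Q[(state, action)] -> estimated long-term value

    def q_update(state, action, reward, next_state, possible_actions):
        # The decades-old Q-learning rule: nudge the estimate for (state, action)
        # toward the reward just observed plus the discounted best value
        # available from the next state.
        best_next = max(Q[(next_state, a)] for a in possible_actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

Run enough of these updates on simulated experience and the table converges toward good decisions. That is the feared Q in Q*: arithmetic, not a mind.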

The supposed existential threat of AI is rooted in the assumption that AI systems will become conscious and superintelligent – i.e., that AI will become AGI. A fringe theory then claims that a conscious, superintelligent AGI could, either through malevolence or by accident, kill us all. Proponents of this view, who subscribe to a radical extension of utilitarianism known as longtermism, claim our ultimate imperative is thus to head off “extinction threats” like AGI in order to prevent the total annihilation of humanity. If this sounds like a stretch of the imagination, it is.

This AI doomerism, espoused by people like OpenAI’s former interim CEO, Emmett Shear, assumes that AGI is even a likely scenario. But as someone who has conducted research on cognition for over a decade, I’m not worried AI will become sentient. And AI experts, including one of the field’s pioneers, agree. A chasm remains between human-like performance and human-like understanding, and it cannot be bridged by imitation: even when an AI system appears to produce human-like behavior, copying is not comprehension – a speaking parrot is still a parrot. Further, many tasks requiring abstraction still leave even state-of-the-art AI models well short of human performance, and many aspects of human cognition, like consciousness, remain ineffable.

Heeding false alarms over killer AGI has real-world, present-day consequences. It diverts valuable research priorities, deflects accountability for present harms, and distracts legislators from pursuing real solutions. Billions of dollars, university departments and whole companies have now pivoted to “AI safety.” By fixating on hypothetical threats, we forgo real threats like climate change – ironically accelerated by the massive amounts of water consumed by the servers running AI models. We ignore the ways marginalized communities are harmed right now by AI systems like automated hiring and predictive policing. We forget the remedies for those harms, like passing legislation to regulate tech companies and AI. And we entrench the power of the tech industry by embracing its chosen solution and excusing it from culpability for these harms.

When it comes to the mysterious Q*, I’m sure the addition of Q-learning will improve ChatGPT’s performance. After all, a thankfully less hyped line of research, called reinforcement learning from human feedback (RLHF), already uses RL to improve large language models like ChatGPT. And a decade ago, RL helped train AI systems to play Atari games and beat the world champion of Go. Those accomplishments were impressive, but they were feats of engineering. At the end of the day, it is precisely the current impacts of human-engineered systems that we need to worry about. The threats are not in the future – they’re in the now.
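
For readers curious what that feedback loop looks like, here is RLHF reduced to a toy sketch in Python. Every piece of it – the canned replies, the stand-in reward model, the crude update rule – is invented for illustration and bears no resemblance to OpenAI’s actual systems:

    import math, random

    replies = ["2 + 2 = 4", "2 + 2 = 5", "I refuse to answer"]
    logits = [0.0, 0.0, 0.0]   # the "policy": one preference score per reply

    def reward(reply):
        # Stand-in reward model: pretend human raters preferred correct arithmetic.
        return 1.0 if reply == "2 + 2 = 4" else -1.0

    def sample_reply():
        # Pick a reply with probability proportional to the softmax of its score.
        weights = [math.exp(score) for score in logits]
        return random.choices(range(len(replies)), weights=weights)[0]

    for _ in range(500):                         # the loop: sample, score, nudge
        i = sample_reply()
        logits[i] += 0.1 * reward(replies[i])    # simplified REINFORCE-style update

    print(replies[max(range(len(replies)), key=lambda i: logits[i])])  # "2 + 2 = 4"

The real systems swap each placeholder for a large neural network and a more careful update, but the shape of the loop is roughly the same. Sampling, scoring and nudging are engineering; none of it conjures a mind.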

In “The Wizard of Oz,” the protagonists are awed by the powerful Oz, an intimidating mystical figure that towers over them physically and metaphorically throughout their journey. Much later, the ruse is revealed: The much-feared wizard was simply a small, old man operating a set of cranks and levers.

Don’t let the doomers distract you. Q-learning, like the rest of AI, is not a fearsome, mystical being – it’s just an equation set in code, written by humans. Tech CEOs would like you to buy their faulty math and ignore the real implications of their current AI products. But their logic doesn’t add up. Instead, we urgently need to tackle real problems by regulating the tech industry, protecting people from AI technologies like facial recognition and providing meaningful redress for AI harms. That is what we really owe the future.

Read More

Instagram teen accounts: Just one front in the fight for mental health

Guillermo is the CEO of Ignite, a political leadership program for young women.

It’s good news that Instagram has launched stricter controls for teen accounts, strengthening privacy settings for those under 18. Underage users’ accounts are now automatically set to private mode. The platform is also implementing tighter restrictions on the type of content teens can browse and blocking material deemed sensitive, such as posts related to cosmetic procedures or eating disorders.

The inflammatory rhetoric, meaningless speculation and lack of fact checking by the media may result in young adults rejecting traditional platforms in favor of their well-being.

By focusing on outrage, the media risks alienating younger audiences

Rikleen is executive director of Lawyers Defending American Democracy and the editor of “Her Honor – Stories of Challenge and Triumph from Women Judges.” Beougher is a junior at Amherst College and a co-founder of Students Strengthening American Democracy.

As attacks on democracy and the rule of law continually increase, much of the media refuses to address its role in intensifying the peril.

Instead of asking hard questions and insisting on answers, traditional media outlets increasingly trade news and facts for speculative commentary that ignores a story’s contextual significance. At the same time, social media outlets and influencers stoke anger as an alternative to thoughtfulness.

Athens, Ga., bookstore battles bans by stocking shelves

News Ambassadors is working to narrow the partisan divide through a collaborative journalism project that helps American communities with different political views better understand one another, while giving student reporters valuable experience in solutions reporting.

A program of the Bridge Alliance Education Fund, News Ambassadors is directed by Shia Levitt, a longtime public radio journalist who has reported for NPR, Marketplace and other outlets. Levitt has also taught radio reporting and audio storytelling at Brooklyn College in New York and at Mills College in Oakland, Calif., as well as for WNYC’s Radio Rookies program and other organizations.

Seeing a lie or error corrected can make some people more skeptical of the fact-checker.

Readers trust journalists less when they debunk rather than confirm claims

Stein is an associate professor of marketing at California State Polytechnic University, Pomona. Meyersohn is pursuing an Ed.S. in school psychology at California State University, Long Beach.

Pointing out that someone else is wrong is a part of life. And journalists need to do this all the time – their job includes helping sort what’s true from what’s not. But what if people just don’t like hearing corrections?

Our new research, published in the journal Communication Research, suggests that’s the case. In two studies, we found that people generally trust journalists when they confirm claims to be true but are more distrusting when journalists correct false claims.

Project 2025: Another look at the Federal Communications Commission

Biffle is a podcast host and contributor at BillTrack50.

This is part of a series offering a nonpartisan counter to Project 2025, a conservative guideline for reforming government and policymaking during the first 180 days of a second Trump administration. The Fulcrum’s cross-partisan analysis relies on unbiased critical thinking, reexamines outdated assumptions, and uses reason, scientific evidence and data in its analysis and critique of Project 2025.

Project 2025, the Heritage Foundation’s policy and personnel proposals for a second Trump administration, has four main goals when it comes to the Federal Communications Commission: reining in Big Tech, promoting national security, unleashing economic prosperity, and ensuring FCC accountability and good governance. Today, we’ll focus on the first of those agenda items.
