What Happens to Online Discussion Forums When AI Is First Place People Turn?

Opinion

AI is transforming how people seek help, share stories, and connect online. This article examines what’s at stake for social media and the future of human connection.

No doubt social media and online discussion forums have played an integral role in nearly everyone’s daily digital lives. Today, more than 70% of U.S. adults use social media, and over 5 billion people worldwide participate in online social platforms.

Discussion forums alone attract enormous engagement. Reddit has over 110 million daily active users; an estimated 300 million people use Q&A forums like Quora each month, and another 100 million use Stack Exchange. People seek advice, learn from others’ experiences, ask questions, and connect around interests and identities.


In mental health contexts, online peer support communities offer a place to disclose personal struggles, hear others’ experiences, and receive social support. Research supports the success of these communities, which enable people to self-disclose candidly and seek support from others.

When people engage with personal narratives on peer-support sites, they often feel more confident in coping with stressful events. At the same time, these platforms also expose individuals to online trolling, harassment, misinformation, and other antisocial behaviors.

The familiar dynamics of those communities took a turn about three years ago, when conversational AI tools like ChatGPT entered the public sphere and quickly captured widespread attention.

This marked the start of a new era of interactive AI in day-to-day lives and society. People tried recipes, coding help, emotional support, creative writing, and more. The ease of asking a machine a question, receiving a coherent response, and doing so privately sparked a new kind of engagement.

Shortly after ChatGPT’s release, Google issued a “code red,” citing the tool’s rapid adoption as a threat to traditional search behavior and the information-seeking habits that long supported online forums.

At that moment, many wondered, "What is the future of social media and online discussion forums when people increasingly turn to AI instead of each other?"

That question warrants attention because the value of online communities depends on active participation. When fewer users post questions, share responses, or react to others, the foundational mechanisms of online forums—reciprocity, belongingness, narratives of shared experience—become weaker. If a user can ask an AI privately and immediately, the incentive to engage publicly changes.

A recent report by Anthropic warns that AI models can exhibit “natural emergent misalignment,” including reward hacking, where systems learn to game feedback signals in ways humans did not intend. It’s a timely reminder that AI can sound empathetic and coherent without having lived experience or genuine understanding.

However, one of the crucial aspects of a supportive response from a peer is the presence of personal narratives and lived experiences. A systematic review noted that young people in online communities seek both informational and emotional support through stories of peers who have faced similar challenges. This suggests that narrative exchange is not simply transactional—it works when users engage as part of a community of peers, not as isolated voices speaking into the void.

By contrast, AI lacks lived experience. It can simulate empathy, but it cannot draw from a personal story.

In our research comparing AI-generated responses with peer responses in online communities, AI’s language was more formal, structured, and polite, and it rarely used first-person pronouns (which signal personal narratives).

Even when an AI’s replies appear personalized, they show limited diversity. Across many queries, the AI often reuses the same templates with minor variations. Online communities, in contrast, produce a range of viewpoints. Even a single question elicits diverse stories and perspectives from multiple individuals.

A new study of teens’ use of AI found that more than 70% of teens have used AI companions, with one-third discussing serious personal issues with them rather than with people.

Another survey found that 58% of users think ChatGPT is “too nice” and argue that the lack of realistic pushback undermines authenticity. These findings point to a tension: on the one hand, convenience and immediacy win; on the other, authenticity and narrative connection may suffer.

So what is at stake?

The forms of online social life are evolving. Large general-purpose platforms that once relied on high-volume question-and-answer interactions may see that function increasingly handled by AI. Participation may decline, not because people stop connecting, but because their first step becomes private and AI-mediated.

In that scenario, online community spaces may become more selective, more identity-driven, and oriented toward authentic human experience rather than mechanical problem-solving.

At the same time, platforms may adapt. Many already integrate AI to moderate content, summarize discussions, or help users articulate questions. In the future, AI may become part of the community infrastructure—filtering, guiding, even prompting human interaction, rather than replacing it.

The enduring value will be human presence: the voices of people who have lived the story, the shared recognition of someone else’s struggle, the sense of belonging created when users see that others have walked the same difficult path.

The future of social media, therefore, depends on which interactions people continue to value. Efficiency and convenience alone will not sustain a community. The presence of narratives rooted in human experience, and the recognition that someone else has faced a similar challenge, are what give forums their emotional traction. As AI becomes a more capable first responder, the discussion spaces that thrive will be those that prioritize experience, connection, and mutuality over instant answers.

In this emerging digital era, the question is not only whether people will use AI alongside communities—they already do. The more pressing question is how many choose AI instead of online (or offline) communities. The answer will determine not simply which platforms survive, but what form meaningful online connection takes in the years ahead.

The question then becomes: do people really need people?


Dr. Koustuv Saha is an Assistant Professor of Computer Science at the University of Illinois Urbana-Champaign’s (UIUC) Siebel School of Computing and Data Science and is a Public Voices Fellow of The OpEd Project. He studies how online technologies and AI shape and reveal human behaviors and wellbeing.

