While artificial intelligence offers undeniable benefits in organizing information and streamlining learning, its growing role in journalism—especially in coverage of government—raises urgent questions about media literacy and democratic accountability.
Fulcrum Fellow Jared Tucker explores how efficiency often comes at the cost of critical thinking.
Sitting in the back row of a lecture hall can provide a window into current college life.
With rows and rows of computer screens all pointed toward you, you can see students shopping online, diligently taking notes, and, of course, using AI.
AI use among college students has become widespread: in a global survey, the Global Education Council found that 86% of students use AI regularly. While large language models (LLMs) can help students organize notes or learn content efficiently, their negative effects on critical thinking have left educators scrambling to curb their use while still acknowledging their effectiveness.
“All new technology, no matter what it is, has a panic cycle, but this is a big, futuristic view of what learning and life will look like,” said Caley Cook, the Journalism and Public Interest Communication Coordinator at the University of Washington. “This is having and will continue to have impacts on decision-making and critical thinking for everybody. It is really detrimental for students to outsource critical thinking at a university where they are learning to learn.”
A 2025 study reported by Phys.org found a significant correlation between heavier AI tool use and poorer critical thinking scores. As students increasingly rely on AI to complete their work, they risk missing out on the benefits their education is intended to provide.
“To get a degree is to practice as a thinker, and that’s a muscle,” Cook said. “You have to use it over and over again and fail and try again to make that muscle stronger.”
But if AI can rob students of their education, why do they keep using it?
Well, college is hard. Students have to juggle classwork, homework, a social life, and jobs. With their ability to churn out immediate answers, LLMs can offer relief from the backbreaking college workload.
“[AI is] so quick and easy that nowadays many students don’t have the time to sit and study for hours every day,” Tzuriel Jennings, a rising junior at the University of Washington, said. “Many people use it to just hurry and rush through an assignment that they don’t have a lot of time or willpower to get through.”
But even beyond lightening the load, LLMs have been revolutionary in bridging language barriers, aiding student research, and organizing information.
“AI can definitely help with studying, especially when figuring out how/what to study when trying to cram,” Jennings said. “I use it to help me create flash cards or study guides to save me hours of planning before actually studying.”
And this is where AI's complex problems lie. It's a tool that can help students advance their learning but can also fry their critical thinking skills. How can a professor ban something that helps students, and how can they allow something that damages students' ability to learn?
“If the work can be done by AI in a positive, useful, and timesaving way, then AI is a good use for that,” Cook said. “The piece you miss when you don’t struggle … is that you mistakenly think that the learning is easy — that there is a right and wrong answer and that you can quickly get to the right answer.”
This problem has also manifested in journalism, an industry whose very existence is threatened by AI. Long-reliable outlets such as the Associated Press have already begun experimenting with AI to draft articles and report news. Is the escape from tedium worth the risk to the United States' only constitutionally protected industry?
Unfortunately, AI is relatively new and constantly evolving, which makes curbing its use difficult. Students can easily work around tricks hidden in prompts, and pen and paper is not always an accessible option, forcing many professors to embrace this new age of education.
“Those of us who are taking this seriously have moved to more group work, oral assessment, in-class work, and thinking about how students are using assessments and using the learning inside and outside of the class,” Cook said. “I change my classes every year to adapt to the moment.”
Students would agree: transparency is the answer.
“I think [professors] should be honest with their students about their expectations of AI,” Jennings said. “Some students will always use it regardless, but creating an open and honest environment surrounding AI can be helpful.”
But as LLMs continue to ingest new data, evolve, and change their style, their capacity to damage students' educational experience only grows. In an issue this fast-moving, today's solutions won't be able to solve tomorrow's problems.

Jared Tucker is a sophomore at the University of Washington, Seattle, studying Journalism and Public Interest Communication with a minor in History. He is a cohort member of the Fulcrum Fellowship.