Artificial: New AI tools create opportunity to choose convenience over real human engagement


Kevin Frazier is an Assistant Professor at the Crump College of Law at St. Thomas University. He previously clerked for the Montana Supreme Court.

New AI tools, like ChatGPT, threaten that horrible, wonderful process of trying to find the right words. Even as I typed that sentence, words suggested by my phone danced above the keyboard, steering me subtly but directing me nonetheless.


These simple tools save time, right? And they assuredly reduce typos, correct? Maybe they even help us communicate with one another by increasing the odds that we all use similar phrases. That's a good thing, isn't it?

Soon, AI tools will offer to replace our critical thinking in other contexts too. Need to decide whom to vote for? In the near future, you may engage with AI chatbots trained to emulate political candidates. Rather than go door to door, candidates will develop and release bots that aim to persuade you to vote a certain way. Who needs the Iowa State Fair to evaluate a candidate in person when you can just ask "the candidate" any question you want by "talking" with their bot?

AI tools also shape what news we read and which social media comments we see; in fact, they have done so for several years. And in some cases, AI tools have taken over the "boring" parts of our jobs. Some lawyers, for instance, have turned to ChatGPT to conduct legal research and review documents.

Are these gains in convenience worth the loss? No. In fact, it's the sort of deal a playground bully would offer: trading you a leaky basketball for your spot on the best swing.

The lesson is that convenience always comes at a cost.

So what are we unwilling to give up for a little more convenience? If we don't identify the skills, tasks, and activities that are fundamental to being human, there's a chance that AI will not only take over those core parts of being human but actually reduce our ability and willingness to do the very things that distinguish and define us. Folks in the AI safety space call this "enfeeblement"; I prefer to think of it as a loss of our humanity.

Our willingness to embrace the added seconds or minutes or, God forbid, hours it takes to do something without the aid of ChatGPT and other AI tools may soon fade. After all, tools of convenience have ruthlessly killed other things, like the joy of sending and receiving a handwritten letter.

So to protect our humanity we have to proactively declare what we regard as fundamentally human endeavors and fend off the urge to outsource those endeavors to tools of convenience.

This humble (and short) column will not try to list those endeavors. My hope is instead to start a conversation about the spaces we want to remain AI-free, or at least as AI-free as possible. Given the significance of the upcoming 2024 election, I think starting that conversation with the use of AI tools in democratic activities makes a lot of sense.

Should, for example, candidates be able to use AI chatbots to impersonate them? If so, should they have to provide a disclaimer that the bot is, in fact, not the candidate? May political parties release ads informed by AI tools to appeal specifically to you based on the mountains of data they have compiled about you?

I know my answers to these questions, but I want to know yours. We need to debate what makes us…well…us, if we are going to have any chance of developing norms, regulations, and laws that shield fundamental human endeavors from the dangers of convenience. What would you declare "AI Exclusionary Zones," and why? Such zones may seem like an odd thing to discuss, but if we don't shield them, convenience will conquer.


Read More

Election Officials Have Been Preparing for AI Cyberattacks


Since ChatGPT and other generative artificial intelligence systems first became widely available, the Brennan Center and other experts have warned that this technology may lead to more cyberattacks on elections and other critical infrastructure. Reports that Anthropic’s new AI model, Claude Mythos, can pinpoint software vulnerabilities that even the most experienced human experts would miss underline the urgency of those risks. Fortunately, election officials have been preparing for cyberattacks and have made significant progress in securing their systems over the past decade, incorporating improved cybersecurity practices at every step of the election process.

Anthropic claims that its new model can autonomously scan for vulnerabilities in software more effectively than even expert security researchers. If given access to this new model, amateurs would theoretically be capable of identifying and exploiting vulnerabilities in a way that previously only sophisticated actors, such as nation-states, could do. For this reason, Anthropic chose not to release the Mythos model publicly. Instead, under an initiative Anthropic is calling Project Glasswing, it has offered access to Mythos to a number of high-profile tech firms and critical infrastructure operators so that these companies can proactively identify and address vulnerabilities in their own systems. Although Anthropic is currently controlling access to its model to prevent misuse, experts believe it is only a matter of time before tools advertising similar capabilities are broadly available.

2026 Brennan Legacy Awards Celebrate Champions of Democracy


The founders of our 18th‑century republic were acutely aware of how fragile their experiment in self‑government might prove, and one can easily imagine them welcoming a modern guardian like the Brennan Center for Justice. Within the wide canopy of organizations devoted to defending our democracy, the Center has emerged as a rare and unmistakable jewel.

For over 20 years, the Center has been dedicated to defending our democratic institutions and the rule of law, while protecting our civil liberties in the face of mounting authoritarian winds.

Lessons Learned from “Lullabies from the Axis of Evil”


There has been much commentary on the dark side of President Trump's character and the lack of leadership at other high levels of government. These events and the American president's statements should not go unchallenged. His efforts to dehumanize an opponent and to trivialize bombing campaigns as if they were part of a video game are unfathomable and inconsistent with most of American history. We must never forget that America is killing people, many of them innocent civilians, with apparently little remorse.

The war in Iran has brought back a memory from when my son was born nearly 20 years ago. A friend of my wife's, an anthropologist and college professor, sent us a baby gift: a CD of music titled "Lullabies from the Axis of Evil." The term "Axis of Evil" was first used in President George W. Bush's 2002 State of the Union speech, referring to the three countries that made up the axis: Iraq, Iran, and North Korea. Putting aside, for the moment, our complicated relationship with those three countries, the lullabies CD reminds us that, despite our geopolitical differences, these countries are home to human beings. They work, love, eat, drink, and practice religion as we do, and they sing lullabies to their babies.
