In Defense of ‘AI Mark’

Opinion

An illustration of AI chat boxes.

Getty Images, Andriy Onufriyenko

Earlier this week, a member of the UK Parliament—Mark Sewards—released an AI tool (named “AI Mark”) to assist with constituent inquiries. The public response was rapid and rage-filled. Some people demanded that the member of Parliament (MP) forfeit part of his salary—he's doing less work, right? Others called for his resignation—they didn't vote for AI; they voted for him! Many more simply questioned his thinking—why on earth did he think outsourcing such sensitive tasks to AI would be greeted with applause?

He's not the only elected official under fire for AI use. The Prime Minister of Sweden, Ulf Kristersson, recently admitted to using AI to study various proposals before casting votes. Swedes, like the Brits, have bombarded Kristersson with howls of outrage.


I'll bite and attempt to defend “AI Mark” specifically and the use of AI by elected officials more generally.

Let's start with “AI Mark.” While I understand public frustration around the seemingly hasty adoption of AI into a key government service, my research suggests that those ready to remove Sewards from office are failing to ask a key question: what's the alternative?

"AI Mark" was designed specifically to make up for Sewards' inability to meaningfully respond to manifold constituent inquiries. According to Sewards, he has "tried [his] best to sit at [his] desk and answer all the requests that come through on [his] laptop, but it’s not possible for one person to do that." "AI Mark,” on the other hand, can analyze such requests around the clock. That said, constituents want more than merely to be heard (or read); they're reaching out for some affirmative action by the MP. So, can "AI Mark" help with that?

My hunch is yes. Constituent work is hard. It's arguably the most important role for elected officials. Yet, it's also one of the least appreciated and one of the hardest to do well. Done right, constituent service performs at least three functions: first, it ensures individuals can get through complex bureaucracies; second, it surfaces emerging issues that warrant broader attention; and, third, it directs the elected official to prioritize issues that are most relevant to their communities.

Speaking from my experience as a former intern to a U.S. Senator, I can attest that "AI Mark" is likely an improvement on the alternative: either a small army of undergraduate interns poring over those constituent requests or the elected official themself attempting to do so.

Finding substantive constituent inquiries is no easy task. For every one person reaching out for support on a substantive matter, there are likely dozens, if not hundreds or thousands, of duplicative or irrelevant submissions. Hundreds of people may send identical letters urging a vote on a certain issue—a human is not necessary to read each of those; AI can quickly consolidate such letters. Other submissions may involve demands that exceed the authority of the office. There's little need for a human to confirm that the MP, senator, or representative does not have jurisdiction over that request; AI can make that determination with a high degree of accuracy in a fraction of the time. By quickly filtering out the flood of requests that do not merit much attention, AI frees the elected official and their staff to more promptly take action on the remainder. That's a win for everyone.
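For readers curious what that kind of triage could look like in practice, here is a minimal, illustrative sketch. It is not how "AI Mark" actually works; it assumes submissions arrive as plain text, groups near-identical campaign letters by simple string similarity, and flags out-of-remit requests with a hypothetical keyword list. A real system would use a language model for both steps.

```python
# Minimal, illustrative triage sketch (not "AI Mark" itself).
# Assumptions: submissions are plain-text strings; near-duplicate campaign
# letters can be grouped by simple string similarity; out-of-jurisdiction
# requests can be flagged by a hypothetical keyword list.
from difflib import SequenceMatcher

# Hypothetical examples of matters outside an MP's remit.
OUT_OF_SCOPE_TERMS = ["court verdict", "local parking fine"]


def is_near_duplicate(a: str, b: str, threshold: float = 0.9) -> bool:
    """Treat two letters as the same campaign template if ~90% identical."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold


def triage(submissions: list[str]) -> dict[str, list[str]]:
    """Sort submissions into campaign duplicates, out-of-scope demands,
    and substantive casework that staff should read promptly."""
    buckets = {"campaign_duplicate": [], "out_of_scope": [], "needs_review": []}
    templates: list[str] = []  # first instance of each campaign letter seen
    for text in submissions:
        if any(is_near_duplicate(text, t) for t in templates):
            buckets["campaign_duplicate"].append(text)
        elif any(term in text.lower() for term in OUT_OF_SCOPE_TERMS):
            buckets["out_of_scope"].append(text)
        else:
            templates.append(text)
            buckets["needs_review"].append(text)
    return buckets


if __name__ == "__main__":
    inbox = [
        "Please vote yes on the housing bill.",
        "Please vote yes on the housing bill!",
        "Can you overturn my local parking fine?",
        "My benefits claim has been stuck for eight months; can your office help?",
    ]
    for bucket, items in triage(inbox).items():
        print(f"{bucket}: {len(items)}")
```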

With respect to PM Kristersson and the use of AI for research, a similar defense can be raised. Elected officials are often short on time to research every issue that crosses their desk. In some cases, they will get a briefing from their staff explaining the pros and cons of a decision. Such analysis may not be high-quality. There's the possibility that the staffer thinks they know how the official wants to vote, or should vote, and therefore biases their report. There's also a good chance that the staffer is pressed for time themselves and, consequently, produces an incomplete or inaccurate report. Finally, there's the possibility of the staffer using AI to do the task! Sophisticated AI tools such as OpenAI's Deep Research can scour the internet for relevant sources and information in a matter of minutes; it would be strange if a policy researcher failed to make use of such a tool to supplement their analysis. What's the harm in the PM simply skipping to this final step?

The harm in this case, as well as the case of "AI Mark," is a lack of transparency and engagement. Clandestine use of AI is almost always going to incite public unrest. Folks like to know how and why their elected officials are working on their behalf. The answer, however, is not to prevent or oppose the use of AI in policymaking but rather to make sure such use is out in the open and subject to regular review.


Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and the author of the Appleseed AI Substack.
