In Defense of ‘AI Mark’

Opinion

An illustration of AI chat boxes. (Getty Images, Andriy Onufriyenko)

Earlier this week, a member of the UK Parliament—Mark Sewards—released an AI tool (named “AI Mark”) to assist with constituent inquiries. The public response was rapid and rage-filled. Some people demanded that the member of Parliament (MP) forfeit part of his salary—he's doing less work, right? Others called for his resignation—they didn't vote for AI; they voted for him! Many more simply questioned his thinking—why on earth did he think outsourcing such sensitive tasks to AI would be greeted with applause?

He's not the only elected official under fire for AI use. The Prime Minister of Sweden, Ulf Kristersson, recently admitted to using AI to study various proposals before casting votes. Swedes, like the Brits, have bombarded Kristersson with howls of outrage.


I'll bite and attempt to defend “AI Mark” specifically and the use of AI by elected officials more generally.

Let's start with “AI Mark.” While I understand public frustration around the seemingly hasty adoption of AI into a key government service, my research suggests that those ready to remove Sewards from office are failing to ask a key question: what's the alternative?

"AI Mark" was designed specifically to make up for Sewards' inability to meaningfully respond to manifold constituent inquiries. According to Sewards, he has "tried [his] best to sit at [his] desk and answer all the requests that come through on [his] laptop, but it’s not possible for one person to do that." "AI Mark,” on the other hand, can analyze such requests around the clock. That said, constituents want more than merely to be heard (or read); they're reaching out for some affirmative action by the MP. So, can "AI Mark" help with that?

My hunch is yes. Constituent work is hard. It’s arguably the most important role for elected officials, yet it’s also one of the least appreciated and one of the hardest to do well. Done right, constituent service performs at least three functions: first, it helps individuals navigate complex bureaucracies; second, it surfaces emerging issues that warrant broader attention; and third, it directs the elected official to prioritize the issues most relevant to their communities.

Speaking from my experience as a former intern to a U.S. Senator, I can attest that “AI Mark” is likely an improvement on the alternatives: either a small army of undergraduate interns poring over those constituent requests or the elected official themself attempting to do so.

Finding substantive constituent inquiries is no easy task. For every one person reaching out for support on a substantive matter, there are likely dozens, if not hundreds or thousands, of duplicative or irrelevant submissions. Hundreds of people may send identical letters urging a vote on a certain issue—a human is not necessary to read each of those; AI can quickly consolidate such letters. Other submissions may involve demands that exceed the authority of the office. There's little need for a human to confirm that the MP, senator, or representative does not have jurisdiction over that request. AI can do that with a high degree of accuracy in a fraction of the time. AI can then quickly filter through the flood of requests that likely do not merit much attention. The elected official and their staff can use that saved time to more promptly take action on the remainder. That's a win for everyone.

With respect to PM Kristersson and the use of AI for research, a similar defense can be raised. Elected officials are often short on time to research every issue that crosses their desk. In some cases, they will get a briefing from their staff explaining the pros and cons of a given decision. That analysis may not be high quality. There’s the possibility that the staffer thinks they know how the official wants to vote, or should vote, and therefore biases their report. There are also good odds that the staffer is pressed for time themselves and, consequently, produces an incomplete or inaccurate report. Finally, there’s the possibility of the staffer using AI to do the task! Sophisticated AI tools such as OpenAI’s Deep Research can scour the internet for relevant sources and information in a matter of minutes; it would be strange if a policy researcher failed to use such a tool to supplement their analysis. What’s the harm of the PM simply skipping to this final step?

The harm in this case, as well as in the case of “AI Mark,” is a lack of transparency and engagement. Clandestine use of AI is almost always going to provoke a public backlash. Folks like to know how and why their elected officials are working on their behalf. The answer, however, is not to prevent or oppose the use of AI in policymaking but rather to make sure such use is out in the open and subject to regular review.


Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and author of the Appleseed AI Substack.
