On April 24, America got a wake-up call from Anthropic, one of the nation’s leading artificial intelligence companies. It announced a new AI tool, called Mythos, that can identify flaws in computer networks and software systems that, as Politico puts it, “even the brightest human minds have been unable to identify.”
A machine smarter than the “brightest human minds” sounds like a line from a dystopian science fiction movie. And if that weren’t scary enough, we now have a government populated by people who seem oblivious to the risks AI poses to democracy and humanity itself.
Until now, the Trump Administration has been determined to let its tech bro supporters do whatever they want and to blackball AI companies like Anthropic that aren’t willing to let the administration use their AI tools however it wants. The release of Mythos makes the need for smart AI regulation more urgent than ever.
The administration seems to lack the will or capacity to provide what is needed. If the administration won’t provide it, Congress must.
America cannot afford to bury its head in the sand as developments in AI proceed at warp speed, and neither can the rest of the world. As the American Civil Liberties Union explained last February, those developments have “begun to demolish the foundations for ensuring that artificial intelligence (AI) in the U.S. is safe and responsible. The president is not only set to completely roll back the fledgling protections Joe Biden’s administration instituted, but also to further accelerate the spread of unchecked AI across American life.”
Maybe Mythos has gotten through to the Trump Administration, which has previously sought to make Anthropic pay for its refusal to knuckle under to its outrageous desire to use AI irresponsibly. Recall that the Department of Defense retaliated by declaring Anthropic “a Supply-Chain Risk to National Security” and that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
For the first time, that designation, which had previously been applied only to foreign companies, was directed at one based in the United States. It came after Anthropic refused the Department of Defense’s demand that it “not prevent its technology from being used both for domestic mass surveillance of Americans and for fully autonomous lethal weapons.”
At the time, Dario Amodei, Anthropic CEO, pointed out that “AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.” He went on to say that “AI-driven mass surveillance presents serious, novel risks to our fundamental liberties.”
He explained that “to the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI…. Powerful AI makes it possible to assemble… scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.”
Not surprisingly, the Trump Administration was not moved by Amodei’s appeal to democratic values or fundamental liberties. In fact, the president upped the ante when he posted to Truth Social the following: “THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War….”
“Therefore,” Trump continued, “I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! …WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about.”
As CNBC notes, “Before the conflict erupted in late February, Anthropic was one of the first AI companies to partner with many federal agencies as the government sought to rapidly upgrade its systems and capabilities with cutting-edge AI tech.”
In March, Rita Lin, a federal judge in San Francisco, granted Anthropic’s request for a temporary restraining order against the government. But she did more.
Lin called out the administration for punishing Anthropic because it “criticiz(ed) the government’s contracting position in the press. … Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation.” She wanted no part of “the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.”
Now it seems the Trump Administration is willing to deal with Anthropic’s “nut jobs.” Amodei was even invited to the White House to meet with Chief of Staff Susie Wiles on April 17.
That is one indication that, as a former White House AI advisor told the Washington Post, “Mythos has activated a lot of people in D.C.” For the Trump administration, the Post adds, “The arrival of Mythos has led to a reckoning with some of the technology’s potential downsides.”
Those downsides include the threat AI poses to democracy and rights, and to the capacity of democratic governance to keep up with its rapid development. AI can threaten democracy and rights in a variety of ways.
One, its use as a surveillance tool, was highlighted in the dispute between the Defense Department and Anthropic. But there are others.
AI’s capacity to produce convincing “deepfake” content may turn the world of campaigning and electioneering on its head. It can be used to convince voters that one or another political leader or party has said or done something reprehensible.
Combined with social media’s capacity to rapidly amplify such content, we will increasingly live in a world where seeing is not believing. And the Westminster Foundation for Democracy warns, “Deep fakes also enable a ‘liar’s dividend’ whereby political actors claim real content is fake to avoid sanction.”
AI may also be used by government officials to make consequential decisions. We have yet to think about what democratic accountability means in that situation.
The list could go on, but these examples illustrate a few ways that AI may jeopardize democratic processes. And it is not clear that democracies will be up to the task of adapting to those challenges.
Constitutional democracy in the United States is, even at its best, notoriously slow. And when government is paralyzed by partisan rancor as ours is, what scholars call the “pacing problem” just gets worse.
Simply put, the pacing problem “refers to technological advances outpacing laws and regulations.”
Arizona State University Law Professor Gary Marchant is right to say that “it is in the common interest of all concerned, including government, industry, civil society, and the general public, to try to prevent proactively significant harms from emerging technologies which will…be contrary to public well-being… ‘Rapid change demands foresight, vision, adaptability, and creativity, all combined with a healthy degree of prudence.’”
Those are not words that one would generally associate with the Trump Administration. Even as it may now realize it can’t ignore Anthropic or the growing potency of AI, it seems clear that the president would rather post than think more about AI.
That is a luxury that neither he nor we can afford. The fate of democracy hangs in the balance.
Austin Sarat is the William Nelson Cromwell professor of jurisprudence and political science at Amherst College.