AI and a marketplace of illusion and confusion


Kevin Frazier is an Assistant Professor at the Crump College of Law at St. Thomas University. He previously clerked for the Montana Supreme Court.

The First Amendment protects a marketplace of ideas—ideally, speakers can freely offer information and the public audience can evaluate that information in light of other ideas, arguments, and proposals. This exchange has a clear goal: the maintenance of a deliberative democracy.


Content generated by AI will soon cause a catastrophic market failure unless we act now to protect our ability to converse with and learn from one another. Two facts make that impending failure clear: first, in just three years, as much as 90 percent of online content may be generated by AI; and, second, humans struggle, and will increasingly struggle as AI improves, to identify AI-generated speech.

The upshot is that our marketplace of ideas will soon be a marketplace of illusion and confusion. It’s time to establish a “Right to Reality.” Our main marketplaces, from Facebook to The New York Times, should have a legal obligation to label the extent to which content is altered by AI or is “organic,” i.e., created by humans.

Though this Right to Reality may seem far-fetched, it’s grounded in the core principles of the First Amendment. By way of example, the U.S. Supreme Court has held that there’s a right to receive information. Justice Brennan, writing for the plurality in Board of Education v. Pico, argued that "[t]he right of freedom of speech and press embraces the right to distribute literature, and necessarily protects the right to receive it. The dissemination of ideas can accomplish nothing if otherwise willing addressees are not free to receive and consider them."

In an information ecosystem polluted by altered content, “willing addressees” lack that freedom. For one, it’s nearly impossible to “receive” organic information if doing so requires sorting through mountains of AI-generated mis- and disinformation. Second, even if one stumbled across organic information in that setting, one might not recognize it, given the increasing capacity of AI tools to mirror organic content.

Astute readers may contest the Right to Reality on the basis that the First Amendment protects only against government interference. That argument has some weight, though, as an aside, the U.S. Supreme Court has recognized First Amendment rights in some settings involving private actors. Nonetheless, to the extent the federal First Amendment is bounded, there’s another legal home for the Right to Reality: state constitutions.

Many state constitutions have distinct freedom of speech provisions that have been interpreted to afford greater protections. Case in point: the New Jersey Supreme Court held that the freedom of speech and assembly provisions of the state's constitution protected students distributing political leaflets at Princeton, a private university. The court explained that a limited private right of action may exist based on the typical use of the space, whether the public had been invited to use that space, and the purpose of the expressive activity in question. Courts in California, Pennsylvania, and beyond have reached similar conclusions.

There’s little denying that our modern public spheres, including social media platforms, fit the profile of a space that ought to be subject to regulation under such state constitutional speech provisions. Social media platforms are commonly and increasingly used to exchange political views and news, are designed to facilitate such exchange, and are generally open to the public.

The legal viability of the Right to Reality is also bolstered by its minimal impact on expressive activity. Unlike other provisions that have run afoul of freedom of speech protections, the Right to Reality would not remove any content from public forums but merely assist in the evaluation of that content. It’s also worth pointing out that the ability to evaluate the accuracy and origin of information serves several societal goals.

Our democracy cannot function if voters cannot confirm whether a candidate or a computer generated a message. Our children will struggle to mature into well-rounded citizens if they solely interact with altered content. Our collective capacity to challenge the status quo will collapse if we outsource our critical thinking to AI tools.

In short, it’s now or never for the Right to Reality.


Read More

Is the U.S. at "War" with Iran?

A woman sifts through the rubble in her house in the Beryanak District after it was damaged by missile attacks two days before, on March 15, 2026, in Tehran, Iran.

(Photo by Majid Saeedi/Getty Images)


This question is not an exercise in double-talk. It is critical to understand the power that our Constitution grants exclusively to Congress, and the power that resides in the President as Commander-in-Chief of the military.

The Constitution clearly states that Congress has the power to declare war. The President does not have that power. The War Powers Resolution of 1973 recognizes that distribution of power by saying that a President can only introduce military force into an existing or imminent hostility if Congress has declared war or specifically authorized the President to use military force, or there is a national emergency created by an attack on the U.S.

Healthcare Jobs Surge Masks a Productivity Crisis—and Rising Costs

Healthcare and social assistance professions added 693,000 jobs in 2025. Without those gains, the U.S. economy would have lost roughly 570,000 jobs.

At first glance, these numbers suggest that healthcare is a growth engine in an otherwise slowing labor market. But a closer look reveals something more troubling for patients and healthcare professionals.


Anthropic’s lawsuit against the Trump administration over a Pentagon “supply-chain risk” label raises major constitutional questions about AI policy, corporate speech, and political retaliation.


Anthropic Sues Trump Over ‘Unlawful’ AI Retaliation

Anthropic’s dispute with the Trump administration is no longer just about AI policy; it has escalated into a constitutional test of whether American companies can uphold their values against political retaliation. After the administration labeled Anthropic a “supply‑chain risk”, a designation historically reserved for foreign adversaries, and ordered federal agencies to cease using its technology, the company did not yield. Instead, Anthropic filed two lawsuits: one in the Northern District of California and another in the D.C. Circuit, each challenging different aspects of the government’s actions and calling them “unprecedented and unlawful.”

The Pentagon has now formally issued the supply‑chain risk designation, triggering immediate cancellations of federal contracts and jeopardizing “hundreds of millions of dollars” in near‑term revenue. Anthropic’s filings describe the losses as “unrecoverable,” with reputational damage compounding the financial harm. Yet even as the government blacklists the company, the Pentagon continues using Claude in classified systems because the model is deeply embedded in wartime workflows. This contradiction underscores the political nature of the designation: a tool deemed too “dangerous” to be used by federal agencies is simultaneously indispensable in active military operations.
