xAI Pushes Free Speech Theory Into New AI Lawsuits

xAI’s push to give chatbots constitutional rights falters as states defend AI rules.

Opinion


Elon Musk’s xAI company is challenging AI regulations in Colorado after losing in California, arguing that limits on artificial intelligence violate free speech. As Connecticut enforces its own AI law, this case could shape the future of AI regulation, corporate accountability, and constitutional rights in the United States.

Getty Images, Alexander Sikov

Elon Musk's AI company, xAI, is on a legal road trip. After losing in California, it filed suit in Colorado asking a court to declare the state's artificial intelligence regulations unconstitutional. The argument is essentially the same one that already failed. Meet the new boss. Same as the old boss.

For Connecticut residents, this is not just the next state in the alphabet that has passed AI legislation. Connecticut was one of the first states in the nation to adopt an AI law, requiring companies to disclose when AI is being used in critical decisions like employment, housing, credit, or healthcare. That law is already drawing scrutiny from the technology industry. What xAI tried to do in California and now in Colorado is a preview of what we may face in Connecticut.


In the Colorado suit, xAI echoes the claim that failed in California: that regulating AI violates freedom of speech. The company argues that requiring AI systems to avoid discriminatory outputs amounts to the state itself engaging in unlawful discrimination. It is a creative position. It is also wrong.

At the heart of this case is an attempt to give a software program constitutional rights that belong exclusively to human beings. xAI wants a court to treat Grok, a chatbot that runs on algorithms and training data, as the legal equivalent of a person with First Amendment protections. Whether one applies a textualist or a pragmatic approach to constitutional interpretation, anointing software with the rights of a person is so far from what the Framers envisioned that it simply does not compute. The court in California didn't buy xAI's argument, and the Colorado court likely won't either. Connecticut should gird itself: xAI will come for us next.

Just as Grok was created by humans, so was your smart oven. It also runs on software and algorithms. If it overrides your temperature setting and burns your dinner, has it exercised a constitutional right? If its algorithm sets the kitchen on fire and burns the house down, is that protected expression? Or is it simply a dangerous product that needs to be regulated?

We regulate dangerous products like pharmaceuticals, cars, medical devices, and financial instruments, not to silence anyone, but because products affect real people, and real people deserve protection. Software, so far, has largely escaped that treatment. Yet when an AI model produces discriminatory outputs by denying someone a loan, misidentifying a face in a way that leads to an arrest, or steering housing opportunities away from protected groups, those outputs cause real harm to real people. Connecticut lawmakers understood that when they passed our AI law. Preventing that harm is not censorship, and it certainly does not deny anyone's right to free speech. It is the government doing exactly what governments exist to do.

The First Amendment protects people and, to some extent, corporations. But those rights do not extend to the products they make. A car company can lobby against safety regulations, but the car cannot claim a right to run red lights. xAI can argue in every courtroom in America that AI regulation is bad policy. It is entitled to that opinion, and we can applaud its right to express it and give it its day in court. But a chatbot does not have a constitutional right to generate discriminatory content free of government oversight.

This litigation isn’t really about free speech. It is about money. Meaningful AI regulation requires companies to be transparent about how their systems are trained, what data they use, and what their models actually do. That slows deployment, which translates into lost profit potential. Calling regulation censorship is a strategy for avoiding those obligations, not a principled stand for civil liberties.

A spoon does not have rights. A range does not have rights. A chatbot does not have rights. The Connecticut residents affected by what those tools produce are a different matter entirely.


Monique Mattei Ferraro is a Watertown-based cybersecurity and privacy attorney and an adjunct professor at Albany Law School's Computer Crimes and Electronic Evidence Lab, where she teaches cyber law. She is also a regional co-leader for Lawyers Defending American Democracy's Meeting the Moment initiative in New England. Her work focuses on cybersecurity, privacy, and artificial intelligence.

