Elon Musk's AI company, xAI, is on a legal road trip. After losing in California, it filed suit in Colorado asking a court to declare the state's artificial intelligence regulations unconstitutional. The argument is essentially the same one that already failed. Meet the new boss. Same as the old boss.
For Connecticut residents, this is not just the next state in the alphabet that has passed AI legislation. Connecticut was one of the first states in the nation to adopt an AI law, requiring companies to disclose when AI is being used in critical decisions like employment, housing, credit, or healthcare. That law is already drawing scrutiny from the technology industry. What xAI tried to do in California and now in Colorado is a preview of what we may face in Connecticut.
Echoing the claim that failed in California, xAI alleges in the Colorado suit that regulating AI violates freedom of speech. It argues that requiring AI systems to avoid discriminatory outputs amounts to the state itself engaging in unlawful discrimination. It is a creative position, but it is wrong.
At the heart of this case is an attempt to give a software program constitutional rights that belong to people. xAI wants a court to treat Grok, a chatbot that runs on algorithms and training data, as the legal equivalent of a human being with First Amendment protections. Whether one applies a textualist or a pragmatic approach to constitutional interpretation, anointing software as a person is so far from what the Framers envisioned that it simply does not compute. The court in California didn't buy xAI's argument, and the Colorado court likely won't either. Connecticut should gird itself for xAI to come for us next.
Just as Grok was created by humans, so was your smart oven. It also runs on software and algorithms. If it overrides your temperature setting and burns your dinner, has it exercised a constitutional right? If its algorithm sets the kitchen on fire and burns the house down, is that protected expression? Or is it simply a dangerous product that needs to be regulated?
We regulate dangerous products like pharmaceuticals, cars, medical devices, and financial instruments, not to silence anyone, but because products affect real people, and real people deserve protection. Software, so far, has largely escaped the same treatment. Yet when an AI model produces discriminatory outputs by denying someone a loan, misidentifying a face in a way that leads to an arrest, or steering housing opportunities away from protected groups, those outputs cause real harm to real people. Connecticut lawmakers understood that when they passed our AI law. Preventing that harm is not censorship, and it certainly doesn't deny anyone's right to free speech. It is the government doing exactly what governments exist to do.
The First Amendment protects people and, to some extent, corporations. But those rights do not extend to the products they make. A car company can lobby against safety regulations, but the car cannot claim a right to run red lights. xAI can argue in every courtroom in America that AI regulation is bad policy. The company is entitled to its opinion, and we can applaud its right to express it and to have its day in court. But a chatbot does not have a constitutional right to generate discriminatory content without government oversight.
This litigation isn’t really about free speech. It is about money. Meaningful AI regulation requires companies to be transparent about how their systems are trained, what data they use, and what their models actually do. That slows deployment, which translates into lost profit potential. Calling regulation censorship is a strategy for avoiding those obligations, not a principled stand for civil liberties.
A spoon does not have rights. A range does not have rights. A chatbot does not have rights. The rights of Connecticut residents affected by what those tools produce are a different matter entirely.
Monique Mattei Ferraro is a Watertown-based cybersecurity and privacy attorney and an adjunct professor at the Albany Law School Computer Crimes and Electronic Evidence Lab, where she teaches cyber law. She is a regional co-leader for Lawyers Defending American Democracy's Meeting the Moment initiative in New England. Her work focuses on cybersecurity, privacy, and artificial intelligence.



















