WASHINGTON—A bipartisan bill that would ban minors from using AI companions, require all chatbots to verify a user’s age, and allow AI companies to be prosecuted for harming children was unanimously advanced to the Senate floor Wednesday by the Senate Judiciary Committee.
Sen. Josh Hawley, R-Mo., introduced the Guidelines for User Age-Verification and Responsible Dialogue (GUARD) Act in October as the Senate’s response to a rise in cases of children being groomed, and in some instances driven to suicide, by AI companions, chatbots designed to replicate human interaction.
“This has got to stop,” Hawley said after sharing testimonials from victims' families in attendance. “This should not happen in the United States of America. No amount of profit justifies the destruction of our families and our children.”
Since 2023, cases of minors dying by suicide at the behest of AI companions and chatbots have increasingly drawn national headlines, and lawsuits from victims’ families against popular AI companion platforms such as Character.AI are piling up in courts across the country.
The bill, which passed by a 22-0 vote, differentiates AI companions from general-purpose chatbots like ChatGPT, which has also been accused of influencing children to harm themselves.
Specifically, the act would impose a blanket ban on minors using any AI companion platform, while general-purpose chatbots would remain available to children.
The bill also would require AI companies to verify users’ ages and to disclose chatbots’ non-human status to users, and it would impose criminal and civil penalties on companies that violate its terms.
Sen. Alex Padilla, D-Calif., voiced support for the bill but raised concerns about users’ privacy during the age verification process.
“I just want to register some questions and concerns we have about potential privacy and security risks with the age verification component, and I think that’s one of the areas we can fine-tune,” he said.
Jennifer Huddleston, a senior fellow in technology policy at the Cato Institute, shared those concerns in an interview with Medill News Service after the meeting, though she clarified that her organization does not support or oppose legislation.
“This would require everyone who is using an AI chatbot to pass some sort of age verification, which typically involves biometrics or government IDs,” she said. “Therefore, age verification becomes a form of identity verification. One can easily imagine how this would have a significant impact on anonymous speech, which can be incredibly important for any number of reasons, whether it is people that are engaged in political discourse or political dissent, or whether it's someone who is perhaps asking about a sensitive medical issue.”
Immediately after voting to advance the GUARD Act, Sen. Ted Cruz, R-Texas, called for revising the bill’s total ban on children’s chatbot access.
“I think there are applications where chatbots can be beneficial. In Texas, the Alpha School has produced extraordinary results using AI with kids,” Cruz said.
Cruz’s comments reflected a concern held by some policy analysts that the bill will hinder the next generation from mastering AI tools as AI becomes more ubiquitous in everyday life.
“We want to make sure that kids are okay in the space,” Aden Hizkias, associate policy director at Chamber of Progress, which opposed the GUARD Act, said in an interview. “But if a kid or a group of kids, or a generation, let's say, is unable to access these types of tools now as they progress exponentially, you're essentially cutting off a huge benefit for them and for the U.S. at large because there's an entire generation that's not going to have the skill set.”
Sen. Richard Blumenthal, D-Conn., who co-authored the GUARD Act, warned his fellow committee members to prepare for AI companies to lobby against the bill.
“Warning, we’re not done yet,” Blumenthal told the committee. “Others who have championed this kind of legislation know that (AI companies) will be relentless and tireless. Whatever they say publicly, they will be behind the scenes with armies of lawyers and lobbyists trying to fight us, mislead, and confuse.”
He added that tech companies would invoke core American principles to fight the bill.
“We’re going to hear a lot about the First Amendment, free enterprise, and ‘trust us,’” he said.
Joel Thayer, a senior fellow for AI and emerging technology policy at the America First Policy Institute, has been researching the First Amendment implications of Congress regulating AI chatbots and said the government has a strong chance of defeating potential lawsuits from AI companies.
“I think that's where it's going to be a tough row to hoe,” he said.
He predicted that AI companies would try to emulate the First Amendment defense that has succeeded for social media companies. “I don’t think this is analogous at all, especially if you’ve engaged with the chatbot,” he said. “That’s the crux of it, that they have more of a diluted First Amendment speech interest here.”
The GUARD Act now awaits debate on the Senate floor. The U.S. House of Representatives introduced a companion bill of the same name on Wednesday.
Wisdom Howell is a reporter for Medill News Service.