This nonpartisan policy brief, written by an ACE fellow, is republished by The Fulcrum as part of our partnership with the Alliance for Civic Engagement and our NextGen initiative — elevating student voices, strengthening civic education, and helping readers better understand democracy and public policy.
Key takeaways
- The U.S. has no national AI liability law. Instead, a patchwork of state laws has emerged, leaving legal protections dependent on where a person lives.
- It’s often unclear who is legally responsible when AI causes harm. This gap leaves many people with no clear path to seek help.
- In March 2026, the White House and Congress introduced major proposals to establish a federal standard, but there is significant disagreement about whether that standard should prioritize protecting innovation or protecting people harmed by AI systems.
Background: A Patchwork of State Laws
Without a national AI law, states have been filling in the gaps on their own. The result is an uneven landscape where a person’s legal protections depend entirely on which state they live in.
Colorado was among the first states to pass a broad AI accountability law. Its 2024 Colorado AI Act, set to take effect in 2026, targets “high-risk” AI systems, defined as tools used to make important decisions about people. Under the law, companies must test their AI tools for bias, notify people when AI played a role in a decision about them, and allow people to appeal those decisions.
New York passed the RAISE Act, which establishes standards for companies that use AI in high-stakes settings, including requirements to disclose AI use and assess the risk of harm before using an AI system in high-stakes areas like employment, housing, healthcare, or financial services. New York City separately passed a law in 2023 requiring employers to audit their AI hiring tools for bias and notify job applicants when such tools are used.
Other states, including Utah, Washington, California, and Illinois, have passed or proposed their own transparency and accountability measures, often targeting specific uses of AI in healthcare, hiring, or public benefits decisions.
The result is a confusing landscape, both for companies trying to comply across multiple states and for average citizens trying to understand their rights. A person fired by an algorithm in Colorado has different legal options than someone in the same situation in a state with no AI law at all. This patchwork is the central issue that federal lawmakers are now trying to solve.
Who Pays When AI Gets it Wrong?
Artificial intelligence is now being used to make decisions that directly affect people’s lives. When these systems make mistakes, the consequences can be serious and lasting.
However, under current law, it is often unclear who is legally responsible when AI causes harm. Three groups could potentially be held liable:
- AI developers: The companies that build and train the underlying AI model
- AI deployers: The businesses that purchase and use the AI to make real-world decisions
- Users: The individuals who interact with or act on AI-generated results
Traditional liability law was designed for a world where a human being made the decision. When an AI system produces a biased hiring recommendation, a wrong medical diagnosis, or a faulty credit decision, it is not always clear which party in the chain is responsible. This legal gray zone means that people harmed by AI may have no clear path to getting help, and companies may have little incentive to make their systems safer before releasing them.
What Congress and the White House Are Proposing
With states passing laws that often contradict each other, pressure has mounted on the federal government to set a single national standard. In March 2026, two major proposals emerged within days of each other.
The White House National Policy Framework for AI was released on March 20, 2026, and it outlines the Trump administration’s vision for federal AI law. The framework takes a “light-touch” approach focused on protecting innovation. Key positions include:
- The federal government should set one national AI standard that states must follow, though states would still be allowed to enforce their own laws protecting children, consumers, and public safety.
- AI developers should not be held responsible when a third party misuses their tools.
- Congress should avoid open-ended liability rules that could generate excessive lawsuits.
- Rather than creating a new federal AI agency, oversight should go through existing agencies like the Federal Trade Commission (FTC), the Food and Drug Administration (FDA), and the Equal Employment Opportunity Commission (EEOC).
The TRUMP AMERICA AI Act, a draft bill released on March 18, 2026, takes a more detailed approach. Key proposals include:
- Establish a federal “duty of care” for chatbot developers, meaning companies would be legally required to take reasonable steps to prevent their tools from causing harm.
- Require annual third-party audits of high-risk AI systems.
- Override most state AI laws with a unified federal standard.
The Debate
Those who favor limiting AI liability, including many in the Trump administration and many in the technology industry, argue that holding developers legally responsible for every possible way their tools are used would make building AI in the United States far too costly and legally risky. Supporters of this view also warn that pushing liability too far could drive AI development to other countries with fewer regulations, meaning Americans would end up using AI built with even less oversight. On the question of state laws, those who want to limit AI liability also argue that a single, clear federal standard is better for everyday people than the current patchwork of conflicting state rules.
However, proponents of stronger AI liability argue that without real legal consequences, companies have little financial incentive to invest in making their systems safer before releasing them to the public. The profit motive, they argue, pushes developers to move fast, leaving the American people most likely to be harmed and with the least power to push back. Those who want strong AI liability also argue that most existing laws, including civil rights statutes and consumer protection rules, were written long before AI existed and were not designed to handle situations where a machine made the important decision. Courts struggle to apply these older frameworks to AI harms, leaving many people with no practical legal recourse. Finally, this side expresses concern that a federal law designed primarily to limit liability could end up setting a weak national standard that overrides stronger protections that states like Colorado and Illinois have already put in place.
FAQ
What does “liability” mean?
- Liability is a legal term meaning responsibility. If a company is “liable” for harm caused by AI, it can be required to pay damages or face other legal consequences.
What is a “high-risk” AI system?
- High-risk AI systems are tools that make or assist in making major decisions about people’s lives, such as whether someone gets hired, qualifies for a loan, or receives a particular medical treatment. Several state laws, including Colorado’s, use this category to decide which AI tools need the most oversight.
What is federal preemption?
- Federal preemption means a federal law overrides state laws that conflict with it.
Why can’t existing laws handle AI harms?
- Most existing laws were written before AI existed and were designed for situations where a human being made the important decision. When an algorithm makes that decision instead, it is often unclear how to apply old rules.
Has any federal AI liability law passed yet?
- As of April 2026, no comprehensive federal AI liability law has been enacted. The White House framework and the TRUMP AMERICA AI Act are both proposals, not law. Congress is currently debating next steps.
Margaret Wakefield is an ACE fellow.
Who’s Responsible When AI Causes Harm?: Unpacking the Federal AI Liability Framework Debate was first published by ACE and republished with permission.