Who’s Responsible When AI Causes Harm?: Unpacking the Federal AI Liability Framework Debate

This nonpartisan policy brief, written by an ACE fellow, is republished by The Fulcrum as part of our partnership with the Alliance for Civic Engagement and our NextGen initiative — elevating student voices, strengthening civic education, and helping readers better understand democracy and public policy.

Key takeaways

  • The U.S. has no national AI liability law. Instead, a patchwork of state laws has emerged, so a person’s legal protections depend on where they live.
  • It’s often unclear who is legally responsible when AI causes harm. This gap leaves many people with no clear path to seek help.
  • In March 2026, the White House and Congress introduced major proposals to establish a federal standard, but there is significant disagreement about whether that standard should prioritize protecting innovation or protecting people harmed by AI systems.

Background: A Patchwork of State Laws

Without a national AI law, states have been filling in the gaps on their own. The result is an uneven landscape where a person’s legal protections depend entirely on which state they live in.

Colorado was among the first states to pass a broad AI accountability law. Its 2024 Colorado AI Act, set to take effect in 2026, targets “high-risk” AI systems, defined as tools used to make important decisions about people. Under the law, companies must test their AI tools for bias, notify people when AI played a role in a decision about them, and allow people to appeal those decisions.

New York passed the RAISE Act, which sets standards for companies that use AI in high-stakes areas such as employment, housing, healthcare, and financial services, including requirements to disclose AI use and assess the risk of harm before deploying a system. New York City separately passed a law in 2023 requiring employers to audit their AI hiring tools for bias and notify job applicants when such tools are used.

Other states, including Utah, Washington, California, and Illinois, have passed or proposed their own transparency and accountability measures, often targeting specific uses of AI in healthcare, hiring, or public benefits decisions.

The result is confusing both for companies that must comply with rules in multiple states and for ordinary people trying to understand their rights. A person fired by an algorithm in Colorado has different legal options than someone in the same situation in a state with no AI law at all. This patchwork is the central issue that federal lawmakers are now trying to solve.

Who Pays When AI Gets It Wrong?

Artificial intelligence is now being used to make decisions that directly affect people’s lives. When these systems make mistakes, the consequences can be serious and lasting.

However, under current law, it is often unclear who is legally responsible when AI causes harm. Three groups could potentially be held liable:

  • AI developers: The companies that build and train the underlying AI model
  • AI deployers: The businesses that purchase and use the AI to make real-world decisions
  • Users: The individuals who interact with or act on AI-generated results

Traditional liability law was designed for a world where a human being made the decision. When an AI system produces a biased hiring recommendation, a wrong medical diagnosis, or a faulty credit decision, it is not always clear which party in the chain is responsible. This legal gray zone means that people harmed by AI may have no clear path to getting help, and companies may have little incentive to make their systems safer before releasing them.

What Congress and the White House Are Proposing

With states passing laws that often contradict one another, pressure has mounted on the federal government to set a single national standard. In March 2026, two major proposals emerged within days of each other.

The White House National Policy Framework for AI, released on March 20, 2026, outlines the Trump administration’s vision for federal AI law. The framework takes a “light-touch” approach focused on protecting innovation. Key positions include:

  • The federal government should set one national AI standard that states must follow, though states would still be allowed to enforce their own laws protecting children, consumers, and public safety.
  • AI developers should not be held responsible when a third party misuses their tools.
  • Congress should avoid open-ended liability rules that could generate excessive lawsuits.
  • Rather than creating a new federal AI agency, oversight should go through existing agencies like the Federal Trade Commission (FTC), the Food and Drug Administration (FDA), and the Equal Employment Opportunity Commission (EEOC).

The TRUMP AMERICA AI Act, a draft bill released on March 18, 2026, takes a more detailed approach. Key proposals include:

  • Establish a federal “duty of care” for chatbot developers, meaning companies would be legally required to take reasonable steps to prevent their tools from causing harm.
  • Require annual third-party audits of high-risk AI systems.
  • Override most state AI laws with a unified federal standard.

The Debate

Those who favor limiting AI liability, including many in the Trump administration and the technology industry, argue that holding developers legally responsible for every possible way their tools are used would make building AI in the United States far too costly and legally risky. Supporters of this view also warn that pushing liability too far could drive AI development to countries with fewer regulations, meaning Americans would end up using AI built with even less oversight than they would have had otherwise. On the question of state laws, this side argues that a single, clear federal standard serves everyday people better than the current patchwork.

However, proponents of stronger AI liability argue that without real legal consequences, companies have little financial incentive to make their systems safer before releasing them to the public. The profit motive, they argue, pushes developers to move fast, leaving the people most likely to be harmed with the least power to push back. Those who want strong AI liability also note that most existing laws, including civil rights statutes and consumer protection rules, were written long before AI existed and were not designed for situations where a machine makes the important decision. Courts struggle to apply these older frameworks to AI harms, leaving many people with no practical legal recourse. Finally, this side expresses concern that a federal law designed primarily to limit liability could end up setting a weak national floor that overrides stronger protections that states like Colorado and Illinois have already put in place.

FAQ

What does “liability” mean?

  • Liability is a legal term meaning responsibility. If a company is “liable” for harm caused by AI, it can be required to pay damages or face other legal consequences.

What is a “high-risk” AI system?

  • High-risk AI systems are tools that make or assist in making major decisions about people’s lives, such as whether someone gets hired, qualifies for a loan, or receives a particular medical treatment. Several state laws, including Colorado’s, use this category to decide which AI tools need the most oversight.

What is federal preemption?

  • Federal preemption means a federal law overrides state laws that conflict with it.

Why can’t existing laws handle AI harms?

  • Most existing laws were written before AI existed and were designed for situations where a human being made the important decision. When an algorithm makes that decision instead, it is often unclear how to apply old rules.

Has any federal AI liability law passed yet?

  • As of April 2026, no comprehensive federal AI liability law has been enacted. The White House framework and the TRUMP AMERICA AI Act are both proposals, not law. Congress is currently debating next steps.

Margaret Wakefield is an ACE fellow.

Who’s Responsible When AI Causes Harm?: Unpacking the Federal AI Liability Framework Debate was first published by ACE and republished with permission.

