
Photo by Steve A Johnson on Unsplash
Who’s Responsible When AI Causes Harm?: Unpacking the Federal AI Liability Framework Debate
May 07, 2026
This nonpartisan policy brief, written by an ACE fellow, is republished by The Fulcrum as part of our partnership with the Alliance for Civic Engagement and our NextGen initiative — elevating student voices, strengthening civic education, and helping readers better understand democracy and public policy.
Key takeaways
- The U.S. has no national AI liability law. Instead, a patchwork of state laws has emerged, leaving a person’s legal protections dependent on the state where they live.
- It’s often unclear who is legally responsible when AI causes harm. This gap leaves many people with no clear path to seek help.
- In March 2026, the White House and Congress introduced major proposals to establish a federal standard, but there is significant disagreement about whether that standard should prioritize protecting innovation or protecting people harmed by AI systems.
Background: A Patchwork of State Laws
Without a national AI law, states have been filling in the gaps on their own. The result is an uneven landscape where a person’s legal protections depend entirely on which state they live in.
Colorado was among the first states to pass a broad AI accountability law. Its 2024 Colorado AI Act, set to take effect in 2026, targets “high risk” AI systems which are defined as tools that are used to make important decisions about people. Under the law, companies must test their AI tools for bias, notify people when AI played a role in a decision about them, and allow people to appeal those decisions.
New York passed the RAISE Act, which establishes standards for companies that use AI in high-stakes settings, including requirements to disclose AI use and assess the risk of harm before using an AI system in high-stakes areas like employment, housing, healthcare, or financial services. New York City separately passed a law in 2023 requiring employers to audit their AI hiring tools for bias and notify job applicants when such tools are used.
Other states including Utah, Washington, California, and Illinois have each passed or proposed their own transparency and accountability measures, often targeting specific uses of AI in healthcare, hiring, or in public benefits decisions.
The result is a confusing compliance system for companies operating across multiple states and for average citizens trying to understand their rights. A person fired by an algorithm in Colorado has different legal options than someone in the same situation in a state with no AI law at all. This patchwork is the central issue that federal lawmakers are now trying to solve.
Who Pays When AI Gets it Wrong?
Artificial intelligence is now being used to make decisions that directly affect people’s lives. When these systems make mistakes, the consequences can be serious and lasting.
However, under current law, it is often unclear who is legally responsible when AI causes harm. Three groups could potentially be held liable:
- AI developers: The companies that build and train the underlying AI model
- AI deployers: The businesses that purchase and use the AI to make real-world decisions
- Users: The individuals who interact with or act on AI-generated results
Traditional liability law was designed for a world where a human being made the decision. When an AI system produces a biased hiring recommendation, a wrong medical diagnosis, or a faulty credit decision, it is not always clear which party in the chain is responsible. This legal gray zone means that people harmed by AI may have no clear path to getting help, and companies may have little incentive to make their systems safer before releasing them.
What Congress and the White House Are Proposing
With states passing laws that often contradict each other, pressure has been placed on the federal government to set a single national standard. In March 2026, two major proposals emerged within days of each other.
The White House National Policy Framework for AI was released on March 20, 2026, and it outlines the Trump administration’s vision for federal AI law. The framework takes a “light-touch” approach focused on protecting innovation. Key positions include:
- The federal government should set one national AI standard that states must follow, though states would still be allowed to enforce their own laws protecting children, consumers, and public safety.
- AI developers should not be held responsible when a third party misuses their tools.
- Congress should avoid open-ended liability rules that could generate excessive lawsuits.
- Rather than creating a new federal AI agency, oversight should go through existing agencies like the Federal Trade Commission (FTC), the Food and Drug Administration (FDA), and the Equal Employment Opportunity Commission (EEOC).
The TRUMP AMERICA AI Act, a draft bill released on March 18, 2026, takes a more detailed approach. Key proposals include:
- Establish a federal “duty of care” for chatbot developers, meaning companies would be legally required to take reasonable steps to prevent their tools from causing harm.
- Require annual third-party audits of high-risk AI systems.
- Override most state AI laws with a unified federal standard.
The Debate
Those who favor limiting AI liability, including many in the Trump administration and many in the technology industry, argue that holding developers legally responsible for every possible way their tools are used would make building AI in the United States far too costly and legally risky. Supporters of this view also warn that pushing liability too far could drive AI development to other countries with fewer regulations, meaning Americans would end up using AI built with even less oversight than they would have had otherwise. On the question of state laws, those who want to limit AI liability also argue that a single, clear federal standard is better for everyday people than the current situation.
However, proponents of stronger AI liability argue that without real legal consequences, companies have little financial incentive to invest in making their systems safer before releasing them to the public. The profit motive, they argue, pushes developers to move fast. As a result, the American people are most likely to be harmed and have the least power to push back. Those who want strong AI liability also argue that most existing laws, including civil rights statutes and consumer protection rules, were written long before AI existed and were not designed to handle situations where a machine made the important decision. Courts struggle to apply these older frameworks to AI harms, leaving many people with no practical legal recourse. Finally, this side expresses concern that a federal law designed primarily to limit liability could end up setting a weak national floor that overrides stronger protections that states like Colorado and Illinois have already put in place.
FAQ
What does “liability” mean?
- Liability is a legal term meaning responsibility. If a company is “liable” for harm caused by AI, it can be required to pay damages or face other legal consequences.
What is a “high-risk” AI system?
- High-risk AI systems are tools that make or assist in making major decisions about people’s lives, such as whether someone gets hired, qualifies for a loan, or receives a particular medical treatment. Several state laws, including Colorado’s, use this category to decide which AI tools need the most oversight.
What is federal preemption?
- Federal preemption means a federal law overrides state laws that conflict with it.
Why can’t existing laws handle AI harms?
- Most existing laws were written before AI existed and were designed for situations where a human being made the important decision. When an algorithm makes that decision instead, it is often unclear how to apply old rules.
Has any federal AI liability law passed yet?
- As of April 2026, no comprehensive federal AI liability law has been enacted. The White House framework and the TRUMP AMERICA AI Act are both proposals, not law. Congress is currently debating next steps.
Margaret Wakefield is an ACE fellow.
Who’s Responsible When AI Causes Harm?: Unpacking the Federal AI Liability Framework Debate was first published by ACE and republished with permission.

Relatives and friends mourn the death of 26-year-old Aoni Sami Hasaballah on May 5, 2026 in Gaza City, Gaza. Hasaballah was killed today during Israeli airstrikes in the Al-Zaytoun neighborhood.
(Photo by Ahmad Hasaballah/Getty Images)
‘We Refuse to Live in a State of War’: Inside the Israeli‑Palestinian Movements Imagining a Shared Future
May 06, 2026
Amid the political and military standoff among the United States, Israel, and Iran, it is civilians — the people with no say in these decisions — who bear the fear, disruption, and uncertainty of every strike and escalation. This week, The Fulcrum’s executive editor, Hugo Balta, reports from Israel with a single aim: to humanize the war by focusing not on the spectacle of Operation Epic Fury, but on the ordinary lives being reshaped by it.
TEL AVIV — As the escalating confrontation with Iran increasingly dominates regional and international attention, the war in Gaza has slipped from the global spotlight. But inside Israel, far from the televised images of missile interceptions and diplomatic brinkmanship, a different struggle is unfolding — one led not by governments or generals, but by grassroots movements insisting that Israelis and Palestinians can still imagine a shared future.
These groups, long dismissed as marginal or naïve, say the current moment has only sharpened their resolve. As Israel expands military operations and deepens its control inside Gaza, they argue that the absence of a political horizon is itself a threat to security — and that coexistence, not separation, is the only path out of perpetual conflict.
A counter‑narrative in the streets
“Security and cooperation are not contradictions.” For Nadav Oren, an organizer with the Arab‑Jewish movement Standing Together, the idea that safety requires walls, checkpoints, and permanent division is a false choice.
“The wish to feel safe from threats does not contradict cooperating with Palestinians,” Oren said in an interview with The Fulcrum. “We are representatives of both peoples fighting side by side against violence and for security from threats.”
Standing Together has become one of the most visible civil society forces in Israel since October, organizing demonstrations, mutual‑aid networks, and public campaigns calling for a ceasefire and a political solution. The group’s rallies have repeatedly faced police repression — a trend documented by Israeli and international media — but Oren says this has only strengthened public support.
“When an officer hits people holding signs, it’s clear who fights for love and who fights for hate and separation,” he said. “More and more people join the movement.”
For Oren, the deeper obstacle is psychological: the enforced separation that prevents Israelis and Palestinians from seeing one another as human.
“Most Israelis have never met Palestinians from Gaza or the West Bank,” he said. “Palestinians only meet Israelis who prevent them from moving around their own land. It’s almost impossible to view people in a humane way when you only see them hurting you.”
A summit that defied despair
That belief — that human contact can still shift political reality — was at the heart of last week’s People’s Peace Summit in Tel Aviv, a rare joint gathering of Israeli and Palestinian peace activists. The event brought together Standing Together, Women Wage Peace, bereaved families, coexistence organizations, and civil society leaders for a day of panels, workshops, and public dialogue.
For Manuela Rotstein of Women Wage Peace, the summit was more than a conference. It was proof that, even in wartime, thousands of people are still willing to show up for a different vision of the future.
“Thursday was a very important day for all of us who want a better future,” Rotstein said. “Despite the extremely difficult reality we are living through, thousands of people came to the Summit to hear panels of academics, activists, and politicians proposing solutions to the conflict.”
Women Wage Peace, founded after the 2014 Gaza war, has grown into one of Israel’s largest grassroots peace movements. Its members — Jewish, Muslim, Christian, secular, religious, left‑wing, right‑wing — are united by a single demand: that political leaders pursue a negotiated agreement to end the conflict.
Rotstein said what struck her most at the summit was the determination she saw in the crowd, especially among younger participants.
“Throughout the day, I spoke with dozens of young people and people of all ages, and I found that most of them are looking for a way out, looking for a better world,” she said. “They refuse to resign themselves to living in a state of war.”
Her message is blunt: wars end, but only agreements create peace.
“Conflicts are ultimately resolved through an agreement between both sides,” she said. “That agreement may come after a long dispute or after a terrible war, as in our case, but it is always the agreement — the one that aims to change internal dynamics — that leads to peace.”
A fragile but persistent hope
The summit’s organizers called for an immediate ceasefire, the release of hostages, and a renewed commitment to a negotiated future. Speakers emphasized that grassroots cooperation remains possible even amid rising regional tensions, arguing that only sustained dialogue and shared security can prevent further escalation.
Their message stands in stark contrast to the political climate in Israel, where public debate is dominated by military strategy, national trauma, and fears of a widening regional war. Yet the activists insist that ignoring the political dimension of the conflict is itself a form of denial.
“People say now is not the time,” Rotstein said. “But if not now — when? When things get even worse?”
For many in these movements, the confrontation with Iran has only underscored the urgency of addressing the unresolved conflict next door. They argue that regional stability is impossible without a political solution for Israelis and Palestinians — and that civil society must lead where governments have failed.
Whether these voices can influence policy remains uncertain. But their presence — in the streets, at summits, in conversations across communities — challenges the narrative that Israelis and Palestinians are destined for endless war.
Oren, Rotstein, and others say they are not naïve. They know the obstacles are immense. But they also know that political change often begins long before it becomes visible.
“We are all humans living in the same place,” Oren said. “Whether it’s the same land or the same globe, we’re all here together.”
For now, their work continues — often unnoticed, often uphill — but grounded in a belief that the future is not yet written, and that ordinary people still have the power to shape it.
Hugo Balta is the executive editor of The Fulcrum and the publisher of the Latino News Network.
Coverage of this report was made possible in part with support from Fuente Latina.

A deep dive into America’s healthcare cost crisis, comparing reform to a modern “moonshot.” Explores payment models, rising costs, and lessons from John F. Kennedy’s space race vision to drive systemic change.
IronHeart/Getty Images
The Moonshot America Needs to Solve Its Healthcare Crisis
May 06, 2026
In 1961, President John F. Kennedy told the nation, “We choose to go to the moon.” It’s often remembered as a moment of national ambition. In reality, the United States was locked in a Cold War with the Soviet Union, and the fear of falling behind in technological dominance made the mission unavoidable.
Today’s space race is driven by a different force. Governments and private companies are investing billions to capture economic advantages, from satellite infrastructure to advanced computing to the next frontier of resource extraction.
Moonshots don’t happen simply because leaders “choose” to pursue them. They often happen when fear or financial motivation grows so strong that inaction becomes riskier than action.
Those same motivations will determine whether the U.S. government or private companies choose to address healthcare’s growing affordability crisis. The challenge? Solving it will require the equivalent of a moonshot.
A crisis too big for small fixes
The scale of the cost crisis makes incremental fixes ineffective.
The United States spends roughly $15,000 per person on healthcare each year, nearly double what peer nations spend. Employer-sponsored family coverage now averages $27,000 annually, with workers paying almost $7,000 out of pocket. Projected annual increases of 7% to 9% will increasingly constrain wages, benefits, and hiring.
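To make the compounding effect of those projected increases concrete, here is a minimal sketch. The $27,000 premium figure is the article's cited average; the 8% growth rate is an assumed midpoint of the cited 7%-9% range, not a figure from the article:

```python
# Project the average employer-sponsored family premium forward under
# the article's projected 7%-9% annual growth; 8% is an assumed midpoint.
def project_premium(start: float, rate: float, years: int) -> float:
    """Compound an annual premium `years` years into the future."""
    return start * (1 + rate) ** years

CURRENT_PREMIUM = 27_000  # average annual family premium, in dollars

for years in (5, 10):
    projected = project_premium(CURRENT_PREMIUM, 0.08, years)
    print(f"Year {years}: ${projected:,.0f}")
```

At 8% annual growth, the premium roughly doubles in about nine years (rule of 72), which is why incremental fixes look inadequate against costs of this scale.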
Although many factors contribute to rising costs, they all share a common foundation: how medical care is reimbursed. Until the nation changes how doctors and hospitals are paid, costs will keep rising.
American healthcare runs on two models: fee-for-service, which rewards volume, and pay-for-value, which aims to reward outcomes. Neither is working as intended. Fixing either will require a system-wide change on a scale comparable to a moonshot.
A moonshot to fix fee-for-service
For 90% of working Americans, care is paid through fee-for-service.
Doctors and hospitals are reimbursed for each service: visits, tests, procedures, and prescriptions. The more care they deliver, the more revenue they receive—regardless of whether it improves patient outcomes.
This pay-for-volume approach works in many industries. But not in healthcare. That’s because providers of medical care (not patients) drive most clinical decisions, usually without price transparency. Plus, the incentives are perverse. Seeing a patient twice instead of once doubles revenue, and more complex procedures generate significantly more income, even when simpler alternatives are equally effective.
The only way a fee-for-service methodology can control costs is if there is either (a) robust competition or (b) strict price controls. Neither exists today.
Over the past two decades, consolidation has reduced competition across hospitals, physician groups, and drug purchasing, driving up costs. In parallel, pharmaceutical companies have used patent protections to launch ever-higher-priced drugs. Currently, the average annual list price for new medications is $370,000.
Although Congress can impose price limits and regulators can challenge monopolies, the political risk of those actions has long outweighed the cost of doing nothing. That will change only when growing unaffordability causes voters to replace elected officials who fail to implement solutions to lower medical costs.
The pay-for-value model’s moonshot
Pay-for-value was designed to fix fee-for-service’s core flaw: rewarding volume instead of superior medical outcomes.
In its simplest form, the model pays providers to keep patients healthy rather than to deliver more services. At its most advanced, it relies on capitation: a fixed payment to a group of doctors to manage care for a defined population.
In theory, this should reduce hospitalizations, improve chronic disease management, and lower costs. In practice, it has not.
In most cases, insurers (not providers) receive the capitated payment and, instead of passing those funds directly to clinicians, they continue to pay for care using fee-for-service. Thus, the same perverse incentives persist.
Research shows that clinicians don’t change how they practice until roughly 63% of their revenue comes from fully capitated payments. Below that threshold, fee-for-service incentives are more lucrative and dominate.
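As a toy illustration of that revenue-mix threshold: the 63% figure comes from the research the article cites, but the function and the dollar amounts below are hypothetical:

```python
# Per the research cited above, practice patterns shift only once roughly
# 63% of a clinician's revenue comes from fully capitated payments.
CAPITATION_THRESHOLD = 0.63

def dominant_incentive(capitated: float, fee_for_service: float) -> str:
    """Return which payment incentive dominates for a given revenue mix."""
    share = capitated / (capitated + fee_for_service)
    return "value-driven" if share >= CAPITATION_THRESHOLD else "volume-driven"

# Hypothetical annual revenue mixes, in dollars:
print(dominant_incentive(700_000, 300_000))  # 70% capitated -> value-driven
print(dominant_incentive(400_000, 600_000))  # 40% capitated -> volume-driven
```

The point of the threshold is that a partial shift to capitation changes nothing: below it, fee-for-service incentives still dominate clinical behavior.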
Making pay-for-value work requires structural change. Providers need to organize into groups large enough to manage care across large populations and share financial risk. That shift requires leadership, capital, and investment in data systems.
Those investments will not happen simply because the model is better. They will happen only when insurers and entrepreneurial companies view the financial rewards as too big to ignore.
What would launch a healthcare moonshot?
A healthcare moonshot, like a voyage into space, would involve accepting significant risk.
One possible motivation is fear. When the next recession begins (perhaps sooner rather than later, according to historical analyses), employers are likely to scale back coverage for the 160 million Americans who rely on job-based insurance. Affected workers may vote out incumbents, creating fear in those who remain.
Another possibility is reward. In this scenario, advances in generative AI would improve the management of chronic disease, which affects 3 in 4 Americans. The CDC estimates that up to half of all heart attacks, strokes, and kidney failures could be prevented through effective chronic disease control, generating savings approaching $1 trillion. For insurers or new entrants, capturing even part of that opportunity would create a powerful incentive to act.
Moonshots don’t happen because they are the right thing to do. They happen when fear of loss or the promise of financial gain outweighs the risks involved. Healthcare has not yet reached that point, but medicine’s growing affordability crisis is bringing our nation closer than ever.
Robert Pearl, the author of “ChatGPT, MD,” teaches at both the Stanford University School of Medicine and the Stanford Graduate School of Business. He is a former CEO of The Permanente Medical Group.

Photo by Fine Photographics on Unsplash
After the Court's Voting Rights Decision - How to Protect Black-Majority Districts
May 06, 2026
The Supreme Court recently ruled that Louisiana violated the Constitution in creating a new Black-majority voting district. This came after a federal court had ruled that the previous map, by packing Black voters into a single district, diluted their votes in violation of the Voting Rights Act.
The question is what impact the decision in Louisiana v Callais will have on §2 of the Voting Rights Act ... and on the current gerrymandering contest to gain safe seats in the House. The conservative majority said that the decision left the Act intact. The liberal minority, in a strong dissent by Justice Kagan, said that the practical impact was to "render §2 all but a dead letter," making it likely that existing Black-majority districts will not remain for long.
I agree with Justice Kagan's critique, but I believe there is a way forward.
The Court found that compliance with §2 of the Voting Rights Act does supply the compelling state interest needed to allow the use of race as a factor (in a positive way). But the Court stated that:
1. The Voting Rights Act only guarantees minorities the same opportunity to elect members of their choice as others, which typically depends on where you live and the voting preferences of others in the area; there is a randomness to the process.
2. To allow a minority-majority district, the minority voters need to be in a compact area and sufficiently numerous so as to allow a reasonably configured district. Otherwise, as in the case before them, the district is not in compliance with the Act and race is clearly a discriminating factor.
Because of the facts of this case—a rambling, non-compact district—the district was not in compliance with the Voting Rights Act and thus constituted an unconstitutional use of race as a discriminatory factor.
The decision on these facts makes sense. If Blacks are so scattered that a Black-majority district meeting these criteria cannot be drawn, then the Black vote is not being diluted by White legislators; Blacks have diluted their vote by living apart from one another and in the midst of Whites. That's a natural result of integration. As is commonly said of court decisions: different facts, different decisions.
The decision could have and should have ended there. The troublesome parts of the decision—the necessity of finding an intent to discriminate and the analysis of partisan v racial motives—were totally unrelated to the facts before the Court. These matters would be critical in cases claiming state action to deprive Black voters of a majority district.
The Court stated one would have to show a pattern from which the strong inference is that the state's intent was to use race as a factor, that it drew the districts to afford minority residents less opportunity "because of their race."
And in a final twist, the Court stated that if racial bloc voting could be explained by partisan affiliation—Blacks vote Democratic because they are Democrats, not because they are Black—then a map which diluted their votes would be permissible because the Court has held that partisan manipulation of districts is allowable. (Why, if these two intents—racial and partisan—are evident, the rule is to choose the less problematic one is another matter.)
Bottom line: The Court's imposition of both the need to find an intent to discriminate because of race and the need to show that diluting the Black vote was not the same as diluting the Democratic vote creates a Catch-22 that certainly could gut the Act.
How do you get past this Catch-22? For example, if a large contiguous area of Black voters was carved up and combined with White areas, or if such an area exists but was never made into a district, what would one do?
I would argue that both deprived Blacks of the "same opportunity" as others and provided evidence of the use of race as a factor. In either situation, if one makes the argument that Blacks are prevented from voting with other Blacks, although they live together in a tight, contiguous area with enough voters to have allowed a "reasonably configured" district, then they are being deprived of the same opportunity as other voters to elect representatives of their choosing.
By the Court's own reasoning, this would be a violation of the Act. The Court said that the Act does not require the creation of such a district; I would argue otherwise.
Further, one would argue that Blacks vote Democratic not because they are committed Democrats, but because they are Blacks and Democrats are the Party that fights for the interests of Blacks. Thus, it is their vote as Blacks, not as Democrats, that is being diluted. The analysis required by the Court might show that Blacks vote as a bloc for Democrats, but that does not mean that they vote as Democrats rather than as Blacks.
The Court's analysis is simplistic. For example, Blacks have a desire to elect Black representatives, not just Democrats. There is no question that the creation of Black-majority districts under the Voting Rights Act made possible the major expansion of Black officials at the local, state, and federal levels. And it is highly likely that Southern White Republicans have a particular desire to reduce the number of Black Democratic elected officials, apart from their being Democrats. Thus, the Republican effort in the South to dilute the Black vote is motivated largely by racial concerns, not just partisan ones. But as I show next, this intentional discrimination requirement is invalid.
The 15th Amendment itself says nothing about motivation or intent. It simply states that people cannot be denied rights "on account of race." That is to say, if you're Black, you lose rights; the Amendment doesn't say anything about intent.
The requirement to show intent regarding voting rights was added by the Court in a 1980 case, Mobile v. Bolden. In response, Congress, in 1982, amended the Act and explicitly adopted a "results" test, rather than an "intent" test. In an earlier case, White (1973), which influenced Congress's amendment, there was no mention of discriminatory intent; rather, there was ample evidence that gave rise to an inference that the state had acted to "prevent the election of candidates preferred by minority voters."
The Court notes this action by Congress but then proceeds to ignore it. Nor does it apply "originalist" analysis to the question of the need for intent. And it does not distinguish the case in which the necessity of intent first appeared, Washington v Davis (1976), which involved a facially nondiscriminatory hiring test.
So the key to moving beyond this decision is to argue that by breaking up a cohesive compact Black-majority district or by not creating one where the conditions exist, the state is depriving Blacks of the same opportunity to elect representatives (Black) of their choice, in violation of the Act. Plus, the effect is clearly racial discrimination, and though not required, there is cause to infer such an intent.
The decision in Louisiana v Callais should not be the final word on this matter.
Ronald L. Hirsch is a teacher, legal aid lawyer, survey researcher, nonprofit executive, consultant, composer, author, and volunteer. He is a graduate of Brown University and the University of Chicago Law School and the author of We Still Hold These Truths. Read more of his writing at www.PreservingAmericanValues.com