The Battle To Regulate AI Discrimination


As states race to regulate AI, they face significant challenges in crafting effective legislation that both protects consumers and allows for continued innovation in this rapidly evolving field.

What is Algorithmic Discrimination?

Often referred to as 'AI bias,' algorithmic discrimination is discrimination that results from prejudice embedded in the data used to create AI algorithms, usually because AI systems end up reflecting very human biases. These biases can creep in for a number of reasons: the data used to train the models may over- or under-represent certain groups, or a developer may unfairly weight factors in algorithmic decision-making based on their own conscious or unconscious biases.
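One common screen for this kind of bias in selection decisions is the "four-fifths rule" long used in U.S. employment law: if any group's selection rate falls below 80% of the most-favored group's rate, the system warrants closer review. A minimal sketch, using invented data and hypothetical function names:

```python
# Sketch of a four-fifths-rule check on selection decisions.
# The data and function names are illustrative, not from any real system.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs; returns selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, chosen in decisions:
        totals[group] += 1
        selected[group] += int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(decisions, threshold=0.8):
    """Flag groups whose selection rate is below 80% of the best-treated group's."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    return {g: rate / top >= threshold for g, rate in rates.items()}

# Group A is selected 60% of the time, group B only 35% of the time.
decisions = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 35 + [("B", 0)] * 65
print(four_fifths_check(decisions))  # {'A': True, 'B': False} — B falls below 80% of A's rate
```

Checks like this are crude (they say nothing about why the rates differ), but they illustrate the kind of measurement that impact-assessment requirements in the bills discussed below tend to contemplate.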


Some high-profile examples include COMPAS, a system designed to predict whether U.S. criminals are likely to reoffend. It consistently classed black defendants as significantly more likely to reoffend than white defendants, even when all other factors were equal. Then there is the medical system that predicted which patients needed extra care and routinely underestimated black patients' needs. AI is also increasingly used to help filter job candidates, and in one instance an English tutoring company found itself on the wrong end of a $365,000 settlement with the U.S. Equal Employment Opportunity Commission after its system excluded female applicants over 55 and male applicants over 60 regardless of qualifications and experience.
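The COMPAS findings, as reported by ProPublica, centered on unequal false positive rates: defendants who did not go on to reoffend but were nonetheless scored high-risk, at very different rates across groups. A sketch of that style of audit, using invented data rather than the actual COMPAS records:

```python
# Hypothetical fairness audit comparing false positive rates across groups.
# Data is invented for illustration; this is not the real COMPAS dataset.

def false_positive_rate(records):
    """records: list of (predicted_high_risk, reoffended) boolean pairs."""
    preds_for_non_reoffenders = [pred for pred, actual in records if not actual]
    if not preds_for_non_reoffenders:
        return 0.0
    # Fraction of non-reoffenders who were still flagged high-risk.
    return sum(preds_for_non_reoffenders) / len(preds_for_non_reoffenders)

def audit_fpr(by_group):
    """Return each group's false positive rate and the largest gap between groups."""
    rates = {g: false_positive_rate(r) for g, r in by_group.items()}
    return rates, max(rates.values()) - min(rates.values())

# Two groups with identical reoffense rates but very different scoring:
group_a = [(True, False)] * 20 + [(False, False)] * 30 + [(True, True)] * 50
group_b = [(True, False)] * 40 + [(False, False)] * 10 + [(True, True)] * 50
rates, gap = audit_fpr({"A": group_a, "B": group_b})
print(rates, gap)  # group B's non-reoffenders are flagged twice as often as group A's
```

A gap like this can exist even when the model's overall accuracy is the same for both groups, which is part of why these disparities can go undetected without a deliberate audit.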

Critics of the new legislation argue that such cases are overblown: that errors of this kind are rare, easily detected, and easily fixed, and that in any case existing law is sufficient to regulate the industry, as the court cases already brought demonstrate. But concerns run deep enough that many states are now seeking to legislate to manage the issue.

Colorado SB205: The Template for AI Regulation

Colorado's Artificial Intelligence Act, which was enacted in 2024 and takes effect on February 1, 2026, established the first comprehensive framework for regulating high-risk AI systems at the state level. The law adopts a risk-based approach, focusing on systems that make or substantially influence "consequential decisions" affecting consumers in areas such as employment, education, financial services, housing, and healthcare.

At its core, SB205 aims to prevent "algorithmic discrimination," which the law defines as "any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals" based on protected characteristics including age, race, ethnicity, religion, sex, and disability status.

The Colorado law requires developers of high-risk AI systems to use "reasonable care" to protect consumers from discrimination risks, document system limitations, provide transparency to deployers, and disclose known discrimination risks. Deployers, meanwhile, must implement risk management policies, conduct impact assessments, and provide consumers with notice when AI systems are used to make significant decisions about them.

The Implementation Challenge and Proposed Updates

Despite the law's well-intentioned framework, implementing SB205 has proven challenging. Governor Polis expressed concerns about its "complex compliance regime" when signing the bill and urged the legislature to simplify it before the 2026 effective date. To address these concerns, Colorado established an AI Impact Task Force to review and refine the legislation.

In February 2025, the Task Force issued a report outlining several key areas where the Act could be "clarified, refined, and otherwise improved." The Task Force, after multiple meetings with stakeholders, recommended several substantive changes to make implementation more feasible.

Central to these recommendations is revising critical definitions that have proven problematic, including "consequential decisions," "algorithmic discrimination," "substantial factor," and "intentional and substantial modification." The Task Force also suggested revamping the list of exemptions to what qualifies as a "covered decision system" and updating the scope of information that developers must provide to deployers.

The report also addresses practical compliance concerns, recommending changes to the triggering events and timing for impact assessments as well as updates to the requirements for deployer risk management programs. The Task Force is even considering whether to replace the current duty of care standard for developers and deployers with something potentially more or less stringent.

Small businesses may find relief in the Task Force's consideration of whether to expand the small business exemption, which currently applies only to businesses with fewer than 50 employees. The report also suggests providing businesses with a cure period for certain types of non-compliance before Attorney General enforcement actions begin.

These recommendations reflect the complexity of implementing such comprehensive AI legislation and the need for ongoing refinement as understanding of AI systems and their impacts evolves.

The 2025 Legislative Landscape

Despite these implementation hurdles, numerous states have introduced their own versions of algorithmic discrimination legislation in 2025. These bills share significant structural similarities with Colorado's SB205, often drawing directly on its framework. BillTrack50 is tracking the bills in current 2025 sessions that directly relate to AI bias as they progress through the state legislatures.


State-Specific Approaches and Variations

Most bills follow a framework similar to Colorado's SB205, focusing on "high-risk" AI systems that make or substantially influence "consequential decisions." The term "algorithmic discrimination" appears in 24 bills, "impact assessment" requirements in 21, and "risk management" provisions in 18, suggesting a developing consensus around core regulatory approaches.

Despite structural similarities, several states have introduced unique approaches. Illinois's SB2203 establishes the "Preventing Algorithmic Discrimination Act" and provides for a private right of action. New York's S01169 requires independent third-party audits every 18 months. Vermont's H0341 introduces the concept of "inherently dangerous artificial intelligence systems" and creates a new Division of Artificial Intelligence.

The bills also differ in their thresholds for compliance. While Colorado exempts businesses with fewer than 50 employees, Hawaii (SB59) and Illinois (SB2203) set the threshold at just 25 employees. Texas (HB1709) goes further by explicitly prohibiting certain AI uses, such as social scoring and manipulating human behavior.

Enforcement mechanisms vary significantly as well. While most grant exclusive authority to state attorneys general, the penalties range widely. Texas's HB1709 imposes financial penalties up to $200,000 per violation. Only a few states, including Illinois, New York, and Vermont, provide for private lawsuits.

Beyond consumer protection, some states are pursuing legislation to promote AI innovation. Washington's HB1833 and HB1942 establish AI grant programs focused on economic development, while Oklahoma's HB1916 includes provisions for an AI Workforce Development Program.

A growing subset of bills focuses specifically on government AI use. Virginia's HB2046 and SB1214 establish regulations for AI systems used by public bodies, while Texas's SB1964 requires government agencies to inventory their AI systems and conduct regular risk assessments.

Health insurance features heavily, as you might expect. Bills such as Arkansas's HB1816 seek to restrict or heavily regulate AI usage in making decisions regarding insurance coverage.

And then there's Missouri. There's always Missouri. HB1462 establishes the "AI Non-Sentience and Responsibility Act" which, among other things, prohibits AI from getting married. I'm sure Missourians feel safer already.

This diverse legislative landscape reveals states drawing inspiration from Colorado while introducing innovations to address their specific priorities, collectively shaping not only the regulatory environment for AI but also the national conversation about balancing innovation with protection against algorithmic discrimination.

The Brussels Effect?

According to Dean W. Ball's analysis in "The EU AI Act is Coming to America," the similarities in state approaches suggest a coordinated effort to import regulatory approaches from abroad, potentially adding significant compliance costs without clear evidence of widespread harm. Ball compares Colorado's act and other proposed legislation to the EU AI Act and makes a convincing argument that legislators are copying EU regulations without considering the full implications for the U.S. context, an instance of what is known as the 'Brussels Effect.' Ball notes that after nearly a year of meetings, "Colorado does not know how to implement this law. Rather than exercising legislative restraint, however, other states are piling in" with similar legislation.

A critical consideration in evaluating these bills is their potential economic impact. Estimates from the Center for European Policy Studies looking at the EU AI Act suggest similar regulatory frameworks in the U.S. could add between 5% and 17% to corporate spending on AI development or deployment.

The key question becomes whether algorithmic discrimination is prevalent enough to justify these costs. Ball questions this, noting limited evidence of widespread algorithmic harm. He argues that "in pure economic terms, these algorithmic discrimination [laws] would be among the most significant pieces of technology policy passed in the US during my lifetime—even if compliance costs come in at the low end of these estimates."

A Path Forward: Balancing Protection and Innovation

Most of the legislation currently being considered by legislatures around the country will fail, as most proposed legislation does. But where bills do pass, states will face an uphill battle in implementing them effectively. Under the current administration, they are unlikely to receive any support from the White House. During his first week in office, President Trump signed an executive order revoking Biden-era programs promoting the safe use of artificial intelligence. The order also calls for an AI Action Plan to “enhance America’s position as an AI powerhouse and prevent unnecessarily burdensome requirements.” States will be in no doubt that if they want additional guardrails, they will have to create them for themselves.

The challenge for policymakers is crafting legislation that effectively prevents algorithmic discrimination without imposing prohibitive compliance costs or stifling innovation. Based on the current legislative landscape and implementation challenges, several approaches could lead to more balanced regulation:

First, lawmakers should follow Colorado's Task Force lead in refining key definitions with greater precision. The ambiguity around "substantial factor" and "consequential decision" creates significant uncertainty for businesses. Clear, narrow definitions would help companies understand when and how they must comply with these laws.

Second, states should consider expanding exemptions for small businesses and low-risk applications. The Colorado Task Force has recognized this need, suggesting a reconsideration of the current 50-employee threshold for exemptions. A more nuanced framework could better balance protection and innovation.

Third, rather than mandating specific technical approaches, legislation should establish outcome-based standards. This would provide flexibility for companies to develop innovative compliance strategies while still achieving the desired result of preventing algorithmic discrimination.

Fourth, states should adopt the Task Force's recommendation to provide cure periods before enforcement actions. This would give companies time to address compliance issues without facing immediate penalties, fostering a collaborative rather than punitive regulatory environment.

Finally, states should coordinate their efforts to avoid creating a patchwork of incompatible regulations. The current proliferation of similar but slightly different state laws threatens to create compliance nightmares for businesses operating across state lines.

As Governor Polis noted when signing Colorado's SB205, there's a delicate balance to strike: protecting consumers while avoiding stifling business innovation. The Colorado Task Force's recommendations represent a thoughtful attempt to refine the law based on practical implementation concerns. Other states would be wise to observe these developments before rushing to pass their own versions.

The coming months will be critical as these bills work their way through state legislatures and as Colorado considers amending its pioneering law. Their outcomes will significantly shape the future of AI regulation in America, potentially creating a de facto national standard even in the absence of federal legislation. The challenge for policymakers is ensuring that these regulations effectively protect consumers while allowing American companies to remain at the forefront of AI innovation.


The Battle to Regulate AI Discrimination was originally published by BillTrack50 and shared with permission.

Stephen Rogers is a Data Wrangler at BillTrack50.
