The Battle To Regulate AI Discrimination

A group of people analyzing AI data.

Getty Images, cofotoisme

As states race to regulate AI, they face significant challenges in crafting effective legislation that both protects consumers and allows for continued innovation in this rapidly evolving field.

What is Algorithmic Discrimination?

Often referred to as 'AI bias', algorithmic discrimination is the result of underlying prejudice in the data used to create AI algorithms, which can ultimately produce discriminatory outcomes - usually because AI systems reflect very human biases. These biases can creep in for a number of reasons. The data used to train the AI models may over- or under-represent certain groups. Bias can also be introduced when a developer unfairly weights factors in algorithmic decision-making based on their own conscious or unconscious biases.
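As a minimal sketch of how skewed training data propagates, consider a toy "model" (hypothetical groups and numbers, standard library only) that simply learns the most common historical outcome for each group. Trained on records where one group was rarely hired, it faithfully reproduces that bias as a rule:

```python
from collections import Counter

# Hypothetical historical hiring records: (group, hired?).
# Group B was rarely hired for reasons unrelated to qualifications.
records = [("A", True)] * 70 + [("A", False)] * 30 + \
          [("B", True)] * 20 + [("B", False)] * 80

def train_majority_model(data):
    """Return a 'model' that predicts the most common past outcome per group."""
    counts = {}
    for group, hired in data:
        counts.setdefault(group, Counter())[hired] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train_majority_model(records)
print(model)  # {'A': True, 'B': False} -- the historical skew becomes the rule
```

Real systems are far more complex, but the failure mode is the same: a model optimized to match past decisions will encode whatever discrimination those decisions contained.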


Some high-profile examples include COMPAS, a system designed to predict whether U.S. criminals are likely to reoffend. It consistently classed black defendants as significantly more likely to reoffend than white defendants, even when all other factors were equal. Then there's the example of a medical system predicting which patients needed extra medical care, which again routinely underestimated black patients' needs. AI is increasingly being used to help filter candidates for jobs, and in one instance an English tutoring company found itself on the wrong end of a $365,000 settlement with the U.S. Equal Employment Opportunity Commission after its system excluded female applicants over 55 and male applicants over 60, regardless of qualifications and experience.
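Disparities like the hiring case above are often quantified with the EEOC's "four-fifths rule": if the selection rate for one group is less than 80% of the rate for the most-favored group, that is generally treated as evidence of adverse impact. A minimal sketch of the check, using hypothetical applicant numbers:

```python
def selection_rate(selected, applicants):
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def adverse_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit numbers for two age groups.
rates = {
    "under_40": selection_rate(48, 100),  # 48% selected
    "over_55":  selection_rate(12, 100),  # 12% selected
}

ratio = adverse_impact_ratio(rates)
print(round(ratio, 2))   # 0.25 -- well below the 0.8 (four-fifths) threshold
print(ratio < 0.8)       # True: evidence of adverse impact
```

This kind of outcome audit is one reason critics argue such errors are detectable under existing law; supporters of new legislation counter that audits only happen if someone is required to run them.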

Critics of the new legislation argue that such cases are overblown - that errors are rare, easily detected, and easily fixed, and that in any case existing law is enough to regulate the industry, as evidenced by the court cases that have already been brought. But concerns run deep enough that many states are now seeking to legislate to manage the issue.

Colorado SB205: The Template for AI Regulation

Colorado's Artificial Intelligence Act, which was enacted in 2024 and takes effect on February 1, 2026, established the first comprehensive framework for regulating high-risk AI systems at the state level. The law adopts a risk-based approach, focusing on systems that make or substantially influence "consequential decisions" affecting consumers in areas such as employment, education, financial services, housing, and healthcare.

At its core, SB205 aims to prevent "algorithmic discrimination," which the law defines as "any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals" based on protected characteristics including age, race, ethnicity, religion, sex, and disability status.

The Colorado law requires developers of high-risk AI systems to use "reasonable care" to protect consumers from discrimination risks, document system limitations, provide transparency to deployers, and disclose known discrimination risks. Deployers, meanwhile, must implement risk management policies, conduct impact assessments, and provide consumers with notice when AI systems are used to make significant decisions about them.

The Implementation Challenge and Proposed Updates

Despite the law's well-intentioned framework, implementing SB205 has proven challenging. Governor Polis expressed concerns about its "complex compliance regime" when signing the bill and urged the legislature to simplify it before the 2026 effective date. To address these concerns, Colorado established an AI Impact Task Force to review and refine the legislation.

In February 2025, the Task Force issued a report outlining several key areas where the Act could be "clarified, refined, and otherwise improved." The Task Force, after multiple meetings with stakeholders, recommended several substantive changes to make implementation more feasible.

Central to these recommendations is revising critical definitions that have proven problematic, including "consequential decisions," "algorithmic discrimination," "substantial factor," and "intentional and substantial modification." The Task Force also suggested revamping the list of exemptions to what qualifies as a "covered decision system" and updating the scope of information that developers must provide to deployers.

The report also addresses practical compliance concerns, recommending changes to the triggering events and timing for impact assessments as well as updates to the requirements for deployer risk management programs. The Task Force is even considering whether to replace the current duty of care standard for developers and deployers with a standard that could be either more or less stringent.

Small businesses may find relief in the Task Force's consideration of whether to expand the small business exemption, which currently applies only to businesses with fewer than 50 employees. The report also suggests providing businesses with a cure period for certain types of non-compliance before Attorney General enforcement actions begin.

These recommendations reflect the complexity of implementing such comprehensive AI legislation and the need for ongoing refinement as understanding of AI systems and their impacts evolves.

The 2025 Legislative Landscape

Despite these implementation hurdles, numerous states have introduced their own versions of algorithmic discrimination legislation in 2025. These bills share significant structural similarities with Colorado's SB205, often drawing directly on its framework. The map below shows bills identified in current 2025 sessions that are directly related to AI bias, updated automatically as the bills progress through the state legislatures.


State-Specific Approaches and Variations

Most bills follow a framework similar to Colorado's SB205, focusing on "high-risk" AI systems that make or substantially influence "consequential decisions." The term "algorithmic discrimination" appears in 24 bills, "impact assessment" requirements in 21, and "risk management" provisions in 18, suggesting a developing consensus around core regulatory approaches.

Despite structural similarities, several states have introduced unique approaches. Illinois's SB2203 establishes the "Preventing Algorithmic Discrimination Act" and provides for a private right of action. New York's S01169 requires independent third-party audits every 18 months. Vermont's H0341 introduces the concept of "inherently dangerous artificial intelligence systems" and creates a new Division of Artificial Intelligence.

The bills also differ in their thresholds for compliance. While Colorado exempts businesses with fewer than 50 employees, Hawaii (SB59) and Illinois (SB2203) set the threshold at just 25 employees. Texas (HB1709) goes further by explicitly prohibiting certain AI uses, such as social scoring and manipulating human behavior.

Enforcement mechanisms vary significantly as well. While most grant exclusive authority to state attorneys general, the penalties range widely. Texas's HB1709 imposes financial penalties up to $200,000 per violation. Only a few states, including Illinois, New York, and Vermont, provide for private lawsuits.

Beyond consumer protection, some states are pursuing legislation to promote AI innovation. Washington's HB1833 and HB1942 establish AI grant programs focused on economic development, while Oklahoma's HB1916 includes provisions for an AI Workforce Development Program.

A growing subset of bills focuses specifically on government AI use. Virginia's HB2046 and SB1214 establish regulations for AI systems used by public bodies, while Texas's SB1964 requires government agencies to inventory their AI systems and conduct regular risk assessments.

Health insurance features heavily, as you might expect. Bills such as Arkansas's HB1816 seek to restrict or heavily regulate AI usage in making decisions regarding insurance coverage.

And then there's Missouri. There's always Missouri. HB1462 establishes the "AI Non-Sentience and Responsibility Act" which, among other things, prohibits AI from getting married. I'm sure Missourians feel safer already.

This diverse legislative landscape reveals states drawing inspiration from Colorado while introducing innovations to address their specific priorities, collectively shaping not only the regulatory environment for AI but also the national conversation about balancing innovation with protection against algorithmic discrimination.

The Brussels Effect?

According to Dean W. Ball's analysis in "The EU AI Act is Coming to America", the similarities in state approaches suggest a coordinated effort to import regulatory approaches from abroad, potentially adding significant compliance costs without clear evidence of widespread harm. Ball compares Colorado's act and other proposed legislation to the EU AI Act and makes a convincing argument that legislators are copying EU regulations without considering the full implications for the U.S. context - an instance of the so-called 'Brussels Effect'. Ball notes that after nearly a year of meetings, "Colorado does not know how to implement this law. Rather than exercising legislative restraint, however, other states are piling in" with similar legislation.

A critical consideration in evaluating these bills is their potential economic impact. Estimates from the Center for European Policy Studies looking at the EU AI Act suggest similar regulatory frameworks in the U.S. could add between 5% and 17% to corporate spending on AI development or deployment.

The key question becomes whether algorithmic discrimination is prevalent enough to justify these costs. Ball questions this, noting limited evidence of widespread algorithmic harm. He argues that "in pure economic terms, these algorithmic discrimination [laws] would be among the most significant pieces of technology policy passed in the US during my lifetime—even if compliance costs come in at the low end of these estimates."

A Path Forward: Balancing Protection and Innovation

Most of the legislation currently being considered by legislatures around the country will fail, as most proposed legislation does. But where bills do pass, states will face an uphill battle in implementing their measures effectively. Under the current administration, they are unlikely to receive any support from the White House. During his first week in office, President Trump signed an executive order revoking Biden-era programs promoting the safe use of artificial intelligence. The order also calls for an AI Action Plan to “enhance America’s position as an AI powerhouse and prevent unnecessarily burdensome requirements.” States will be in no doubt that if they want any additional guardrails, they will have to create them for themselves.

The challenge for policymakers is crafting legislation that effectively prevents algorithmic discrimination without imposing prohibitive compliance costs or stifling innovation. Based on the current legislative landscape and implementation challenges, several approaches could lead to more balanced regulation:

First, lawmakers should follow the lead of Colorado's Task Force in refining key definitions with greater precision. The ambiguity around "substantial factor" and "consequential decision" creates significant uncertainty for businesses. Clear, narrow definitions would help companies understand when and how they must comply with these laws.

Second, states should consider expanding exemptions for small businesses and low-risk applications. The Colorado Task Force has recognized this need, suggesting a reconsideration of the current 50-employee threshold for exemptions. A more nuanced framework could better balance protection and innovation.

Third, rather than mandating specific technical approaches, legislation should establish outcome-based standards. This would provide flexibility for companies to develop innovative compliance strategies while still achieving the desired result of preventing algorithmic discrimination.

Fourth, states should adopt the Task Force's recommendation to provide cure periods before enforcement actions. This would give companies time to address compliance issues without facing immediate penalties, fostering a collaborative rather than punitive regulatory environment.

Finally, states should coordinate their efforts to avoid creating a patchwork of incompatible regulations. The current proliferation of similar but slightly different state laws threatens to create compliance nightmares for businesses operating across state lines.

As Governor Polis noted when signing Colorado's SB205, there's a delicate balance to strike: protecting consumers while avoiding stifling business innovation. The Colorado Task Force's recommendations represent a thoughtful attempt to refine the law based on practical implementation concerns. Other states would be wise to observe these developments before rushing to pass their own versions.

The coming months will be critical as these bills work their way through state legislatures and as Colorado considers amending its pioneering law. Their outcomes will significantly shape the future of AI regulation in America, potentially creating a de facto national standard even in the absence of federal legislation. The challenge for policymakers is ensuring that these regulations effectively protect consumers while allowing American companies to remain at the forefront of AI innovation.


The Battle to Regulate AI Discrimination was originally published by Bill Track50 and shared with permission.

Stephen Rogers is a Data Wrangler at BillTrack50.

