
The Battle To Regulate AI Discrimination


A group of people analyzing AI data.

Getty Images, cofotoisme

As states race to regulate AI, they face significant challenges in crafting effective legislation that both protects consumers and allows for continued innovation in this rapidly evolving field.

What is Algorithmic Discrimination?

Often referred to as 'AI bias', algorithmic discrimination is the underlying prejudice in the data used to create AI algorithms, which can ultimately result in discriminatory outcomes, usually because AI systems end up reflecting very human biases. These biases can creep in for a number of reasons: the data used to train a model may over- or under-represent certain groups, or a developer may unfairly weight factors in algorithmic decision-making based on their own conscious or unconscious biases.
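The first failure mode can be made concrete with a small, purely illustrative Python sketch (the groups, sample sizes, and approval rates below are all invented): a naive model that simply learns per-group approval rates from skewed historical data reproduces the historical imbalance in its scores.

```python
import random

random.seed(0)

# Invented numbers for illustration only: group A is over-represented
# in the historical data and was historically approved far more often
# than the under-represented group B.
def make_history():
    history = []
    for _ in range(900):                                 # group A: 900 records
        history.append(("A", random.random() < 0.7))     # ~70% approved
    for _ in range(100):                                 # group B: 100 records
        history.append(("B", random.random() < 0.3))     # ~30% approved
    return history

def learned_rates(history):
    """Per-group approval rate a naive rate-matching model would learn."""
    rates = {}
    for group in ("A", "B"):
        outcomes = [approved for g, approved in history if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

# The learned scores mirror the historical skew, so any decision rule
# built on them perpetuates the original imbalance.
print(learned_rates(make_history()))
```

Nothing in the model is malicious; the disparity comes entirely from what the data contains and how much of each group it contains.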


Some high-profile examples include COMPAS, a system designed to predict whether U.S. criminal defendants are likely to reoffend; it consistently classed black defendants as significantly more likely to reoffend than white defendants, even when all other factors were equal. Then there is the medical system that predicted which patients needed extra care and routinely underestimated black patients' needs. AI is also increasingly used to filter job candidates: in one instance, an English tutoring company found itself on the wrong end of a $356,000 settlement with the U.S. Equal Employment Opportunity Commission after its system excluded female applicants over 55 and male applicants over 60, regardless of qualifications and experience.
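Employment disparities like the tutoring case are something regulators already screen for. One long-standing heuristic is the EEOC's "four-fifths rule": if one group's selection rate falls below 80% of the highest group's rate, the process is flagged for potential adverse impact. A minimal sketch, using hypothetical applicant numbers:

```python
def selection_rate(selected, applicants):
    """Fraction of applicants who received the favorable outcome."""
    return selected / applicants

def four_fifths_check(rate_a, rate_b):
    """Impact ratio (lower rate / higher rate). Values below 0.8 are
    flagged for potential adverse impact under the EEOC's four-fifths
    rule of thumb."""
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio, ratio >= 0.8

# Hypothetical numbers: 50 of 100 applicants selected in one group
# versus 30 of 100 in another.
ratio, passes = four_fifths_check(selection_rate(50, 100),
                                  selection_rate(30, 100))
print(round(ratio, 2), passes)   # 0.6 False -> flagged for review
```

The rule is a screening threshold, not a legal finding; a flagged ratio triggers closer scrutiny rather than automatic liability.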


Critics of the new legislation argue that such cases are overblown: that errors are rare, easily detected, and quickly fixed, and that in any case existing law is sufficient to regulate the industry, as evidenced by the court cases already brought. But concerns run deep enough that many states are now seeking to legislate to manage the issues.

Colorado SB205: The Template for AI Regulation

Colorado's Artificial Intelligence Act, which was enacted in 2024 and takes effect on February 1, 2026, established the first comprehensive framework for regulating high-risk AI systems at the state level. The law adopts a risk-based approach, focusing on systems that make or substantially influence "consequential decisions" affecting consumers in areas such as employment, education, financial services, housing, and healthcare.

At its core, SB205 aims to prevent "algorithmic discrimination," which the law defines as "any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals" based on protected characteristics including age, race, ethnicity, religion, sex, and disability status.

The Colorado law requires developers of high-risk AI systems to use "reasonable care" to protect consumers from discrimination risks, document system limitations, provide transparency to deployers, and disclose known discrimination risks. Deployers, meanwhile, must implement risk management policies, conduct impact assessments, and provide consumers with notice when AI systems are used to make significant decisions about them.

The Implementation Challenge and Proposed Updates

Despite the law's well-intentioned framework, implementing SB205 has proven challenging. Governor Polis expressed concerns about its "complex compliance regime" when signing the bill and urged the legislature to simplify it before the 2026 effective date. To address these concerns, Colorado established an AI Impact Task Force to review and refine the legislation.

In February 2025, the Task Force issued a report outlining several key areas where the Act could be "clarified, refined, and otherwise improved." The Task Force, after multiple meetings with stakeholders, recommended several substantive changes to make implementation more feasible.

Central to these recommendations is revising critical definitions that have proven problematic, including "consequential decisions," "algorithmic discrimination," "substantial factor," and "intentional and substantial modification." The Task Force also suggested revamping the list of exemptions to what qualifies as a "covered decision system" and updating the scope of information that developers must provide to deployers.

The report also addresses practical compliance concerns, recommending changes to the triggering events and timing for impact assessments as well as updates to the requirements for deployer risk management programs. The Task Force is even considering whether to replace the current duty of care standard for developers and deployers with something potentially more or less stringent.

Small businesses may find relief in the Task Force's consideration of whether to expand the small business exemption, which currently applies only to businesses with fewer than 50 employees. The report also suggests providing businesses with a cure period for certain types of non-compliance before Attorney General enforcement actions begin.

These recommendations reflect the complexity of implementing such comprehensive AI legislation and the need for ongoing refinement as understanding of AI systems and their impacts evolves.

The 2025 Legislative Landscape

Despite these implementation hurdles, numerous states have introduced their own versions of algorithmic discrimination legislation in 2025. These bills share significant structural similarities with Colorado's SB205, often drawing direct inspiration from its framework. An accompanying interactive map tracks bills in current 2025 sessions that are directly related to AI bias, updating as they progress through the state legislatures.


State-Specific Approaches and Variations

Most bills follow a framework similar to Colorado's SB205, focusing on "high-risk" AI systems that make or substantially influence "consequential decisions." The term "algorithmic discrimination" appears in 24 bills, "impact assessment" requirements in 21, and "risk management" provisions in 18, suggesting a developing consensus around core regulatory approaches.

Despite structural similarities, several states have introduced unique approaches. Illinois's SB2203 establishes the "Preventing Algorithmic Discrimination Act" and provides for a private right of action. New York's S01169 requires independent third-party audits every 18 months. Vermont's H0341 introduces the concept of "inherently dangerous artificial intelligence systems" and creates a new Division of Artificial Intelligence.

The bills also differ in their thresholds for compliance. While Colorado exempts businesses with fewer than 50 employees, Hawaii (SB59) and Illinois (SB2203) set the threshold at just 25 employees. Texas (HB1709) goes further by explicitly prohibiting certain AI uses, such as social scoring and manipulating human behavior.

Enforcement mechanisms vary significantly as well. While most grant exclusive authority to state attorneys general, the penalties range widely. Texas's HB1709 imposes financial penalties up to $200,000 per violation. Only a few states, including Illinois, New York, and Vermont, provide for private lawsuits.

Beyond consumer protection, some states are pursuing legislation to promote AI innovation. Washington's HB1833 and HB1942 establish AI grant programs focused on economic development, while Oklahoma's HB1916 includes provisions for an AI Workforce Development Program.

A growing subset of bills focuses specifically on government AI use. Virginia's HB2046 and SB1214 establish regulations for AI systems used by public bodies, while Texas's SB1964 requires government agencies to inventory their AI systems and conduct regular risk assessments.

Health insurance features heavily, as you might expect. Bills such as Arkansas's HB1816 seek to restrict or heavily regulate AI usage in making decisions regarding insurance coverage.

And then there's Missouri. There's always Missouri. HB1462 establishes the "AI Non-Sentience and Responsibility Act" which, among other things, prohibits AI from getting married. I'm sure Missourians feel safer already.

This diverse legislative landscape reveals states drawing inspiration from Colorado while introducing innovations to address their specific priorities. Collectively, they are shaping not only the regulatory environment for AI but also the national conversation about balancing innovation with protection against algorithmic discrimination.

The Brussels Effect?

According to Dean W. Ball's analysis in "The EU AI Act is Coming to America," the similarities in state approaches suggest coordinated efforts to import regulatory frameworks from abroad, potentially adding significant compliance costs without clear evidence of widespread harm. Ball compares Colorado's act and other proposed legislation to the EU AI Act and makes a convincing argument that legislators are copying EU regulations without considering the full implications for the U.S. context, an instance of the so-called 'Brussels Effect'. Ball notes that after nearly a year of meetings, "Colorado does not know how to implement this law. Rather than exercising legislative restraint, however, other states are piling in" with similar legislation.

A critical consideration in evaluating these bills is their potential economic impact. Estimates from the Centre for European Policy Studies looking at the EU AI Act suggest similar regulatory frameworks in the U.S. could add between 5% and 17% to corporate spending on AI development or deployment.
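As a back-of-the-envelope illustration (the $10M budget is hypothetical), that percentage range translates into:

```python
# Hypothetical figures: applying the cited 5-17% range to a company
# spending $10M a year on AI development or deployment.
budget = 10_000_000
low, high = budget * 0.05, budget * 0.17
print(f"${low:,.0f} to ${high:,.0f} in added annual compliance cost")
# -> $500,000 to $1,700,000 in added annual compliance cost
```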

The key question becomes whether algorithmic discrimination is prevalent enough to justify these costs. Ball questions this, noting limited evidence of widespread algorithmic harm. He argues that "in pure economic terms, these algorithmic discrimination [laws] would be among the most significant pieces of technology policy passed in the US during my lifetime—even if compliance costs come in at the low end of these estimates."

A Path Forward: Balancing Protection and Innovation

Most of the legislation currently being considered by legislatures around the country will fail, as most proposed legislation does. But where bills do pass, states will face an uphill battle in implementing them effectively. Under the current administration, they are unlikely to receive any support from the White House: during his first week in office, President Trump signed an executive order revoking Biden-era programs promoting the safe use of artificial intelligence. The order also calls for an AI Action Plan to “enhance America’s position as an AI powerhouse and prevent unnecessarily burdensome requirements.” States will be in no doubt that if they want additional guardrails, they will have to create them for themselves.

The challenge for policymakers is crafting legislation that effectively prevents algorithmic discrimination without imposing prohibitive compliance costs or stifling innovation. Based on the current legislative landscape and implementation challenges, several approaches could lead to more balanced regulation:

First, lawmakers should follow the Colorado Task Force's lead in refining key definitions with greater precision. The ambiguity around "substantial factor" and "consequential decision" creates significant uncertainty for businesses. Clear, narrow definitions would help companies understand when and how they must comply with these laws.

Second, states should consider expanding exemptions for small businesses and low-risk applications. The Colorado Task Force has recognized this need, suggesting a reconsideration of the current 50-employee threshold for exemptions. A more nuanced framework could better balance protection and innovation.

Third, rather than mandating specific technical approaches, legislation should establish outcome-based standards. This would provide flexibility for companies to develop innovative compliance strategies while still achieving the desired result of preventing algorithmic discrimination.

Fourth, states should adopt the Task Force's recommendation to provide cure periods before enforcement actions. This would give companies time to address compliance issues without facing immediate penalties, fostering a collaborative rather than punitive regulatory environment.

Finally, states should coordinate their efforts to avoid creating a patchwork of incompatible regulations. The current proliferation of similar but slightly different state laws threatens to create compliance nightmares for businesses operating across state lines.

As Governor Polis noted when signing Colorado's SB205, there's a delicate balance to strike: protecting consumers while avoiding stifling business innovation. The Colorado Task Force's recommendations represent a thoughtful attempt to refine the law based on practical implementation concerns. Other states would be wise to observe these developments before rushing to pass their own versions.

The coming months will be critical as these bills work their way through state legislatures and as Colorado considers amending its pioneering law. Their outcomes will significantly shape the future of AI regulation in America, potentially creating a de facto national standard even in the absence of federal legislation. The challenge for policymakers is ensuring that these regulations effectively protect consumers while allowing American companies to remain at the forefront of AI innovation.


The Battle to Regulate AI Discrimination was originally published by Bill Track50 and shared with permission.

Stephen Rogers is a Data Wrangler at BillTrack50.
