The Battle To Regulate AI Discrimination


As states race to regulate AI, they face significant challenges in crafting effective legislation that both protects consumers and allows for continued innovation in this rapidly evolving field.

What is Algorithmic Discrimination?

Often referred to as 'AI bias', algorithmic discrimination is prejudice embedded in the data or design of AI systems that can ultimately result in discriminatory outcomes - usually because AI systems reflect very human biases. These biases can creep in for a number of reasons: the data used to train a model may over- or under-represent certain groups, or a developer may unfairly weight factors in algorithmic decision-making based on their own conscious or unconscious biases.
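Regulators and auditors often frame this as a disparate-impact question: do favorable outcomes (hires, approvals) occur at markedly different rates across groups? Here is a minimal sketch in Python, using entirely hypothetical screening data and the EEOC's informal "four-fifths rule" as the red-flag threshold:

```python
# Disparate-impact check: compare the rate of favorable outcomes
# (e.g. "hired", "approved") across demographic groups.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, favorable) pairs; returns the
    favorable-outcome rate for each group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            favorable[group] += 1
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, protected, reference):
    """Ratio of the protected group's selection rate to the
    reference group's. Values below 0.8 are a common red flag
    (the EEOC's informal 'four-fifths rule')."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# Hypothetical screening outcomes: group A is favored 60% of the
# time, group B only 30% of the time.
data = [("A", True)] * 60 + [("A", False)] * 40 \
     + [("B", True)] * 30 + [("B", False)] * 70

ratio = disparate_impact_ratio(data, protected="B", reference="A")
# 0.30 / 0.60 = 0.5, well below the 0.8 threshold
```

A check like this only surfaces a disparity; it cannot say why the disparity exists, which is precisely the gap the impact-assessment requirements discussed below try to fill.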


Some high-profile examples include COMPAS, a system designed to predict whether U.S. criminal defendants are likely to reoffend, which consistently classed black defendants as significantly more likely to reoffend than white defendants, even when all other factors were equal. Then there's the medical system that predicted which patients needed extra care and routinely underestimated black patients' needs. AI is also increasingly used to filter job candidates; in one instance, an English tutoring company found itself on the wrong end of a $365,000 settlement with the U.S. Equal Employment Opportunity Commission after its system excluded female applicants over 55 and male applicants over 60, regardless of qualifications and experience.

Critics of such legislation argue that these cases are overblown: that errors are rare, easily detected, and easily fixed, and that existing law - as evidenced by the court cases already brought - is enough to regulate the industry. But concerns are such that many states are now seeking to legislate to help manage the issues.

Colorado SB205: The Template for AI Regulation

Colorado's Artificial Intelligence Act, enacted in 2024 and taking effect on February 1, 2026, establishes the first comprehensive framework for regulating high-risk AI systems at the state level. The law adopts a risk-based approach, focusing on systems that make or substantially influence "consequential decisions" affecting consumers in areas such as employment, education, financial services, housing, and healthcare.

At its core, SB205 aims to prevent "algorithmic discrimination," which the law defines as "any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals" based on protected characteristics including age, race, ethnicity, religion, sex, and disability status.

The Colorado law requires developers of high-risk AI systems to use "reasonable care" to protect consumers from discrimination risks, document system limitations, provide transparency to deployers, and disclose known discrimination risks. Deployers, meanwhile, must implement risk management policies, conduct impact assessments, and provide consumers with notice when AI systems are used to make significant decisions about them.

The Implementation Challenge and Proposed Updates

Despite the law's well-intentioned framework, implementing SB205 has proven challenging. Governor Polis expressed concerns about its "complex compliance regime" when signing the bill and urged the legislature to simplify it before the 2026 effective date. To address these concerns, Colorado established an AI Impact Task Force to review and refine the legislation.

In February 2025, the Task Force issued a report outlining several key areas where the Act could be "clarified, refined, and otherwise improved." The Task Force, after multiple meetings with stakeholders, recommended several substantive changes to make implementation more feasible.

Central to these recommendations is revising critical definitions that have proven problematic, including "consequential decisions," "algorithmic discrimination," "substantial factor," and "intentional and substantial modification." The Task Force also suggested revamping the list of exemptions to what qualifies as a "covered decision system" and updating the scope of information that developers must provide to deployers.

The report also addresses practical compliance concerns, recommending changes to the triggering events and timing for impact assessments as well as updates to the requirements for deployer risk management programs. The Task Force is even considering whether to replace the current duty of care standard for developers and deployers with something potentially more or less stringent.

Small businesses may find relief in the Task Force's consideration of whether to expand the small business exemption, which currently applies only to businesses with fewer than 50 employees. The report also suggests providing businesses with a cure period for certain types of non-compliance before Attorney General enforcement actions begin.

These recommendations reflect the complexity of implementing such comprehensive AI legislation and the need for ongoing refinement as understanding of AI systems and their impacts evolves.

The 2025 Legislative Landscape

Despite these implementation hurdles, numerous states have introduced their own versions of algorithmic discrimination legislation in 2025. These bills share significant structural similarities with Colorado's SB205, often drawing direct inspiration from its framework. The map below shows bills identified in current 2025 sessions that are directly related to AI bias, tracked as they progress through the state legislatures.


State-Specific Approaches and Variations

Most bills follow a framework similar to Colorado's SB205, focusing on "high-risk" AI systems that make or substantially influence "consequential decisions." The term "algorithmic discrimination" appears in 24 bills, "impact assessment" requirements in 21, and "risk management" provisions in 18, suggesting a developing consensus around core regulatory approaches.

Despite structural similarities, several states have introduced unique approaches. Illinois's SB2203 establishes the "Preventing Algorithmic Discrimination Act" and provides for a private right of action. New York's S01169 requires independent third-party audits every 18 months. Vermont's H0341 introduces the concept of "inherently dangerous artificial intelligence systems" and creates a new Division of Artificial Intelligence.

The bills also differ in their thresholds for compliance. While Colorado exempts businesses with fewer than 50 employees, Hawaii (SB59) and Illinois (SB2203) set the threshold at just 25 employees. Texas (HB1709) goes further by explicitly prohibiting certain AI uses, such as social scoring and manipulating human behavior.

Enforcement mechanisms vary significantly as well. While most grant exclusive authority to state attorneys general, the penalties range widely. Texas's HB1709 imposes financial penalties up to $200,000 per violation. Only a few states, including Illinois, New York, and Vermont, provide for private lawsuits.

Beyond consumer protection, some states are pursuing legislation to promote AI innovation. Washington's HB1833 and HB1942 establish AI grant programs focused on economic development, while Oklahoma's HB1916 includes provisions for an AI Workforce Development Program.

A growing subset of bills focuses specifically on government AI use. Virginia's HB2046 and SB1214 establish regulations for AI systems used by public bodies, while Texas's SB1964 requires government agencies to inventory their AI systems and conduct regular risk assessments.

Health insurance features heavily, as you might expect. Bills such as Arkansas's HB1816 seek to restrict or heavily regulate AI usage in making decisions regarding insurance coverage.

And then there's Missouri. There's always Missouri. HB1462 establishes the "AI Non-Sentience and Responsibility Act" which, among other things, prohibits AI from getting married. I'm sure Missourians feel safer already.

This diverse legislative landscape reveals states drawing inspiration from Colorado while introducing innovations to address their specific priorities, collectively shaping not only the regulatory environment for AI but also the national conversation about balancing innovation with protection against algorithmic discrimination.

The Brussels Effect?

According to Dean W. Ball's analysis in "The EU AI Act is Coming to America", the similarities in state approaches suggest a coordinated effort to import regulatory frameworks from abroad, potentially adding significant compliance costs without clear evidence of widespread harm. Ball compares Colorado's act and other proposed legislation to the EU AI Act and makes a convincing argument that legislators are copying EU regulations without considering the full implications for the U.S. context - a manifestation of the so-called 'Brussels Effect'. Ball notes that after nearly a year of meetings, "Colorado does not know how to implement this law. Rather than exercising legislative restraint, however, other states are piling in" with similar legislation.

A critical consideration in evaluating these bills is their potential economic impact. Estimates from the Center for European Policy Studies looking at the EU AI Act suggest similar regulatory frameworks in the U.S. could add between 5% and 17% to corporate spending on AI development or deployment.

The key question becomes whether algorithmic discrimination is prevalent enough to justify these costs. Ball thinks not, noting limited evidence of widespread algorithmic harm, and argues that "in pure economic terms, these algorithmic discrimination [laws] would be among the most significant pieces of technology policy passed in the US during my lifetime—even if compliance costs come in at the low end of these estimates."

A Path Forward: Balancing Protection and Innovation

Most of the legislation currently being considered by legislatures around the country will fail, as most proposed legislation does. But where bills do pass, states will face an uphill battle in implementing them effectively, and under the current administration they are unlikely to receive any support from the White House. During his first week in office, President Trump signed an executive order revoking Biden-era programs promoting the safe use of artificial intelligence. The order also calls for an AI Action Plan to “enhance America’s position as an AI powerhouse and prevent unnecessarily burdensome requirements.” States will be in no doubt that if they want any additional guardrails, they will have to create them for themselves.

The challenge for policymakers is crafting legislation that effectively prevents algorithmic discrimination without imposing prohibitive compliance costs or stifling innovation. Based on the current legislative landscape and implementation challenges, several approaches could lead to more balanced regulation:

First, lawmakers should follow Colorado's Task Force lead in refining key definitions with greater precision. The ambiguity around "substantial factor" and "consequential decision" creates significant uncertainty for businesses. Clear, narrow definitions would help companies understand when and how they must comply with these laws.

Second, states should consider expanding exemptions for small businesses and low-risk applications. The Colorado Task Force has recognized this need, suggesting a reconsideration of the current 50-employee threshold for exemptions. A more nuanced framework could better balance protection and innovation.

Third, rather than mandating specific technical approaches, legislation should establish outcome-based standards. This would provide flexibility for companies to develop innovative compliance strategies while still achieving the desired result of preventing algorithmic discrimination.

Fourth, states should adopt the Task Force's recommendation to provide cure periods before enforcement actions. This would give companies time to address compliance issues without facing immediate penalties, fostering a collaborative rather than punitive regulatory environment.

Finally, states should coordinate their efforts to avoid creating a patchwork of incompatible regulations. The current proliferation of similar but slightly different state laws threatens to create compliance nightmares for businesses operating across state lines.

As Governor Polis noted when signing Colorado's SB205, there's a delicate balance to strike: protecting consumers while avoiding stifling business innovation. The Colorado Task Force's recommendations represent a thoughtful attempt to refine the law based on practical implementation concerns. Other states would be wise to observe these developments before rushing to pass their own versions.

The coming months will be critical as these bills work their way through state legislatures and as Colorado considers amending its pioneering law. Their outcomes will significantly shape the future of AI regulation in America, potentially creating a de facto national standard even in the absence of federal legislation. The challenge for policymakers is ensuring that these regulations effectively protect consumers while allowing American companies to remain at the forefront of AI innovation.


The Battle to Regulate AI Discrimination was originally published by Bill Track50 and shared with permission.

Stephen Rogers is a Data Wrangler at BillTrack50.
