Introduction to AI Policy
The development of artificial intelligence (AI) can be traced back to theoretical concepts that emerged in the mid-20th century. In 1950, Alan Turing published his seminal paper on “thinking machines,” “Computing Machinery and Intelligence,” in which he proposed the Turing Test as a way to evaluate whether a machine could exhibit human-like intelligence. In 1956, John McCarthy, then a mathematics professor at Dartmouth College, coined the term “artificial intelligence,” defining it as “the science and engineering of making intelligent machines.” Along with subsequent breakthroughs in machine learning, natural language processing, and computer vision, AI has become a transformative force across industries such as healthcare, defense, transportation, and finance.
Policy surrounding AI has evolved gradually, with policymakers at both the state and federal levels working to regulate its development. By 2025, the executive branch had, to some extent, established a regulatory framework for AI, encompassing key agendas and agencies that coordinate standards, innovation, risk management, and ethical considerations. This shift reflects decades of effort as the United States has moved from a period of limited regulatory oversight to a more structured approach to AI regulation.
During the decades leading up to 2010, early AI policy focused primarily on government funding for basic scientific research, often led by agencies and research labs such as the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation (NSF), and NASA. In particular, DARPA’s initiatives included funding support for AI “centers of excellence.” For example, it launched the Strategic Computing Program in the 1980s, which aimed to advance machine reasoning, autonomous systems, and intelligent user interfaces. Outside DARPA, landmark moments like IBM’s Deep Blue defeating world chess champion Garry Kasparov in 1997 highlighted AI’s potential for real-world applications, and early advances in robotics and expert systems laid the groundwork for the AI boom of the 21st century. However, policy discussions remained largely confined to federally driven initiatives, with little state-level engagement with AI during this period.
The late 2010s brought growing recognition of AI’s transformative potential. These developments sparked debates over regulation, bias, and the prospect of large-scale application across industries and government services. Moving into the 2020s, major bipartisan and executive branch strategies emerged, accompanied by increasing calls for ethical AI standards and accountability through federal frameworks addressing bias, fairness, and transparency. The following timeline highlights key federal and state milestones in AI policy from the late 2010s to the present.

Recent Policy Initiatives
In 2025, recent developments in artificial intelligence, particularly in generative AI and data privacy, have pushed policymakers toward more targeted AI regulations, with a focus on privacy, transparency, and algorithmic accountability. This wave of legislative activity reflects a state-level shift from exploratory policy discussions to concrete legal frameworks aimed at mitigating risks while supporting technological innovation. Several states have introduced landmark legislation:
- California’s SB1047 (2024): Would have required developers of large-scale frontier AI models to implement safety and testing protocols; the bill passed the legislature but was vetoed by Governor Newsom in September 2024;
- California’s AB2013 (2025): Requires developers of generative AI systems to publicly disclose documentation about the data used to train their models, promoting algorithmic transparency and accountability;
- Illinois HB3773 (2025): Prohibits the use of AI to make hiring decisions if it results in discrimination, ensuring fair employment practices using AI;
- Colorado AI Act (2025): Aims to minimize algorithmic discrimination and enhance transparency within high-risk AI systems, especially in sectors like healthcare, finance, and criminal justice;
- Utah Senate Bill 226 (2025): Establishes an AI regulatory framework focused on ensuring transparency, fairness, and accountability in the use of AI systems in critical sectors such as healthcare and education.
These state-level initiatives reflect a growing recognition of AI’s potential effects on economic growth, public services, and societal challenges. As each state develops its own approach, the trend also underscores the increasing complexity of the AI policy landscape. This patchwork of laws has fueled a national conversation over whether AI governance should remain a state-by-state matter or be unified under a single federal framework. The tension between state-level regulation and calls for federal preemption has become a defining feature of the current policy environment, and it now sits at the center of President Trump’s 2025 AI plan.
What is Trump’s AI Plan?
Trump’s first attempt to regulate AI was a proposed moratorium on state AI regulation. It would have blocked states from establishing their own AI rules and instead cemented regulatory control at the federal level, both by limiting states’ ability to self-regulate in the future and possibly by preventing existing regulations from being enforced. The moratorium was included as a provision in his “Big Beautiful Bill,” but it was stripped out in the Senate.
Trump’s current AI agenda is intended to serve as a federal endorsement of open-weight AI models and of their adoption in professional fields beyond routine tasks. Open-weight AI models are models whose trained parameters (weights) are publicly available for download. They tend to be more transparent and allow for faster innovation and development, since researchers and developers can collaboratively improve them. Common examples of open-weight AI models include GPT-2 and BERT.
The main goal of the AI plan is to centralize authority over a previously fragmented legislative area. Before the plan was announced, states regulated AI usage, development, and integration independently. The federal government now seeks to establish leadership in this area to make AI integration smoother, by placing authority under a single government agency, curbing states’ individual legislative ability, and ensuring that future legislation can address the nation as a whole.
Trump’s agenda will accomplish this by:
- Increasing innovation
- Strengthening AI infrastructure
- Re-establishing American leadership in AI
- Removing regulatory barriers
- Decreasing legal fragmentation
- Winning the “AI race”
The Proposed Federal AI Commission
Facilitating this agenda will be a federal AI Commission authorized to establish a centralized legal standard for AI regulation. This will allow a general set of rules and regulations to be reviewed and established, providing national cohesion on an emerging policy issue. The commission will also be tasked with distributing additional funding to support states that align with federal AI goals, largely earmarked for states’ education, infrastructure, or workforce training budgets. There has been no mention of whether states that do not comply will still receive supplementary funding from the commission or be left out of the plan altogether.
Critics of the commission believe it weakens states’ ability to govern themselves. Traditionally, states have been allowed to establish their own laws and regulations, even when they diverge from federal goals, an arrangement seen as a cornerstone of Americans’ ability to self-govern. Others are less critical of the commission, seeing it as a crucial step toward reinforcing United States innovation and re-establishing American competitiveness in the global AI race.
The AI race is a global competition to develop and deploy the best and newest AI technologies. It is a new-age geopolitical competition primarily centered on the United States-China rivalry, with both countries investing heavily in AI development. Whichever country wins the race is expected to gain significant financial, healthcare, and manufacturing advantages as its technology advances at an exponential scale. The race also involves the rapid construction and optimization of the data centers required to run generative AI programs, making it a matter of logistical and organizational capacity as well as technological progress. Whichever party makes the most advances in AI will likely wield large geopolitical influence in the near future, able to shape global politics for years to come.
Additional Considerations
The plan also includes several additional provisions:
- It aims to retrain the American workforce for AI-related jobs, including roles for those displaced by automation.
- It could streamline permitting for the infrastructure required to run AI programs, possibly accelerating AI advancement and increasing access for underserved populations.
- It hopes to increase demand for American AI products and software, which could boost American competitiveness and re-establish technological leadership amid rising global competition.
Arguments in Favor of the AI Action Plan
Chiefly, the AI Action Plan seeks to establish the United States as a global AI powerhouse. As such, the provisions of this initiative may create opportunities to strengthen the development and dominance of AI systems in the United States. The potential benefits emphasized in the plan are twofold:
Supporting AI Innovation Through Increased Investment
By taking steps such as speeding up the permitting process for new power plants, data centers, energy infrastructure, and semiconductor manufacturing facilities, the AI Action Plan focuses on accelerating AI innovation. One potential investment in particular may attract bipartisan support: the expansion of the electricity grid. The call to expand data centers raises the demand for electricity, making electric utilities more critical than ever. American investor-owned electric utilities have announced more than $1.1 trillion in investments, demonstrating the sector’s commitment to meeting rising demand with reliable energy sources.
Additionally, the plan calls for the development of and increased access to high-quality datasets, as well as increased federal investment into open models: open-source and open-weight AI that are publicly available and free to utilize. These models are highly valuable for academic research and government applications, and may also be adopted by industry players and businesses to further drive innovation.
Beyond infrastructure, the plan emphasizes workforce development and training. To prepare American workers for the AI-driven technological transition, the Department of Labor and the Department of Commerce are instructed to identify high-priority occupations critical to the development of AI infrastructure, such as HVAC technicians and electricians. These federal agencies are also responsible for prioritizing AI literacy skills within the workforce, and will support apprenticeships and career and technical education programs that build such AI skills. Furthermore, the Department of Labor will establish the AI Workforce Research Hub, a federal effort to assess the impact of AI on the labor market and to track and analyze metrics such as job creation and displacement, AI adoption, and wage effects.
Strengthening Global Economic Competitiveness and National Security
The AI Action Plan also aims to coordinate export restrictions with American allies, ultimately incentivizing global dependence on United States-developed AI infrastructure. The plan specifically aims to establish a full-stack American AI export program: the entire collection of hardware and software technologies and frameworks required to utilize AI systems, all manufactured in the United States. This AI technology stack may also position American AI corporations to compete more effectively against China’s offerings and products.
The plan additionally calls for the evaluation of national security risks posed by AI. Emphasis is placed on foreign frontier AI projects: AI models that are not only more capable than current systems but may also hold dangerous capabilities. The Department of Defense will collect and distribute intelligence on these models to evaluate their national security implications. Likewise, the Department of Commerce will collaborate with frontier AI developers to understand AI-related risks pertaining to chemical, biological, nuclear, and explosive weapons. The Department of Homeland Security will also lead the creation of the AI Information Sharing and Analysis Center (AI-ISAC), which will serve as a hub where federal agencies and infrastructure operators can collaborate on threat intelligence, vulnerabilities, and mitigation approaches.
Arguments in Opposition to the AI Action Plan
President Trump’s recent AI Action Plan ostensibly aims to bolster the United States’ role as a leader in AI innovation by rolling back regulations found to “unduly burden AI innovation.” Critics have raised concerns that these moves to reduce friction have the larger effect of stripping consumer and environmental protections, cascading into significant harms that ultimately outweigh the benefits of “innovation.” Big Tech firms and dominant figures in the tech world have voiced strong support for the Trump administration’s AI direction, urging lawmakers to loosen the reins. In a public comment to the National Science Foundation and the Office of Science and Technology Policy, Meta posits that “removing energy, infrastructure, and permitting barriers [will] enable timely and efficient advancement of energy infrastructure to support domestic data center investment and growth.” However, non-partisan economic analysts warn that the United States’ current AI ecosystem already “exhibits a clear tendency towards monopoly,” and further deregulation may only accelerate this trend.
A joint statement released in July 2024 by the FTC, the Department of Justice (DOJ), the UK’s Competition and Markets Authority (CMA), and the EU’s Competition Commissioner highlighted the tech industry’s structural concentration. With the cost of AI development—especially for data acquisition, computing infrastructure, and specialized talent—skyrocketing, entry into the AI space is becoming feasible only for a few multi-billion-dollar corporations.
In previous waves of technological deregulation, market concentration and weakened labor protections followed. The Trump plan risks repeating this cycle. Without meaningful regulatory guardrails, Big Tech’s dominance could solidify through vertical integration, where firms own not only the AI models but also the infrastructure and platforms that distribute them. Already, companies are denying rivals access to critical inputs like search indexes and cloud computing platforms, while using proprietary ecosystems to limit consumer choice.
A market dominated by a handful of companies, reliant on geographically concentrated supply chains—like Taiwan’s semiconductor industry—introduces systemic vulnerabilities. Experts warn that monopolistic concentration not only threatens competition but increases the risk of single points of failure in AI development, making the United States less resilient in the face of geopolitical tensions or technological disruptions and undermining national security.
The Trump administration’s AI action plan includes proposals to fast-track environmental permitting for data centers by reducing protections under landmark laws like the Clean Water Act. This not only jeopardizes ecological health but disproportionately impacts low-income communities, both domestically and abroad. AI development relies heavily on water, electricity, and rare minerals, further entrenching environmental injustices and global inequality.
The United States currently lacks a federal privacy law, leaving individuals vulnerable to data extraction practices employed by generative AI platforms. These systems can mimic user behavior, potentially exploiting psychological vulnerabilities for targeted advertising and content manipulation. As one Nature article notes, the personalized feedback loops enabled by AI risk exacerbating social media harms and addictive behaviors. In promoting rapid deregulation under the banner of innovation, Trump’s AI Action Plan may unintentionally enshrine monopolistic dominance, hollow out regulatory protections, and expose Americans to environmental, economic, and digital harms.
Reactions to the AI Action Plan
The AI Action Plan has been met with a range of reactions from different stakeholders.
State Government
State leaders have so far been largely silent on the AI Action Plan, with the notable exception of the National Conference of State Legislatures (NCSL). The NCSL responded to the plan by emphasizing the importance of intergovernmental collaboration in developing a robust action plan that enables the U.S. to fully benefit from AI. It also encouraged the administration to seek the guidance of a bipartisan group of legislators as the plan continues to be developed.
Federal Government
Within the federal government, support and opposition have largely fallen along party lines. Republican members of the House Committee on Oversight and Government Reform have expressed their support for the action plan. Chairman James Comer (R-Ky.), Chairman Eric Burlison (R-Mo.), and Chairwoman Nancy Mace (R-S.C.) have backed the AI Action Plan, saying it represents a new era of AI innovation dominated by the United States. On the other hand, there have been some notable voices of dissent. Representative Marjorie Taylor Greene (R-Ga.) has raised concerns that the plan calls for expansion without guardrails, giving way to worries about encroachment on states’ rights.
Technology Industry
The tech industry has been widely supportive of the AI plan. Major tech companies have voiced their support for the AI Action Plan, though some have outlined further steps they see as necessary over time. Linda Moore, CEO of TechNet, has praised the action plan for taking steps to create a strong workforce, construct AI infrastructure, remove barriers to innovation, and more. Anthropic supports the plan for its focus on AI infrastructure and its strengthening of safety testing, among other reasons, while also arguing that next steps are needed, such as stricter export controls and standards for transparency in AI development. Michael Dell, the CEO of Dell Technologies, has said that the AI plan accelerates innovation and helps strengthen security in the US. Arvind Krishna, CEO and Chairman of IBM, has said that the company supports the plan because it strengthens technological leadership in the U.S., prioritizes innovation, and is an important step toward harnessing AI for economic growth. Smaller tech firms and startups have yet to weigh in on the action plan.
General Public
The general public has had mixed feelings about the AI Action Plan, with comments to the Office of Science and Technology Policy (OSTP) reflecting a range of perspectives. Some commenters support the plan, noting the need to expand infrastructure and the capacity of energy grids to meet the demands of AI development. Similarly, others have described the need for federal agencies to foster public-private partnerships to establish national AI standards. On the other hand, some have expressed concern over the risk of copyright infringement in the training of AI models.
Future Prospects and Plans Moving Forward
On July 23, 2025, the Trump administration released its AI Action Plan, which outlined federal strategies intended to position the United States as a global leader in artificial intelligence. The Action Plan included over 90 recommendations for changes to federal policy, split into three strategic pillars: Accelerate American AI Innovation, Build American AI Infrastructure, and Lead in International AI Diplomacy and Security. The AI Action Plan was released following President Trump’s January 23 signing of Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence.” Together, these actions signal a step toward deregulation, infrastructure construction, and consolidation of American leadership in AI.
The approach is a significant change from the earlier proposal for a ten-year AI moratorium, which would have prevented states from implementing their own AI regulations. On July 1, 2025, that provision was removed from H.R. 1, meaning that state-level regulation of AI is still allowed. Instead, the Action Plan places a stronger emphasis on using federal funding as leverage to persuade states to support its goals, which could result in an inconsistent regulatory environment.
As part of the Action Plan’s implementation, the administration also proposed changes that would ease environmental restrictions under the Clean Water Act and NEPA, accelerating permitting for AI-related infrastructure, including data centers and semiconductor plants. It also signaled a deregulatory approach in areas such as consumer protection and antitrust by instructing agencies like the FTC to reevaluate previous enforcement actions deemed burdensome to AI development.
The administration plans to translate the objectives of the AI Action Plan into specific agency-level funding and policy mechanisms in the coming months. The National Institute of Standards and Technology (NIST) is tasked with developing voluntary safety and cybersecurity standards, which are anticipated to influence procurement practices and shape regulatory expectations among other federal agencies. At the same time, the Department of Energy (DOE) announced that it has chosen four federal sites—the Oak Ridge Reservation, Idaho National Laboratory, the Paducah Gaseous Diffusion Plant, and the Savannah River Site—to be made available to private partners for the construction of AI data centers and related energy infrastructure through public-private partnerships.
On the international side, instead of holding new bilateral summits, the administration is expected to use the structures it has already set in place to coordinate AI export controls. The foundation of this initiative is the new AI Exports Program, established by an Executive Order signed on July 23, 2025. The order instructs the Departments of Commerce and State to collaborate with industry alliances to deploy “full-stack” AI export packages to allies, including hardware, software, models, and standards for global deployment. Furthermore, when allocating discretionary funding, federal agencies will increasingly evaluate states’ AI regulatory frameworks and may even withhold funds from states with misaligned regulations.
Governing AI in the United States: A History, a joint report by ACE, was republished with permission.