When Rules Can Be Code, They Should Be!

Achieving safe, scalable efficiencies requires a new approach to rule making.

Opinion


Ninety years ago this month, the Federal Register Act was signed into law in a bid to shine a light on the rules driving President Franklin Roosevelt’s New Deal—using the best tools of the time to make government more transparent and accountable. But what began as a bold step toward clarity has since collapsed under its own weight: over 100,000 pages, a million rules, and a public lost in a regulatory haystack. Today, the Trump administration’s sweeping push to cut red tape—including using AI to hunt obsolete rules—raises a deeper challenge: how do we prevent bureaucracy from rebuilding itself?

What’s needed is a new approach: rewriting the rule book itself as machine-executable code that can be analyzed, implemented, or streamlined at scale. Businesses could simply download and execute the latest regulations on their systems, with no need for costly legal analysis and compliance work. Individuals could use apps or online tools to quickly figure out how rules affect them.


These aren’t theoretical ideas. The first prominent work in this area was undertaken by Prof. Robert Kowalski at Imperial College London, who codified the British Nationality Act as a set of rules. Since then, AI researchers have explored—and, in many cases, solved—the numerous challenges associated with turning regulations into code. That includes identifying areas where human judgment remains central, ensuring that encoded regulations clearly indicate where discretion applies, flagging potential exceptions, and certifying that decisions are fully traceable.
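The flavor of Kowalski-style encoding can be sketched in a few lines. The rule below is a simplified, illustrative paraphrase of a birthright-citizenship provision (the field names and conditions are assumptions for the sketch, not the actual text of the British Nationality Act); note how every decision carries a trace of the conditions that produced it, making the outcome auditable.

```python
from dataclasses import dataclass

@dataclass
class Person:
    born_in_uk: bool
    born_after_commencement: bool
    parent_is_citizen: bool
    parent_is_settled: bool

def british_citizen_by_birth(p: Person) -> tuple[bool, list[str]]:
    """Simplified illustration of a citizenship-by-birth rule.
    Returns the decision plus a trace of the conditions checked,
    so the reasoning behind every outcome is fully inspectable."""
    trace = []
    if not p.born_in_uk:
        trace.append("fails: not born in the UK")
        return False, trace
    trace.append("holds: born in the UK")
    if not p.born_after_commencement:
        trace.append("fails: born before commencement of the Act")
        return False, trace
    trace.append("holds: born after commencement")
    if p.parent_is_citizen or p.parent_is_settled:
        trace.append("holds: a parent is a citizen or settled at birth")
        return True, trace
    trace.append("fails: no qualifying parent")
    return False, trace
```

A case that turns on discretion (say, an application "at the Secretary of State's discretion") would be encoded the same way, but the function would return an explicit "requires human judgment" marker rather than a decision.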

In the European Union, the GovTech4All project is developing a “Personal Regulation Assistant,” powered by regulatory code, to assist citizens in identifying and accessing benefits, regardless of their level of digital literacy or policy knowledge. The project will serve as a model to replicate the rules-as-code approach across other areas of European regulations.

In the U.S., meanwhile, the approach has been championed by private-sector innovators. Intuit’s TurboTax is a leading example, showing how the tax code can be translated into a computational interface to help individuals. The Bay Area startup Symbium has encoded regulations to enable California homeowners to secure solar installation permits—a process that used to take weeks or months of paperwork, revisions, and waiting—in just seconds.

Such ventures show the power of using digital tools to streamline the implementation of regulations—but they require individual businesses to interpret and codify the rules in question. If the tax code, the building code, or other regulations were already available as machine-executable rules, this process would be orders of magnitude faster, could be scaled nationwide, and would deliver powerful efficiencies across the U.S. economy.

Swapping our existing mishmash of PDFs and static webpages for elegant, unified computer code would instantly unlock important new efficiencies—automatically flagging ambiguities, simplifying complex rules, and eliminating redundancies without losing substance. It would also enable powerful tools like compliance test suites and public-facing rule repositories, driving greater transparency, reducing red tape, and enhancing ease of use.
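A compliance test suite of the kind described above can be sketched in a few lines. The permit rule and its numeric thresholds below are invented for illustration (a real repository would publish the official values alongside the code); the point is that the test cases double as executable documentation of the rule's boundary conditions, something a static PDF cannot provide.

```python
# Hypothetical encoded permitting rule; thresholds are illustrative.
def solar_permit_allowed(roof_area_sqft: float, setback_ft: float,
                         panel_area_sqft: float) -> bool:
    """Approve over the counter if panels cover at most 80% of the
    roof and keep a 3-foot access setback (assumed numbers)."""
    return panel_area_sqft <= 0.8 * roof_area_sqft and setback_ft >= 3.0

# The suite pins down exactly where compliance flips to violation.
TEST_CASES = [
    ((1000, 3.0, 800), True),    # at both limits: still compliant
    ((1000, 3.0, 801), False),   # one square foot over the coverage cap
    ((1000, 2.9, 500), False),   # setback too small
]

for args, expected in TEST_CASES:
    assert solar_permit_allowed(*args) is expected
```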

What would it take to “encode” any rule book, regardless of whether it is at the federal, state, or city government level? The first step is to identify and codify the regulations in most need of an overhaul. Obvious examples might include engineering or design standards, which are currently slow to adapt to technological changes, but which are also prescriptive and could easily be rewritten as code. The processes for permitting and environmental impact assessments—already recognized by the White House as a target for new efficiencies—would be another leading candidate.
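Prescriptive standards are the easiest case precisely because they already read like code. As a minimal sketch: the stair-geometry check below uses limits that mirror common U.S. residential values (7.75-inch maximum riser, 10-inch minimum tread), but the numbers are assumptions here, not a citation of any specific code edition.

```python
# Illustrative sketch of a prescriptive building standard as code.
# Limits are assumed for the example, not quoted from a real code.
MAX_RISER_IN = 7.75
MIN_TREAD_IN = 10.0

def stair_flight_complies(riser_in: float, tread_in: float) -> list[str]:
    """Return the list of violations; an empty list means compliant."""
    violations = []
    if riser_in > MAX_RISER_IN:
        violations.append(
            f"riser {riser_in} in exceeds {MAX_RISER_IN} in maximum")
    if tread_in < MIN_TREAD_IN:
        violations.append(
            f"tread {tread_in} in below {MIN_TREAD_IN} in minimum")
    return violations
```

When the standard itself changes, updating a constant updates every downstream design checker at once, which is the scalability argument in miniature.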

We’ll also need new technologies to convert rules into code in reliable and scalable ways. Such efforts have been daunting until now because of the huge manual effort required to analyze and rewrite regulations. New AI tools, however, make it possible both to analyze vast amounts of text and to write and rigorously validate computer code with almost superhuman speed and accuracy. With regulatory sprawl wiping 0.8 percentage points from America’s annual GDP growth, using AI to accelerate the process of turning federal rules into code would deliver clear ROI and powerful efficiencies across the federal government and beyond.

As things stand, America’s federal agencies still use a 20th-century rulemaking process—and as individuals and businesses, we’re all paying the price for that. President Trump is right to push for reductions in government red tape. But that effort should be paired with a concerted push to bring federal regulations into the 21st century and develop a machine-readable rule book that’s ready for the challenges and opportunities of the AI era.


Vinay K. Chaudhri supports a National Science Foundation initiative on Knowledge Axiomatization. Previously, he led AI research at SRI International and taught knowledge graphs and logic programming at Stanford University.
