Liberty and the General Welfare in the Age of AI

Opinion


The age of AI warrants asking if the means still further the ends—specifically, individual liberty and collective prosperity.

Getty Images, Douglas Rissing

If the means justified the ends, we'd still be operating under the Articles of Confederation. The Founders understood that the means, the governmental structure itself, must always serve the ends of liberty and prosperity. When the structure no longer served those ends, they experimented with yet another design for their government, and they did not expect it to be the last.

The age of AI warrants asking whether the means still further the ends: specifically, individual liberty and collective prosperity. Both of those goals were top of mind for early Americans. They demanded the Bill of Rights to protect the former, and they identified the latter, the general welfare, as the animating purpose of the government. Both goals are now being challenged by constitutional doctrines that do not align with AI development, or that even undermine it. A full review of those doctrines could fill a book (and perhaps one day it will). For now, however, I'm going to raise just two.


The first is the extraterritoriality principle. You've likely never heard of it, but it's a core part of our federal system: one state can't govern another; its legal authority ends at its borders. States across the ideological spectrum are weighing laws that would significantly alter the behaviors and capabilities of frontier models. While well-intentioned, many of these laws threaten to project one state's legislation (and values) into another. Muddled Supreme Court case law on this topic means that we're unsure exactly how extraterritoriality concerns map onto the rush to regulate AI, and that uncertainty is a problem.

Unclear laws hinder innovation, which is a driver of the general welfare. As things stand, the absence of a bright line as to where state authority to regulate AI begins and ends has seemingly invited state legislatures around the country to compete over which can devise the most comprehensive bill. If and when these bills find their way into law, you can bet your bottom dollar that litigation will follow.

Courts are unlikely to identify that line. In the short run, as conflicting judicial decisions around the fair use doctrine and AI training data already indicate, they will likely develop distinct and perhaps even contradictory tests. The long run isn't even worth predicting. The regulatory uncertainty produced by even a few laws with extraterritorial effects may keep a would-be innovator from going all-in on a new idea, or give pause to an investor thinking about doubling down on a startup. Those small decisions add up. The aggregate is lost innovation and, by extension, lost prosperity.

What’s more, one state effectively imposing its views on others runs afoul of individual liberty concerns. Extraterritoriality is one part of the Constitution’s call for horizontal federalism, which demands equality among the states and prohibits them from discriminating against non-residents, in most cases. When this key structural element is eroded, it diminishes one of the main ways the Founders sought to protect Americans from once again living under the thumb of a foreign power.

The second is the right to privacy. You won't find such a right in the text of the Constitution; it has instead been discovered in the "penumbra" of other provisions. This general, vague right has given rise to a broader set of privacy laws and norms that generally equate privacy with restraints on data sharing. At a high level, this approach results in siloed datasets that may contain data in different forms and at various levels of detail. In many contexts, this furthers individual liberty by reducing the odds of bad actors gaining access to sensitive information. Now, however, the aggregation of vast troves of high-quality data carries the potential to enable incredibly sophisticated AI tools. Without such data, some of the most promising uses of AI, such as in medicine and education, may never come about. Concern for the general welfare, then, puts significant strain on an approach to privacy that decreases access to data.

Reexamining and clarifying these doctrines is overdue. It’s also just a fraction of the work that needs to be done to ensure that individual liberty and the general welfare are pursued and realized in this turbulent period.

Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and the author of the Appleseed AI Substack.


Read More

Government Cyber Security Breach

An urgent look at the risks of unregulated artificial intelligence—from job loss and environmental strain to national security threats—and the growing political battle to regulate AI in the United States.

Getty Images, Douglas Rissing

AI Has Put Humanity on the Ballot

AI may not be the only existential threat out there, but it is coming for us the fastest. When I started law school in 2022, AI could barely handle basic math, but by graduation, it could pass the bar exam. Instead of taking the bar myself, I rolled immediately into a Master of Laws in Global Business Law at Columbia, where I took classes like Regulation of the Digital Economy and Applied AI in Legal Practice. By the end of the program, managing partners were comparing using AI to working with a team of associates; the CEO of Anthropic is now warning that it will be more capable than everyone in less than two years.

AI is dangerous in ways we are just beginning to see. Data centers that power AI require vast amounts of water to keep the servers cool, but two-thirds are in places already facing high water stress, with researchers estimating that water needs could grow from 60 billion liters in 2022 to as high as 275 billion liters by 2028. By then, data centers’ share of U.S. electricity consumption could nearly triple.

Posters are displayed next to Sen. Ted Cruz (R-TX) as he speaks at a news conference to unveil the Take It Down Act to protect victims against non-consensual intimate image abuse, on Capitol Hill on June 18, 2024 in Washington, DC.

A lawsuit against xAI over AI-generated deepfakes targeting teenage girls exposes a growing crisis in schools. As laws struggle to keep up, this story explores AI accountability, teen safety, and what educators and parents must do now.

Getty Images, Andrew Harnik

Deepfakes: The New Face of Cyberbullying and Why Parents, Schools, and Lawmakers Must Act

As a former teacher who worked in a high school when Snapchat was born, I witnessed the birth of sexting and its impact on teens. I recall asking a parent whether he was checking his daughter’s phone for inappropriate messages. His response was, “sometimes you just don’t want to know.” But the federal lawsuit filed last week against Elon Musk's xAI has put a national spotlight on AI-generated deepfakes and the teenage girls they target. Parents and teachers can’t ignore the crisis inside our schools.

AI Companies Built the Tool. The Grok Lawsuit Says They Own the Damage.

Whether or not the French prosecutors' theory is true (that Elon Musk deliberately allowed the sexualized image controversy to grow so that it would drive up activity on the platform and boost the company's valuation), when a company decides to build a tool, knows that it can be weaponized, but chooses to release it anyway, it is making a risk-based decision in the belief that it can act without consequence. The Grok lawsuit could make these types of business decisions much more costly.

Sketch collage image of businessman it specialist coding programming app protection security website web isolated on drawing background.

Amazon’s court loss over Just Walk Out highlights a deeper issue: employers are increasingly collecting workers’ biometric data without meaningful consent. Explore the growing conflict between workplace surveillance, privacy rights, and outdated U.S. laws.

Getty Images, Deagreez

The Quiet Rise of Employee Surveillance

Amazon’s loss in court over its attempt to shield the source code behind its Just Walk Out technology is a small win for shoppers, but the bigger story is how employers are quietly collecting biometric data from their own workers.

From factories to Fortune 500 companies, employers are demanding fingerprints, palmprints, retinal scans, facial scans, or even voice prints. These biometric technologies are eroding the boundary between workplace oversight and employee autonomy, often without consent or meaningful regulation.

Close up of a woman wearing black, modern spectacles Smart glasses and reality concept with futuristic screen

Apple’s upcoming AI-powered wearables highlight growing privacy risks as the right to record police faces increasing threats. The death of Alex Pretti raises urgent questions about surveillance, civil liberties, and accountability in the digital age.

Getty Images, aislan13

AI Wearables and the Rising Risk of Recording Police

Last month, Apple announced the development of three wearable smart devices, all equipped with built-in cameras. The company has its sights set on 2027 for the release of its new smart glasses, AI pendant, and AirPods with a built-in camera, all of which will be AI-functional for users. As the market for wearables offering smart-recording capabilities expands, so does the risk that comes with how users choose to use the technology.

In Minneapolis in January, Alex Pretti was killed after an encounter with federal agents while filming them with his phone. He was not a suspect in a crime. He was not interfering, but was doing what millions of Americans now instinctively do when they see state power in motion: witnessing.
