America's Data Crisis: Saving Trusted Facts Is Essential to Democracy

Why safeguarding federal data is now critical to restoring public trust in government.

Opinion

[Illustration: orange megaphones, one red and facing the opposite direction. Getty Images, Richard Drury]

A growing crisis threatens U.S. public data. Experts warn disappearing federal datasets could undermine science, policy, and democracy—and outline a plan to protect them.

In March 2026, more than a hundred information and data experts gathered in a converted Christian Science church to confront a problem most Americans never see but that shapes nearly every public debate we have. The nonprofit Internet Archive convened this national Information Stewardship Forum at its San Francisco headquarters because something fundamental is breaking: the country’s shared foundation of facts.

For decades, the United States has relied on a vast ecosystem of federal data on health, climate, the economy, education, demographics, scientific research, and more. This data is the backbone of journalism, policymaking, scientific discovery, and public accountability. It is how we know whether the air is safe to breathe, whether unemployment is rising or falling, whether a new disease is spreading, or whether a community is being left behind.


But over the past year, that foundation has proven far more fragile than anyone imagined.

Across agencies, data collections have been altered, discontinued, or quietly removed from public view. Analytic teams have been fired. Advisory committees have been disbanded. Climate and environmental datasets have vanished from federal websites. LGBTQ+ data has been dropped from surveys. Long‑running scientific projects have been defunded or re‑scoped. Even the leadership of national statistical agencies can now be replaced at will.

The open data movement—once focused on improving access and usability—has moved into a defensive mode. Thousands of volunteers have stepped up to download and archive endangered datasets. Lawsuits from physicians, farmers, and advocacy groups have restored some information. Congress has rejected some of the most extreme cuts. But the deeper lesson is unmistakable: America’s national data infrastructure can be undermined and damaged far more easily than anyone anticipated.

And when trusted data disappears, something else disappears with it: the possibility of a shared reality.

At a moment when Americans across the political spectrum already distrust institutions, losing reliable national data accelerates the slide toward fragmentation. Without a common set of facts, we cannot solve shared problems—or even agree on what those problems are.

A growing coalition of organizations, researchers, technologists, and civic leaders is working to save and preserve national data on many levels. Now it’s time to bring those lines of work together. We need a coordinated, national program to protect essential data and build alternatives where federal sources fail.

Such a program can begin by acknowledging that we cannot save everything. Data.gov, the federal portal for the government’s public data, provides access to more than 400,000 datasets. Not all are equally important, equally used, or equally at risk. The challenge is to identify the most essential datasets—those that underpin public health, climate science, economic stability, education, and democratic accountability—and to determine which are vulnerable.

A practical, scalable strategy can include several steps:

1. Track what we’ve lost. We need a thorough, AI-enabled scan of the federal data ecosystem to identify what has already been removed or altered, and automated monitoring to detect even subtle changes going forward.

2. Build coalitions in key domains. Public health experts know which datasets matter most to disease surveillance. Climate scientists know which environmental indicators are irreplaceable. Education researchers know which federal surveys track opportunity. These experts must work alongside data scientists, AI specialists, and philanthropic partners to map what truly counts.

3. Prioritize core datasets. Through interviews, surveys, and quantitative analysis—such as tracking citations in research or journalism—coalitions can identify a “core canon” of essential datasets in each field.

4. Assess the risks. Tools like the Data Checkup, developed by dataindex.us, can assess threats to federal datasets. This work can be automated and scaled with AI.

5. Determine the federal role. Some federal data—like satellite observations, national health surveillance, or economic indicators—cannot be replicated by states or private actors. Other data can be supplemented or replaced by state and local sources, private‑sector datasets, crowdsourcing, or nontraditional data sources.

6. Take action to save essential data. When federal data is essential, coalitions can pursue advocacy, public comments, direct engagement with agencies, or litigation. When alternatives exist, they can be developed, benchmarked, and scaled.

7. Put the data to work. The best way to defend data is to use it. Publishing use cases, visualizations, tools, and plain‑language insights helps the public see why this information matters. Generative AI can make federal and open data accessible to millions of non‑technical users.

8. Think globally. The threats to data go beyond the U.S. We need to track the international impacts of U.S. data loss, study how international sources might replace U.S. data, and share lessons learned with other countries.

9. Strengthen institutional protections. In addition to managing today’s immediate problems, we need to develop policies, laws, governance strategies, and guardrails for more stable, reliable data in the future.

10. Sustain the cycle. The threats will evolve. So must the response.
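The automated monitoring described in step 1 can be sketched in a few lines: snapshot each dataset’s metadata record, compute a stable fingerprint, and diff snapshots over time to flag removals, additions, and quiet edits. The example below is a minimal illustration, not any group’s actual tooling; the dataset IDs and metadata fields are invented for demonstration.

```python
import hashlib
import json

def fingerprint(metadata: dict) -> str:
    """Stable SHA-256 digest of a dataset's metadata record."""
    # Canonical serialization so the same record always hashes identically.
    canonical = json.dumps(metadata, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def detect_changes(old_snapshot: dict, new_snapshot: dict) -> dict:
    """Compare two {dataset_id: metadata} snapshots taken at different times."""
    removed = sorted(set(old_snapshot) - set(new_snapshot))
    added = sorted(set(new_snapshot) - set(old_snapshot))
    modified = sorted(
        ds for ds in set(old_snapshot) & set(new_snapshot)
        if fingerprint(old_snapshot[ds]) != fingerprint(new_snapshot[ds])
    )
    return {"removed": removed, "added": added, "modified": modified}

# Hypothetical snapshots of a data catalog on two dates.
old = {
    "epa-air-quality": {"title": "Air Quality Data", "modified": "2024-01-01"},
    "cdc-wonder": {"title": "CDC WONDER", "modified": "2024-02-01"},
}
new = {
    "cdc-wonder": {"title": "CDC WONDER", "modified": "2024-06-01"},
}
report = detect_changes(old, new)
# "epa-air-quality" is flagged as removed; "cdc-wonder" as modified.
```

In practice the snapshots would come from a catalog API (Data.gov exposes CKAN-style metadata), and the diff report would feed alerts to domain coalitions.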

The United States needs a durable, collaborative, and forward‑looking strategy to protect the information that underpins democratic decision‑making. The alternative is a future where facts become optional—and where the loudest voices, not the most accurate data, shape public life.

This essay draws from CODE’s longer, 3,000‑word white paper detailing the full scope of the challenge and a concrete proposal for action. That white paper draws on the work of many dedicated, expert organizations that are already protecting essential data in real time. It outlines how to build an integrated, collaborative, and scalable program that unites these efforts—combining expert judgment, a clear decision framework, rapid response capacity, and both human and AI‑enabled analysis. CODE hopes that this paper can be a starting point to encourage alliances and communities of practice that bring together subject‑matter experts, data advocates, technologists, and philanthropic partners.

Read the full report with the complete proposal for action at https://bit.ly/NationsDataProgram.

Joel Gurin is president and founder of the Center for Open Data Enterprise (CODE).

