Government must protect us from geolocational disinformation

Opinion

City with GPS markers. (Mongkol Chuewong/Getty Images)

Crampton is an adjunct professor of geography at George Washington University and a member of Scholars.org.

As artificial intelligence becomes more powerful and more deeply embedded in society, governments around the world are beginning to regulate the technology and to balance its benefits against its potential harms. Yet while significant attention has been paid to reducing risk in the realms of health, finance and privacy, policymakers have left one element largely unaddressed: geolocation data.


This data, which reports the physical location of a person or of a device like a smartphone, is powerful, sensitive and highly valuable. AI techniques already being adopted to acquire, process and automate spatial and locational data are a particular concern that calls for swift action. But policymakers can simultaneously look to the future and work to ensure that we develop independent, trustworthy AI governance for geolocation by drawing on the hard-won knowledge of the spatial digital revolution of the past two decades. To realize the best outcomes on privacy, and in combating disinformation and deanonymization threats, policymakers must partner with geospatial domain experts rather than legislate around them.

The current regulatory landscape

In 2023, privacy legislation protecting “sensitive information” was passed in California, Colorado, Connecticut, Delaware, Florida, Indiana, Iowa, Montana, Oregon, Tennessee, Texas and Utah. A significant number of these laws include a provision covering “precise geolocation.” Data qualify as indicating precise geolocation only when they locate their subject within a radius of 1,750 feet, the equivalent of approximately one-third of a mile. That is significant territory in a densely populated urban area, as this interactive map of health care facilities in Washington, D.C., shows.
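To make that scale concrete, here is a minimal back-of-the-envelope sketch in Python. The density figure is an assumption (a round number near Washington, D.C.'s roughly 11,000 residents per square mile), not anything drawn from the statutes.

```python
import math

# What does a 1,750-foot radius actually cover? The density figure is an
# assumed round number near Washington, D.C.'s, not from any statute.
radius_ft = 1_750
radius_mi = radius_ft / 5_280            # ~0.33 miles
area_sq_mi = math.pi * radius_mi ** 2    # ~0.35 square miles

density_per_sq_mi = 11_000               # assumed urban residential density
print(f"radius: {radius_mi:.2f} mi, circle area: {area_sq_mi:.2f} sq mi")
print(f"~{area_sq_mi * density_per_sq_mi:,.0f} residents inside the circle")
```

At that density, a single “precise geolocation” circle already blankets several thousand residents and, as the map suggests, many sensitive destinations at once.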

In Europe, the EU AI Act prohibits real-time remote biometric identification in publicly accessible spaces, such as live facial recognition. Concurrently, U.S. legislators have become increasingly concerned about the risks of artificial intelligence, and the White House issued an executive order calling for new standards to prevent AI bias, threats and misuse. In Carpenter v. United States (2018), the Supreme Court held that law enforcement agencies need a warrant to obtain location data from cell-phone towers; however, that data is imprecise, and law enforcement has since turned to richer data sources not covered by the ruling, such as app-based location data sourced directly from Google.

While these are welcome developments in the ongoing effort to secure privacy, geolocation carries unique risks that this legislation, and the policymakers behind it, have yet to address.

Risks

There are three main categories of risk for geolocation data governance: disinformation, surveillance, and market concentration and antitrust. Disinformation (e.g., fake maps and data, propaganda), bias and discrimination raise issues of trustworthiness, privacy and ethics. The concern for AI is not just low-quality knowledge but low-quality learning and low-quality meaning. For example, predictive policing, in which data is analyzed to predict and preempt potential future crime, may be based on poor, false or biased data, and that can lead to real-world discriminatory consequences.
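The feedback dynamic is easy to see in a toy simulation. The sketch below uses entirely invented numbers, in the spirit of published work on runaway feedback loops in predictive policing: patrols follow past records, new records accumulate only where patrols go, and an initial bias in the data sustains itself even though the two districts are truly identical.

```python
import random

random.seed(1)

# Two districts with IDENTICAL true crime rates; district A starts with
# more recorded incidents because it was historically patrolled more.
# All numbers are invented for illustration.
true_rate = {"A": 0.5, "B": 0.5}   # daily chance of an incident, per district
recorded = {"A": 20, "B": 10}      # biased historical record

for _ in range(2_000):
    total = sum(recorded.values())
    for district in recorded:
        patrol_share = recorded[district] / total   # patrols follow the data
        # An incident enters the data only if officers are there to log it,
        # so the heavily patrolled district accumulates records faster.
        if random.random() < true_rate[district] * patrol_share:
            recorded[district] += 1

print(recorded)   # A's head start persists; the record never evens out
```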

  • Fake data can infiltrate maps (spatial databases), intentionally or not. Fake data could be fed into driving apps or autonomous vehicles to chaotic effect, and AI applied to geospatial data can misclassify satellite imagery.
  • Disinformation may include falsely showing a person to be at a location where they were not, known as “location spoofing,” for blackmail or to cause reputational damage. “Deepfake geography” covers the inverse: faking that a person is not at a place they should be. Imagine truckers’ data hacked to falsely show them as having deviated from their routes; a simple plausibility check against this is sketched after this list.
  • Inadvertent misinformation, proliferated by a lack of relevant geospatial analytic expertise, can also lead to detrimental outcomes. Inaccuracy and uncertainty can arise from analyzing a phenomenon at the wrong spatial scale (known as the “Openshaw effect”), or from not accounting for how zone boundaries influence the analysis of aggregated data (the “modifiable areal unit problem”); the second sketch below demonstrates the scale effect.
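One common defense against spoofed or tampered tracks is a physical plausibility check: consecutive fixes that imply an impossible travel speed get flagged. The sketch below is a minimal illustration rather than production fraud detection; the 130 km/h ceiling and the sample track are assumptions.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in km."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def implausible_jumps(track, max_kmh=130):
    """Return consecutive fix pairs whose implied speed exceeds max_kmh.

    `track` is a list of (timestamp_seconds, lat, lon) tuples, oldest first;
    130 km/h is an assumed ceiling for plausible road travel.
    """
    jumps = []
    for (t1, lat1, lon1), (t2, lat2, lon2) in zip(track, track[1:]):
        hours = (t2 - t1) / 3600
        if hours > 0 and haversine_km(lat1, lon1, lat2, lon2) / hours > max_kmh:
            jumps.append((t1, t2))
    return jumps

# A "truck" covering roughly 460 km in ten minutes is flagged immediately.
track = [(0, 38.90, -77.04), (600, 43.07, -77.04)]
print(implausible_jumps(track))   # -> [(0, 600)]
```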
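The scale effect is just as easy to demonstrate on synthetic data: aggregating the very same points into larger and larger zones inflates the measured correlation, which is how an analysis at the wrong scale quietly manufactures a finding. Everything below is invented; the pattern, not the exact numbers, is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic demonstration of the scale effect: the SAME point data yields
# different correlations depending on the size of the aggregation zones.
n = 10_000
x = rng.uniform(0, 100, n)            # point locations on a 100x100 map
y = rng.uniform(0, 100, n)
shared = 0.02 * x                     # a weak common spatial trend
var_a = rng.normal(0, 1, n) + shared  # two individual-level variables
var_b = rng.normal(0, 1, n) + shared

print(f"individual level: r = {np.corrcoef(var_a, var_b)[0, 1]:+.2f}")
for zone in (5, 25, 50):              # zone edge length, in map units
    cell_id = (x // zone) * 1_000 + (y // zone)        # assign points to zones
    idx = np.unique(cell_id, return_inverse=True)[1]
    counts = np.bincount(idx)
    mean_a = np.bincount(idx, weights=var_a) / counts  # zone averages,
    mean_b = np.bincount(idx, weights=var_b) / counts  # like census tables
    print(f"zone size {zone:>2}: r = {np.corrcoef(mean_a, mean_b)[0, 1]:+.2f}")
# Correlation strengthens as the zones grow, even though the underlying
# individual-level relationship never changed.
```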

Surveillance and locational tracking, which can include wide-scale biometric identification in real time or upon review of previously gathered data, pose many threats to privacy. Inferring personally identifiable information from geospatial data obtained through surveillance is all too easy, and the resulting infringements include stripping the protection from anonymized data (deanonymization) and uncovering the identity of a person or organization that has been obscured (re-identification).

One well-known 2013 study found that knowing just four spatio-temporal points was enough to re-identify 95 percent of individuals, and that even when geospatial data is made less precise, it can still reliably re-identify individuals; it takes a high degree of imprecision before location data loses its power to pinpoint. A newer study of metro card travel data reached similar conclusions: three random location points, drawn from within a time window of between one minute and one hour, were sufficient to identify 67 percent to 90 percent of users. And facial recognition technology, which carries clear privacy risks, is now widely employed by law enforcement.
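The measurement behind these findings, often called “unicity,” is straightforward to reproduce on synthetic data: draw a few points from one person's location trace and count how many people in the dataset are consistent with them. The sketch below invents all of its data and parameters, so it illustrates the method rather than the studies' exact figures.

```python
import random

random.seed(0)

# Unicity on synthetic traces: each "user" visits random (place, hour)
# points. All sizes below are invented for illustration.
N_USERS, N_PLACES, N_HOURS, TRACE_LEN = 1_000, 20, 24, 30

traces = [
    {(random.randrange(N_PLACES), random.randrange(N_HOURS))
     for _ in range(TRACE_LEN)}
    for _ in range(N_USERS)
]

def unique_fraction(k):
    """Fraction of users pinned down by k random points from their own trace."""
    unique = 0
    for trace in traces:
        sample = set(random.sample(sorted(trace), k))
        matches = sum(sample <= other for other in traces)
        unique += (matches == 1)       # only the user's own trace fit
    return unique / N_USERS

for k in (2, 3, 4):
    print(f"{k} points -> {unique_fraction(k):.0%} of users re-identified")
```

Even in this small toy universe, three or four points typically isolate most users, which is the studies' core warning.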

Weak antitrust enforcement and market concentration among tech companies point to increased opportunities for large data breaches and unethical use. The market is dominated by deep-pocketed AI companies including OpenAI, Google and Microsoft; these companies own and control the high-tech market, especially “high compute” fields like machine learning and AI training, effectively locking out competitors.

Recommendation to begin risk mitigation

The White House Office of Science and Technology Policy and Congress can hold hearings with geospatial industry and academic experts to identify current and emerging threats to privacy from geolocation data, geolocation services and analytics. The quality and efficacy of legislation will depend on collaboration with, and transparency from, the experts who are designing and deploying these emerging technologies.

