
Surveillance Technology

Supporting means...

Sells facial recognition or biometric surveillance to police/government; enables mass surveillance; builds predictive policing tools; location tracking without consent; provides surveillance infrastructure to authoritarian regimes; biased recognition systems; no safeguards on use

Opposing means...

Refuses police/surveillance contracts; moratorium on facial recognition sales; addresses algorithmic bias in recognition systems; advocates for surveillance regulation; transparency about surveillance capabilities; exits surveillance markets

Recent Incidents

In January 2026, reporting revealed that ICE was using a Palantir-built tool called ELITE that taps Medicaid data to identify and arrest people for deportation. The tool maps potential targets and provides 'confidence scores' for individuals' current addresses. A data-sharing agreement between ICE and the Centers for Medicare and Medicaid Services gave ICE access to personal data of nearly 80 million Medicaid patients. The Electronic Frontier Foundation challenged the use of healthcare data for immigration enforcement, arguing patients never consented to their health-related information being repurposed for deportation.

reactive

Following the Guardian's investigation revealing mass Palestinian surveillance via Azure, Microsoft ceased and disabled a specified set of Israel Ministry of Defense subscriptions, including cloud storage and AI services. Microsoft President Brad Smith stated 'We do not provide technology to facilitate mass surveillance of civilians.' This was the first known case of a US tech company withdrawing services from the Israeli military since the Gaza war began. However, Microsoft's wider relationship with the IDF remained intact, and Unit 8200 reportedly migrated the surveillance data to Amazon Web Services.

A joint Guardian/+972 Magazine/Local Call investigation revealed Microsoft provided customized Azure cloud infrastructure to Israel's Unit 8200 intelligence unit for storing recordings of millions of daily Palestinian phone calls. By July 2025, the surveillance system held 11,500 terabytes of military data stored on Azure servers in the Netherlands and Ireland. Microsoft CEO Satya Nadella met with Unit 8200's commander in late 2021 to discuss the collaboration. Sources within Unit 8200 said the data was used to research and identify bombing targets in Gaza and to blackmail Palestinians in the West Bank.

The One Big Beautiful Bill Act, signed by President Trump on July 4, 2025, includes provisions that effectively grant Anduril Industries a monopoly on new autonomous surveillance towers (ASTs) for US Customs and Border Protection across both southern and northern borders. CBP confirmed to The Intercept that Anduril is now the country's only approved border tower vendor. Anduril's ASTs cover an estimated 30% of the US southern land border, using AI and computer vision to detect, identify, classify, and track people crossing the border. Civil liberties groups have raised concerns about the humanitarian impact of automated border surveillance.

Founders Fund, co-founded by Palantir chairman Peter Thiel, has been a major investor in Palantir Technologies since its founding in 2003. Palantir built the ImmigrationOS platform for ICE, receiving a $30 million contract in 2025. The Electronic Frontier Foundation reported in January 2026 that ICE uses a Palantir tool that feeds on Medicaid and other government data to identify and track people for arrest. The American Immigration Council documented how the system enables mass surveillance of immigrant communities. Founders Fund's continued investment in and promotion of Palantir directly supports the expansion of government surveillance infrastructure.

Paul Graham publicly criticized Palantir Technologies over its $30 million ImmigrationOS contract with ICE, urging programmers not to work for 'the company building the infrastructure of the police state.' He pressed a Palantir executive to commit not to build things that help the government violate the US constitution.

incidental

The World Uyghur Congress filed complaints in French courts (April 7 and September 8, 2025) against Huawei France, Hikvision, and Dahua, alleging complicity in crimes against humanity against Uyghurs. The charges include genocide, human trafficking, aggravated servitude, and concealment. The complaints allege that Huawei participated in developing 'innovative police laboratories' in East Turkistan, testing systems for detecting Uyghurs in crowds. The complaints are supported by the NGO Don't Fund Russian Army.

Google removed its commitment to abstain from using AI for weapons and surveillance from its updated AI Principles. The prior version stated the company would not pursue 'weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people' and 'technologies that gather or use information for surveillance violating internationally accepted norms.' Amnesty International called it 'a shame that Google has chosen to set this dangerous precedent.'

negligent

Nvidia processors have been integrated into Israeli military systems, including the Elbit Systems Lanius drone which uses the NVIDIA Jetson TX2 AI processor. Nvidia has its second-largest R&D center in Israel with 13% of its global workforce based there, and collaborates with over 800 Israeli startups, some contributing to military technology. In January 2025, Nvidia announced a $500 million investment in a new AI research center near Haifa. The American Friends Service Committee (AFSC) has flagged the dual-use potential of Nvidia technologies for surveillance and military applications in the context of the Israel-Palestine conflict.

Oracle settled a class action lawsuit for $115 million after plaintiffs alleged the company surveilled consumers online and offline, compiled personal data into detailed profiles including geolocation, finances, demographics, interests, and health data, and sold those profiles to third parties. Oracle's 'coretag' tracking code was embedded on thousands of websites to intercept consumer communications. Oracle claimed to have amassed dossiers on 5 billion people. The court granted preliminary approval in August 2024.

Under Ton-That's leadership, Clearview AI aggressively expanded its facial recognition platform to more than 3,100 law enforcement agencies across the United States, including the FBI and Department of Homeland Security. By 2024, law enforcement searches via Clearview AI had doubled to 2 million annually. The expansion included a $9.2 million ICE contract in 2025, with ICE personnel using the system globally. This occurred despite wrongful identification cases, including Randal Quran Reid who spent six days in jail due to a mistaken Clearview match.

In 2024, Mullvad launched DAITA (Defense Against AI-guided Traffic Analysis), a feature that adds dummy traffic patterns to prevent AI systems from identifying user behavior even on encrypted VPN connections. This addressed emerging threats from machine learning-based traffic analysis that could deanonymize users despite encryption.
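The core idea behind traffic-analysis defenses like DAITA can be sketched as two mechanisms: pad every packet to a constant size so lengths leak nothing, and inject dummy cover-traffic cells that are indistinguishable from real ones on the wire. The sketch below is purely illustrative and is not Mullvad's actual implementation; the function names, the two-byte length header, and the 1500-byte cell size are all assumptions.

```python
import os

CELL_SIZE = 1500  # assumed fixed cell size; real defenses tune this


def pad_packet(payload: bytes, cell_size: int = CELL_SIZE) -> bytes:
    """Pad a payload to a fixed-size cell so packet lengths carry no signal.

    The first two bytes record the real payload length so the receiver can
    strip the padding; the remainder is random filler.
    """
    if len(payload) > cell_size - 2:
        raise ValueError("payload too large for fixed-size cell")
    header = len(payload).to_bytes(2, "big")
    filler = os.urandom(cell_size - 2 - len(payload))
    return header + payload + filler


def unpad_packet(cell: bytes) -> bytes:
    """Recover the original payload from a fixed-size cell."""
    length = int.from_bytes(cell[:2], "big")
    return cell[2 : 2 + length]


def make_dummy_cell(cell_size: int = CELL_SIZE) -> bytes:
    """A cover-traffic cell carrying no payload.

    Once encrypted, it is the same size as a real cell, so an observer
    timing and measuring packets cannot tell dummy from data.
    """
    return pad_packet(b"", cell_size)
```

Real deployments also randomize the *timing* of dummy cells, since machine-learning classifiers exploit inter-packet gaps as much as sizes; that scheduling logic is omitted here.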

In April 2024, Microsoft announced a $1.5 billion investment in G42, an Abu Dhabi-based AI company chaired by Sheikh Tahnoon bin Zayed Al Nahyan, UAE's National Security Advisor. G42's CEO Peng Xiao previously led Pegasus, a DarkMatter subsidiary involved in surveillance operations. Microsoft President Brad Smith joined G42's board. Congressional investigators raised concerns about G42's China ties and links to human rights abuses. The deal followed a secret agreement under which G42 divested from China to satisfy U.S. security concerns.

Under Bezos's leadership, Amazon developed and deployed an extensive employee surveillance system in its warehouses. In January 2024, France's CNIL fined Amazon France Logistique EUR 32 million ($35M) for an 'excessively intrusive' surveillance system that tracked worker scanner inactivity with such precision that employees could be required to justify any break lasting just minutes. U.S. Senators Blumenthal, Booker, Markey, Sanders, and Warren wrote to Bezos warning that Amazon's AI camera surveillance of delivery drivers could 'dramatically decrease Americans' ability to work, move, and assemble in public without being surveilled.' Amazon also faced criticism for its Ring doorbell partnerships with police and its Rekognition facial recognition system sold to law enforcement.

Bark Technologies monitors more than 3,400 schools, assigning mental health 'risk scores' to students based on their communications. Research found that 44% of schools report students being contacted by police as a result of monitoring. GoGuardian, a similar tool, flags LGBTQ+ resources and counseling sites; a trans student was reported to officials over a writing assignment about past therapy. Students report self-censoring and avoiding online mental health resources because of the surveillance. Academic research found that 'universal mental health screening does not improve clinical or academic outcomes and has harmful effects.'

In 2023, Palantir was awarded a seven-year £330M contract with NHS England to build a Federated Data Platform, centralizing patient data from up to 240 NHS trusts and integrated care systems. Critics raised concerns about a surveillance-focused company managing sensitive health data, including mental health records, cancer screening, and STI vaccination data. Department of Health data showed that over 300 different purposes for processing information had been created. A former NHS AI lab director who had pledged to close the COVID datastore later left to join Palantir, raising revolving-door concerns.

negligent $30.8M

In May 2023, the FTC charged Amazon with violating children's privacy law (COPPA) by retaining kids' Alexa voice recordings indefinitely and undermining parental deletion requests ($25M fine). Separately, Ring was fined $5.8M after an employee viewed thousands of videos from 81+ female users' cameras in intimate spaces. Ring's security failures from 2016-2020 also enabled hackers to access consumer accounts and cameras.

Since at least 2022, the UK Home Office has employed Anduril's Maritime Sentry Towers to detect and help intercept refugees crossing the English Channel in small boats. The towers scan for vessels at a range of over 20km from shore. Privacy International highlighted this as a case of dual-use military surveillance technology being applied to immigration enforcement, raising concerns about the use of military-grade AI surveillance against vulnerable populations seeking asylum.

reactive

In May 2022, Clearview AI under Ton-That's leadership settled the ACLU's lawsuit filed under the Illinois Biometric Information Privacy Act (BIPA). The settlement permanently banned Clearview from making its faceprint database available to most businesses and private entities nationwide, and barred sales to any Illinois entity including police for five years. A subsequent March 2025 class-action settlement granted class members a 23% equity stake in Clearview AI, valued at roughly $51.75 million, representing one of the largest biometric privacy settlements in history.