company

DeepMind

British AI research laboratory and a subsidiary of Alphabet (Google). Known for AlphaGo, AlphaFold, and the Gemini AI models. Founded in 2010 by Demis Hassabis, Shane Legg, and Mustafa Suleyman; acquired by Google in 2014.

Team & Alumni

CEO · Co-founder · Jan 1, 2010 – Dec 1, 2019

Related Entities

Acquired by Google on Jan 26, 2014

Track Record

incidental

Approximately 300 DeepMind employees in London sought to join the Communication Workers Union, citing concerns that their AI technology is being used in the Gaza conflict through Project Nimbus, Google's $1.2 billion cloud contract with the Israeli government. At least five employees resigned over the company's military involvement and its reversal of ethical commitments against AI for defense.

Google DeepMind routinely enforces non-compete clauses of up to 12 months on departing employees, paired with continued salary payments that effectively place staff on paid 'garden leave.' Although financially compensated, former employees have described the restrictions as an 'abuse of power,' and some have considered relocating from the UK to California, where non-competes are generally unenforceable, to escape the contractual limitations. The practice has drawn criticism for limiting worker mobility in the competitive AI field.

negligent

When Google released Gemini 2.5 Pro in March 2025, the company failed to publish a model card detailing the model's capabilities and risk assessments. This violated pledges made at the UK–South Korea AI Seoul Summit (May 2024), in the 2023 White House voluntary commitments, and under the EU AI Code of Conduct (Oct 2023). Sixty UK lawmakers signed an open letter accusing Google DeepMind of a 'breach of trust.'

In February 2025, Google removed from its updated AI Principles its commitment to abstain from using AI for weapons and surveillance. The prior version stated the company would not pursue 'weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people' or 'technologies that gather or use information for surveillance violating internationally accepted norms.' Amnesty International called it 'a shame that Google has chosen to set this dangerous precedent.'

Thousands of contract workers evaluating Google's Gemini AI for accuracy and safety earn as little as $14-15 per hour in the US, far below industry standards for cognitively demanding AI evaluation work. Overseas raters in India and the Philippines report effective rates under $10 per hour after deductions. Workers face grueling deadlines and burnout, fueling accusations of exploitation in the AI evaluation pipeline.

In 2024, Google DeepMind published its Frontier Safety Framework, a set of protocols for proactively identifying future AI capabilities that could cause severe harm and for putting detection and mitigation mechanisms in place. The framework defines Critical Capability Levels across the autonomy, biosecurity, cybersecurity, and machine-learning R&D domains; safety cases must be developed and approved by corporate governance bodies before frontier models such as Gemini 2.0 are deployed to general availability.

Google terminated 50 employees in connection with protests against Project Nimbus, the $1.2 billion cloud contract with the Israeli government. The initial round of firings covered 28 employees, nine of whom were charged with trespassing. Software engineer Eddie Hatfield was fired after shouting at a conference, 'I refuse to build technology that powers genocide, apartheid, or surveillance.'

Google DeepMind has been an early partner of the UK AI Security Institute (AISI) since its inception in November 2023, committing to provide pre-release access to its most capable frontier models for independent safety evaluation. This made DeepMind one of the first AI labs to submit to external government safety testing, supporting the development of national AI safety infrastructure.

In July 2022, DeepMind, in partnership with EMBL-EBI, expanded the AlphaFold Protein Structure Database from roughly 1 million to over 200 million protein structures, covering nearly every known protein on Earth. The database is freely available under a CC-BY 4.0 license for both academic and commercial use. Over 2 million researchers in 190+ countries have used it, potentially saving hundreds of millions of years of research time.
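As an illustration of that open access, here is a minimal Python sketch (standard library only) that downloads one predicted structure. The file-naming convention AF-<accession>-F1-model_v4.pdb at alphafold.ebi.ac.uk and the example UniProt accession P69905 are assumptions about the current EMBL-EBI hosting, not details taken from this page.

    import urllib.request

    # Assumed file-naming convention of the EMBL-EBI-hosted AlphaFold DB;
    # the "model_v4" suffix tracks database releases and may change.
    ALPHAFOLD_FILES = "https://alphafold.ebi.ac.uk/files/AF-{acc}-F1-model_v4.pdb"

    def fetch_structure(accession: str) -> str:
        """Download one predicted structure as PDB-format text."""
        url = ALPHAFOLD_FILES.format(acc=accession)
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8")

    if __name__ == "__main__":
        # Example UniProt accession (human hemoglobin subunit alpha).
        pdb_text = fetch_structure("P69905")
        # AlphaFold stores its per-residue confidence score (pLDDT)
        # in the B-factor column of each ATOM record.
        atoms = [l for l in pdb_text.splitlines() if l.startswith("ATOM")]
        print(f"P69905: {len(atoms)} ATOM records retrieved")

The CC-BY 4.0 license means such programmatic retrieval is permitted for both academic and commercial use, subject to attribution.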

negligent

In April 2022, current DeepMind employees publicly detailed failures in the company's handling of harassment complaints. Employees described DeepMind leadership offering 'vague platitudes and defensive attempts to discredit the victim' rather than addressing reported harassment by a senior staff member. The employees criticized an 'obsession with reputation management at the cost of employees' well-being,' noting that the grievance process was 'life-changingly terrible' for those who reported harassment.

Israeli Government · $1.2B

DeepMind's AI technology is part of Google's Project Nimbus, a $1.2 billion cloud computing contract with the Israeli government including its defense establishment. Internal Google documents show the company knew it couldn't control how Israel would use the technology. The contract forbids Google from denying service to any Israeli government entities, including its military.

reactive

In March 2019, Google launched the Advanced Technology External Advisory Council (ATEAC) to guide responsible AI development, partly fulfilling promises made when it acquired DeepMind. The board was dissolved within a week after employee backlash over the inclusion of Heritage Foundation president Kay Coles James, who had a record of opposing LGBTQ+ rights, and drone company executive Dyan Gibbens. The membership of the original ethics board promised at the DeepMind acquisition was never publicly disclosed.

In 2015, DeepMind signed a deal with the Royal Free London NHS Foundation Trust that gave it access to 1.6 million identifiable patient records, including HIV status, drug overdoses, and abortions, ostensibly for Streams, a kidney-injury detection app. The ICO ruled in 2017 that the Royal Free had failed to comply with the Data Protection Act: no privacy impact assessment was conducted, and the scope of data access far exceeded what was needed for clinical safety testing.