Technology · Support = Good

AI Oversight

Supporting means...

Supports AI accountability frameworks; backs transparency requirements for AI systems; engages constructively with AI regulation; accepts third-party auditing; supports disclosure requirements; advocates for AI governance mechanisms

Opposing means...

Seeks blanket exemptions from AI oversight; lobbies against accountability mechanisms; resists transparency requirements; opposes third-party auditing; fights AI governance frameworks; seeks to undermine AI regulatory authority

Recent Incidents

Anthropic filed two federal lawsuits against the Trump administration on March 9, 2026, alleging illegal retaliation and First Amendment violations. The suits challenged the administration's designation of Anthropic as a 'supply chain risk' — a label normally reserved for foreign adversary contractors — after Anthropic refused Defense Secretary Pete Hegseth's ultimatum to allow unrestricted military use of Claude AI, including for mass surveillance and autonomous weapons. Over 30 employees from OpenAI and Google DeepMind, including Google chief scientist Jeff Dean, filed an amicus brief supporting Anthropic's case. Anthropic's CFO stated the government's actions could reduce revenue by 'multiple billions of dollars.'

negligent

On March 8, 2026, OpenAI's robotics division leader Caitlin Kalinowski resigned in protest over the company's Pentagon deal. In her resignation statement, she said 'surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation.' Her departure marked the most senior resignation from OpenAI over the military AI partnership.

reactive

On March 3, 2026, Sam Altman publicly admitted that OpenAI's Pentagon deal, announced February 28, was 'opportunistic and sloppy' and announced renegotiations to add explicit prohibitions on domestic surveillance and lethal autonomy without human authorization. Altman also publicly stated that Anthropic should not have been designated a supply chain risk, saying competitors setting ethical limits on military AI 'makes the whole industry better.'

incidental

In early March 2026, the #QuitGPT boycott movement exploded from 300,000 to over 2.5 million participants following OpenAI's Pentagon military AI deal. ChatGPT app uninstalls jumped 295% day-over-day and one-star reviews surged 775%. On March 3, approximately 50 protesters gathered outside OpenAI's San Francisco headquarters with signs reading 'Sam Altman is watching you' and 'QuitGPT.' Meanwhile, competitor Claude rose to #1 on the App Store, reaching 11.3 million daily active users.

At the India AI Impact Summit on February 19, 2026, Sam Altman called for urgent AI regulation, suggesting the world may need 'something like the IAEA for international coordination of AI.' He praised India as 'leading the world in AI adoption' and acknowledged that while some 'AI washing' is real, genuine tech-related job displacement is coming. Altman also announced a partnership with Tata to drive AI innovation.

SpaceX submitted an application to the U.S. Federal Communications Commission seeking approval to deploy as many as one million low-Earth-orbit satellites dedicated to artificial intelligence computing. The plan envisions orbital data centers powered by solar energy. Critics warn of escalating space debris, interference with astronomy, and unresolved environmental costs, with astronomers raising particular alarm about the light pollution a million-satellite constellation could produce.

Gebru has consistently called out tech executives who pivot to AI safety narratives after building potentially harmful technologies. In December 2025, she urged the public to question such rebrandings, arguing that the AI safety discourse is being co-opted by the same actors who created the problems, diverting attention from concrete harms to speculative existential risks.

In December 2025, NHS Confederation and Limbic launched a partnership to explore responsible AI adoption in mental health services. Limbic's AI is used by more than 500,000 patients across 45% of NHS England regions, and its chatbot holds Class IIa medical device certification, the only mental health AI chatbot in the UK to do so.

In a 60 Minutes interview, Amodei stated: 'I think I'm deeply uncomfortable with these decisions being made by a few companies, by a few people. And this is one reason why I've always advocated for responsible and thoughtful regulation of the technology.' When asked 'who elected you and Sam Altman?' he responded, 'No one, no one.'

In late 2025, Jensen Huang opposed the bipartisan Guaranteeing Access and Innovation for National Artificial Intelligence (GAIN AI) Act, which would have required chipmakers to give U.S. companies first pick of chips before selling to China or other foreign adversaries. The act passed the Senate in October as part of the annual Defense bill but reportedly faced resistance from the Trump White House after Huang's lobbying. Senator Warren stated Huang was 'sneaking in to meet with Senate Republicans behind closed doors as he kills the bipartisan GAIN AI Act.' Huang also advocated for federal AI regulation to preempt state laws protecting children, renters, and workers.

On September 4, 2025, Greg Brockman attended a White House dinner with 33 Silicon Valley leaders including Bill Gates, Sergey Brin, Sundar Pichai, Tim Cook, Mark Zuckerberg, and Satya Nadella. At the event, Brockman praised the administration: 'We've been just very impressed with how this Administration has really embraced AI. In addition to the most massive infrastructure building in history… There has been a choice of whether to approach it with optimism, and I think that that's what I've really seen from this Administration.'

Bengio launched LawZero, a nonprofit organization with $30 million in funding from the Future of Life Institute and Schmidt Sciences, aimed at building 'honest' AI systems that can detect and block harmful behavior by autonomous agents. The initiative is developing Scientist AI, a non-agentic system intended to act as a guardrail by predicting whether an AI agent's actions could cause harm.

In June 2025, Founders Fund led Anduril Industries' $2.5 billion Series G funding round at a $30.5 billion valuation. The $1 billion invested by Founders Fund was the largest single investment in the fund's history. Anduril manufactures autonomous weapons systems including the Altius-700M (tested with live warheads), unmanned aerial systems, and counter-UAS technology. The company is building Arsenal-1, a hyperscale manufacturing facility for autonomous weapons near Columbus, Ohio. Founder Palmer Luckey has embraced Trump's defense policies, stating it's 'good to scare people sometimes.' Founders Fund partner Trae Stephens co-founded Anduril and was considered for Deputy Secretary of Defense.

negligent

When Google released Gemini 2.5 Pro in March 2025, the company failed to publish a model card with capability and risk assessments. This violated pledges made at the UK-South Korea AI Seoul Summit (May 2024), the White House Commitments (2023), and the EU AI Code of Conduct (Oct 2023). Sixty UK lawmakers signed an open letter accusing Google DeepMind of a 'breach of trust'.

In January 2025, OpenAI announced Greg Brockman would lead the Stargate project, a $500 billion joint venture with SoftBank and Oracle to build AI data center infrastructure in the United States. The project was announced at the White House with President Trump, representing one of the largest public-private AI infrastructure investments in history. Brockman returned from a sabbatical specifically to lead this initiative.

Ng has consistently argued against broad AI regulation, warning that overregulation could stifle open-source innovation and benefit large incumbents. In January 2025, he expressed disappointment that Congress did not include a moratorium on state-level AI regulation in legislation, arguing that the net impact of proposed regulations was negative. He also criticized the White House Executive Order on AI for using the Defense Production Act framework.

Appointed by UK Prime Minister Rishi Sunak at the first AI Safety Summit in November 2023, Bengio chaired the International Scientific Report on the Safety of Advanced AI. The report was authored by 96 AI experts from over 30 countries plus the EU and UN, and was published in January 2025. It represented the most comprehensive international scientific assessment of advanced AI risks and safety measures.

$1.8M

OpenAI increased federal lobbying expenditure from $260,000 in 2023 to $1.76 million in 2024, a 577% increase. The company grew its lobbying team from 3 to 18 lobbyists. Key hires included former Senate staffers for Chuck Schumer and Lindsey Graham. Spending continued accelerating in 2025, reaching $2.1 million through September 2025. TIME Magazine reported OpenAI successfully lobbied to weaken EU AI Act provisions that would have classified general-purpose AI as 'high risk.'