Technology (Support = Good)

AI Oversight

Supporting means...

Supports AI accountability frameworks; backs transparency requirements for AI systems; engages constructively with AI regulation; accepts third-party auditing; supports disclosure requirements; advocates for AI governance mechanisms

Opposing means...

Seeks blanket exemptions from AI oversight; lobbies against accountability mechanisms; resists transparency requirements; opposes third-party auditing; fights AI governance frameworks; seeks to undermine AI regulatory authority

Recent Incidents

Gebru has consistently called out tech executives who pivot to AI safety narratives after building potentially harmful technologies. In December 2025, she urged the public to question such rebrandings, arguing that the AI safety discourse is being co-opted by the same actors who created the problems, diverting attention from concrete harms to speculative existential risks.

In December 2025, the NHS Confederation and Limbic launched a partnership to explore responsible AI adoption in mental health services. Limbic's AI is used by more than 500,000 patients across 45% of NHS England regions. The company has achieved Class IIa medical device certification, making it the only mental health AI chatbot in the UK to do so.

In a 60 Minutes interview, Amodei stated: 'I think I'm deeply uncomfortable with these decisions being made by a few companies, by a few people. And this is one reason why I've always advocated for responsible and thoughtful regulation of the technology.' When asked 'who elected you and Sam Altman?', he responded 'No one, no one.'

Bengio launched LawZero, a nonprofit organization with $30 million in funding from the Future of Life Institute and Schmidt Sciences, aimed at building 'honest' AI systems that can detect and block harmful behavior by autonomous agents. The initiative is developing Scientist AI, a non-agentic system intended to act as a guardrail by predicting whether an AI agent's actions could cause harm.

In June 2025, Founders Fund led Anduril Industries' $2.5 billion Series G funding round at a $30.5 billion valuation. The $1 billion invested by Founders Fund was the largest single investment in the fund's history. Anduril manufactures autonomous weapons systems including the Altius-700M (tested with live warheads), unmanned aerial systems, and counter-UAS technology. The company is building Arsenal-1, a hyperscale manufacturing facility for autonomous weapons near Columbus, Ohio. Founder Palmer Luckey has embraced Trump's defense policies, stating it's 'good to scare people sometimes.' Founders Fund partner Trae Stephens co-founded Anduril and was considered for Deputy Secretary of Defense.


When Google released Gemini 2.5 Pro in March 2025, the company failed to publish a model card with capabilities and risk assessments. This violated pledges made at the UK-South Korea AI Summit (Feb 2024), the White House Commitments (2023), and the EU AI Code of Conduct (Oct 2023). Sixty UK lawmakers signed an open letter accusing Google DeepMind of a 'breach of trust.'

Ng has consistently argued against broad AI regulation, warning that overregulation could stifle open-source innovation and benefit large incumbents. In January 2025, he expressed disappointment that Congress did not include a moratorium on state-level AI regulation in legislation, arguing that the net impact of proposed regulations was negative. He also criticized the White House Executive Order on AI for using the Defense Production Act framework.

Appointed by UK Prime Minister Rishi Sunak at the first AI Safety Summit in November 2023, Bengio chaired the International Scientific Report on the Safety of Advanced AI. The report was authored by 96 AI experts from over 30 countries plus the EU and UN, and was published in January 2025. It represented the most comprehensive international scientific assessment of advanced AI risks and safety measures.


OpenAI increased federal lobbying expenditure from $260,000 in 2023 to $1.76 million in 2024, a 577% increase. The company grew its lobbying team from 3 to 18 lobbyists. Key hires included former Senate staffers for Chuck Schumer and Lindsey Graham. Spending continued accelerating in 2025, reaching $2.1 million through September 2025. TIME Magazine reported OpenAI successfully lobbied to weaken EU AI Act provisions that would have classified general-purpose AI as 'high risk.'

In 2024, Khosla publicly opposed California's SB 1047 AI safety bill, writing an opinion article in The Sacramento Bee arguing the bill could harm national security. He called the bill's author, Senator Scott Wiener, 'clueless about the real dangers of AI' and 'not qualified to have an opinion' on global national security issues. While Khosla supports AI safety research funding, he opposed legislative safety frameworks that could constrain AI development.

In mid-2024, Yann LeCun vocally opposed California's SB 1047 AI safety bill. He argued that making technology developers liable for bad uses would 'simply stop technology development' and would 'certainly stop the distribution of open source AI platforms, which will kill the entire AI ecosystem.' He called the bill based on an 'illusion of existential risk pushed by a handful of delusional think-tanks' and warned that without open-source AI, 'AI start-ups will just die.'

Despite opposing broad AI regulation, Ng has been a vocal supporter of targeted transparency legislation. He endorsed California's SB 53 and New York's RAISE Act, arguing that transparency requirements for large AI platforms would give regulators better visibility into actual problems rather than hypothetical risks.

Nvidia joined the National Institute of Standards and Technology's U.S. Artificial Intelligence Safety Institute Consortium (AISIC), working to advance trustworthy AI standards. The company was among 20 organizations (alongside Anthropic, Microsoft, and OpenAI) to sign the Frontier AI Safety Commitments at the 2024 Seoul AI Summit. Nvidia also developed NeMo Guardrails, open-source software for ensuring LLM responses are accurate and appropriate, and launched the NVIDIA Halos safety stack for physical AI systems such as autonomous vehicles and robotics. Additionally, Nvidia partnered with the NSF, contributing $77 million to the Open Multimodal AI Infrastructure project led by the Allen Institute for AI.

In 2024, Google DeepMind published its Frontier Safety Framework, a set of protocols for proactively identifying future AI capabilities that could cause severe harm and implementing detection and mitigation mechanisms. The framework defines Critical Capability Levels across autonomy, biosecurity, cybersecurity, and ML R&D domains. Safety cases must be developed and approved by corporate governance bodies before general availability deployment of frontier models like Gemini 2.0.

In April 2024, GitLab launched its AI Transparency Center, publicly documenting AI ethics principles, AI continuity plans, and details of third-party AI models powering GitLab Duo features. The company committed to avoiding unfair bias, clearly identifying AI model providers, and publishing processes for vendor changes. GitLab also achieved ISO/IEC 42001 certification for AI governance.

In March 2024, xAI released Grok-1 model weights under the Apache 2.0 license, and in August 2025 open-sourced Grok 2.5, with a promise to open-source Grok 3 within six months. Musk positioned this as fulfilling his commitment to open AI development, consistent with his earlier criticism of OpenAI for becoming closed-source. However, AI experts including Bruce Perens (creator of the Open Source Definition) noted that xAI released only model weights, not the training data or training process, making the 'open source' label disputed.

In 2024, Gebru publicly criticized OpenAI for refusing to disclose what data they use to train their models or the architecture of their systems, stating that they claim withholding this information is for the public's own good. She also rejected the possibility of joining OpenAI's board, calling the prospect 'repulsive' and saying any board member would face a constant uphill battle.