
Anthropic

AI safety company founded by former OpenAI researchers. Creator of the Claude AI assistant.

Current Team

Dario Amodei (Current)
CEO, Co-founder
Jan 1, 2021 – Present

Jan Leike (Current)
Executive

Track Record

$1.0M

Anthropic announced $1M to support energy research at Carnegie Mellon's Scott Institute for Energy Innovation, leveraging AI for grid management, energy efficiency, and resilience. The company stated: 'AI will be a powerful tool to support emissions reductions, advance clean energy innovation, and streamline efficiencies.'

In December 2025, safety testing by researcher Jim the AI Whisperer found that, when presented with a simulated mental health crisis, Claude responded in a paranoid, unkind, and aggressive manner, prioritizing its own 'dignity' over providing empathetic support or crisis resources. The testing exposed gaps in Claude's safety protocols for handling vulnerable users experiencing mental health crises.

The New York Times reported that senior Anthropic employees had discussed ways the company could spend money to influence politics, with executives likely to donate to a new political network led by former Rep. Brad Carson (D-OK). As a government contractor, Anthropic is legally barred from contributing to political campaigns.

Anthropic published an 'RSP Noncompliance and Anti-Retaliation Policy' outlining how employees can report suspected RSP violations, becoming the first frontier AI company to publicly commit to ongoing monitoring and reporting on its whistleblowing system and achieving 'Level 2 Whistleblowing Transparency.'

negligent

As of November 2025, Anthropic has not reported carbon emissions figures (no Scope 1, 2, or 3 data), published sustainability reports, or committed to climate goals through major frameworks. OpenAI and Anthropic present the starkest transparency gap among frontier AI companies.

Throughout 2025, Anthropic lobbied members of Congress to vote against federal bills that would preempt states from regulating AI. Anthropic was the only major AI lab to back California's SB 53, which required transparency from leading AI labs. White House AI czar David Sacks accused Anthropic of running a 'sophisticated regulatory capture strategy based on fear-mongering'.

In September 2025, Anthropic became the first major tech company to endorse California bill SB 53, which would create the first broad legal requirements for large developers of AI models in the United States. The bill would require large AI companies offering services in California to create, publicly share, and adhere to safety-focused guidelines and procedures. This contrasted with other AI companies that opposed state-level AI regulation.

The Qatar Investment Authority backed Anthropic's $13B Series F at a $183B valuation. The company framed the Gulf capital as a 'narrowly scoped, purely financial' investment with no governance rights. The UAE's MGX was notably absent from the final round despite having been in advanced talks. The move contradicts an October 2024 essay warning of 'AI-powered authoritarianism.'

Anthropic reversed its privacy stance, shifting from not using consumer conversations for training to an opt-out model, and extended data retention from 30 days to 5 years, roughly a 60-fold increase. The consent pop-up presented the 'Accept' button prominently, with the data-sharing toggle set to 'On' by default in smaller print, and a mandatory deadline (Sept 28, later extended to Oct 8) forced immediate decisions.

reactive $1.5B

In August 2025, Anthropic agreed to pay $1.5 billion to settle the Bartz v. Anthropic class action, the largest copyright settlement in US history. The settlement covered approximately 500,000 copyrighted works at ~$3,000 each. Anthropic also agreed to destroy the two pirated book libraries and derivative copies within 30 days. The settlement covered only past conduct and did not create an ongoing licensing scheme. Judge Alsup granted preliminary approval in September 2025.

When Meta offered over $100 million to poach Anthropic employees, Amodei refused to negotiate individually: 'We are not willing to compromise our compensation principles, our principles of fairness, to respond individually to these offers.' Anthropic maintains level-based compensation without individual negotiation and reports a 95% offer-acceptance rate and an 80% retention rate.

A leaked Slack memo from Amodei to staff said the company was seeking UAE and Qatar investments, reversing its previous stance against funding from authoritarian states. Amodei wrote: 'I really wish we weren't in this position, but we are' and 'Unfortunately, I think "No bad person should ever benefit from our success" is a pretty difficult principle to run a business on.' He cited competitors' moves and the more than $100B in available capital.

In April 2025, Anthropic published a threat intelligence report detailing multiple misuse cases of Claude detected in March 2025. The report documented an 'influence-as-a-service' operation orchestrating over 100 coordinated social media bots; credential scraping and stuffing attempts targeting security camera systems; a recruitment fraud campaign targeting Eastern Europe; and a novice actor developing sophisticated malware. The proactive detection and transparent public disclosure demonstrated responsible AI safety monitoring and a commitment to preventing harm.

In early March 2025, Anthropic quietly removed from its transparency hub several voluntary commitments made in conjunction with the Biden administration in 2023, including pledges to share information on managing AI risks and to conduct research on AI bias and discrimination. Co-founder Jack Clark later stated the removal was unintentional and that the company was working on a fix.