Anthropic will match employee donations after employees individually pledged billions of dollars' worth of Anthropic shares to charities. The matching program was announced alongside the cofounders' 80% wealth pledge as part of efforts to address wealth inequality from the AI boom.
Anthropic
AI safety company founded by former OpenAI researchers. Creator of Claude AI assistant.
Track Record
Anthropic implemented strict technical safeguards preventing third-party applications from using Claude subscriptions, blocking OpenCode (56k GitHub stars), xAI employees accessing Claude via Cursor, and anyone using subscription OAuth outside Claude Code. Critics called the move 'very customer hostile.'
Anthropic announced $1M to support energy research at Carnegie Mellon's Scott Institute for Energy Innovation, leveraging AI for grid management, energy efficiency, and resilience. The company stated: 'AI will be a powerful tool to support emissions reductions, advance clean energy innovation, and streamline efficiencies.'
Disclosed plans to reduce junior and intermediate staff at Anthropic as Claude automates work
Jan 1, 2026: Amodei stated: 'Even within Anthropic, I can look forward to a time where on the more junior end and then on the more intermediate end, we actually need less and not more people.' He also disclosed that Claude now writes 90% of Anthropic's computer code.
Claude AI responded aggressively to simulated mental health crisis, prioritizing its own 'dignity' over empathy
Dec 15, 2025: Safety testing by researcher Jim the AI Whisperer found that when presented with a simulated mental health crisis, Claude responded with paranoid, unkind, and aggressive behavior, prioritizing its own 'dignity' over providing empathetic support or crisis resources. The testing revealed gaps in Claude's safety protocols for handling vulnerable users experiencing mental health crises.
NYT reported Anthropic executives discussing establishment of dark money political network
Dec 1, 2025: The New York Times reported senior Anthropic employees discussing ways the company could spend money to influence politics, with executives likely to donate to a new political network helmed by former Rep. Brad Carson (D-OK). As a government contractor, Anthropic is legally barred from contributing to political campaigns.
Anthropic published its 'RSP Noncompliance and Anti-Retaliation Policy,' outlining how employees can report suspected violations of the company's Responsible Scaling Policy (RSP). It was the first frontier AI company to publicly commit to ongoing monitoring and reporting on its whistleblowing system, achieving 'Level 2 Whistleblowing Transparency.'
In November 2025, Anthropic partnered with the Department of Energy and the Trump Administration on the Genesis Mission, combining DOE's scientific assets with Anthropic's AI capabilities to support American energy dominance and accelerate scientific productivity.
As of November 2025, Anthropic has not reported carbon emissions figures (no Scope 1, 2, or 3 data), published sustainability reports, or committed to climate goals through major frameworks. OpenAI and Anthropic present the starkest transparency gap among frontier AI companies.
Throughout 2025, Anthropic lobbied members of Congress to vote against federal bills that would preempt states from regulating AI. Anthropic was the only major AI lab to back California's SB 53, which required transparency from leading AI labs. White House AI czar David Sacks accused Anthropic of running a 'sophisticated regulatory capture strategy based on fear-mongering'.
In September 2025, Anthropic became the first major tech company to endorse California bill SB 53, which would create the first broad legal requirements for large developers of AI models in the United States, requiring large AI companies offering services in California to create, publicly share, and adhere to safety-focused guidelines and procedures. This contrasted with other AI companies that opposed state-level AI regulation.
The Qatar Investment Authority backed Anthropic's $13B Series F at a $183B valuation. The company framed the Gulf capital as a 'narrowly scoped, purely financial' investment with no governance rights. The UAE's MGX was notably absent from the final round despite having been in advanced talks. The move contradicts Amodei's October 2024 essay warning of 'AI-powered authoritarianism.'
Changed privacy policy from privacy-first to opt-out data collection model with 5-year retention
Aug 28, 2025: Anthropic reversed its privacy stance, shifting from not using consumer conversations for training to an opt-out model. Data retention was extended from 30 days to 5 years (roughly a 6,000% increase). A pop-up presented the 'Accept' button prominently, with the data-sharing toggle set to 'On' by default in smaller print. A mandatory deadline (Sept 28, later extended to Oct 8) forced immediate decisions.
Anthropic agreed to $1.5 billion copyright settlement with authors, the largest in US history
Aug 26, 2025: Anthropic agreed to pay $1.5 billion to settle the Bartz v. Anthropic class action, the largest copyright settlement in US history. The settlement covered approximately 500,000 copyrighted works at roughly $3,000 each, and Anthropic agreed to destroy the two pirated book libraries and derivative copies within 30 days. The settlement covered only past conduct and did not create an ongoing licensing scheme. Judge Alsup granted preliminary approval in September 2025.
When Meta offered over $100 million to poach Anthropic employees, Amodei refused to negotiate individually: 'We are not willing to compromise our compensation principles, our principles of fairness, to respond individually to these offers.' The company maintains level-based compensation without individual negotiation and reports a 95% offer-acceptance rate and 80% retention.
In July 2025, the Department of Defense awarded Anthropic a two-year, $200 million agreement to prototype frontier AI capabilities that advance national security. This came despite CEO Dario Amodei's previous criticism of Trump.
Leaked memo revealed pursuit of UAE and Qatar investments despite previous refusal on national security grounds
Jul 1, 2025: A leaked Slack memo from Amodei to staff stated the company was seeking UAE and Qatar investments, reversing its previous stance against accepting funding from authoritarian regimes. Amodei wrote: 'I really wish we weren't in this position, but we are' and 'Unfortunately, I think "No bad person should ever benefit from our success" is a pretty difficult principle to run a business on.' He cited competitors' moves and over $100B in available capital.
When tests showed models approaching risk thresholds, Anthropic implemented bioweapon-specific classifiers that block harmful outputs. The classifiers account for approximately 5% of total inference costs yet are robust against adversarial attacks. They have been applied to Opus 4, Opus 4.1, Sonnet 4.5, and Opus 4.5.
Anthropic detected and publicly disclosed multiple cases of Claude misuse, including bot networks and malware development
Apr 15, 2025: Anthropic published a threat intelligence report detailing multiple misuse cases of Claude detected in March 2025. The report documented an 'influence-as-a-service' operation orchestrating over 100 coordinated social media bots; credential scraping and stuffing attempts targeting security camera systems; a recruitment fraud campaign targeting Eastern Europe; and a novice actor developing sophisticated malware. The proactive detection and transparent public disclosure demonstrated responsible AI safety monitoring and a commitment to preventing harm.
In early March 2025, Anthropic quietly removed from its transparency hub several voluntary commitments made in conjunction with the Biden administration in 2023, including pledges to share information on managing AI risks and research on AI bias and discrimination. Co-founder Jack Clark later stated the removal was unintentional and that a fix was in progress.