In December 2024, OpenAI announced plans to convert from a nonprofit-controlled structure to a for-profit public benefit corporation. California Attorney General Rob Bonta approved the restructuring in October 2025 after extracting concessions. The completed deal gave Microsoft roughly 27% ownership, and SoftBank's $30B investment had been contingent on the restructuring going through. A coalition of more than 60 California nonprofits (Eyes on OpenAI) criticized the deal as setting a dangerous precedent for startups evading taxes and as having 'a bazillion conflicts of interest.' Elon Musk attempted to block the conversion, at one point offering $97.4B to acquire the company.
OpenAI quietly retitled its 'Commitment to Diversity' webpage to 'Building Dynamic Teams' and removed all mentions of diversity and inclusion from the page.
OpenAI increased its federal lobbying expenditure from $260,000 in 2023 to $1.76 million in 2024, a 577% increase, and expanded its lobbying team from three to 18 lobbyists. Key hires included former Senate staffers for Chuck Schumer and Lindsey Graham. Spending continued to accelerate in 2025, reaching $2.1 million by September. TIME Magazine reported OpenAI successfully lobbied to weaken EU AI Act provisions that would have classified general-purpose AI as 'high risk.'
Between 2024 and 2025, ChatGPT's 'share' feature allowed users to make conversations 'discoverable,' which caused those chats to be indexed by search engines and archiving services. Over 100,000 shared chats were reportedly indexed and later scraped, exposing API keys, access tokens, personal identifiers, and sensitive business data. Many users did not realize that 'discoverable' meant publicly searchable and permanently archived, and the feature's warnings about these privacy implications proved inadequate.
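To illustrate the kind of exposure involved, here is a minimal sketch of how scraped chat transcripts can be scanned for leaked credentials. The patterns and the sample transcript are illustrative assumptions, not data from the incident; real secret scanners such as truffleHog or gitleaks use far larger rule sets.

```python
import re

# Illustrative credential patterns only; the sample chat text is invented.
SECRET_PATTERNS = {
    "openai_api_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "bearer_token": re.compile(r"Bearer\s+[A-Za-z0-9\-_\.]{20,}"),
}

def scan_for_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs found in a transcript."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

shared_chat = "Here is my config: OPENAI_API_KEY=sk-abc123def456ghi789jkl012"
for name, value in scan_for_secrets(shared_chat):
    print(f"leaked {name}: {value[:8]}...")  # truncate before logging
```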
Reports revealed that OpenAI transcribed more than 1 million hours of YouTube videos with its Whisper speech recognition system to create training data for GPT-4; OpenAI President Greg Brockman reportedly helped collect the videos himself. Internal staff discussed whether the transcription violated YouTube's terms of service, which prohibit scraping and downloading content.
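For context on the transcription step, this is a minimal sketch using the open-source `whisper` package OpenAI released; the model size and the local file name are placeholder assumptions, not details from the reports.

```python
# pip install openai-whisper  (OpenAI's open-source Whisper package)
import whisper

# "base" is one of the published checkpoint sizes (tiny/base/small/medium/large).
model = whisper.load_model("base")

# "audio.mp3" is a placeholder for a locally stored file; bulk-downloading
# YouTube audio to feed a pipeline like this is what reportedly raised the
# terms-of-service concerns discussed internally.
result = model.transcribe("audio.mp3")
print(result["text"])
```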
The New York Times sued OpenAI and Microsoft in December 2023 for using NYT articles to train ChatGPT without permission. The Authors Guild filed a separate suit on behalf of 17 authors, including John Grisham and George R.R. Martin. By 2025, 51 copyright lawsuits in total had been filed against AI companies. In January 2025, a federal judge ordered OpenAI to produce its GPT-4 training dataset to plaintiffs. Canadian and Indian news publishers also filed suits.
OpenAI developed and published a Preparedness Framework for systematically evaluating AI model risks before release, committing not to deploy models that exceed 'Medium' risk thresholds without sufficient safety interventions. The company also committed to giving US government safety agencies pre-deployment access to test frontier models, and in 2024 it disbursed $7.5 million in AI safety research grants. These commitments drew criticism, however, after the Superalignment team dissolved in May 2024 and its co-lead Jan Leike resigned, citing insufficient prioritization of safety.
Lawyer Steven Schwartz used ChatGPT to conduct legal research for a personal injury case (Mata v. Avianca, Inc.). ChatGPT hallucinated multiple fake legal cases with convincing-looking citations and case summaries, and Schwartz submitted them to federal court without verifying that they existed. When opposing counsel and the judge could not locate the cases, they were revealed to be AI-generated fictions. The judge sanctioned Schwartz and his firm, fining them $5,000, and the incident became a landmark example of the dangers of AI hallucinations in professional contexts.
In the spring of 2023, Samsung engineers used ChatGPT to debug proprietary source code and review internal business documents, pasting them directly into the chatbot. This created an unintentional data leakage scenario because ChatGPT retained conversations for model training unless users explicitly opted out. Samsung banned internal ChatGPT use in May 2023. The incident highlighted insufficient warnings to enterprise users about data retention policies and the risks of using consumer AI tools with sensitive corporate information.
On March 20, 2023, a bug in the open-source redis-py client library used by ChatGPT caused a data leak in which some users could see the chat titles and first messages of other users' conversations. In addition, payment-related information (names, email addresses, payment addresses, and the last four digits and expiration dates of credit cards) belonging to approximately 1.2% of ChatGPT Plus subscribers (an estimated 12,000+ paying users) was exposed. OpenAI took ChatGPT offline for roughly 9 hours to fix the vulnerability, disclosed the incident promptly, and notified affected users.
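The underlying failure class is worth illustrating: a pooled connection returned to the pool with an unread reply causes the next borrower to read someone else's response. The sketch below is a self-contained simulation of that off-by-one behavior under assumed names, not OpenAI's or redis-py's actual code; OpenAI's postmortem attributed the leak to canceled requests corrupting pooled connections in a broadly similar way.

```python
from collections import deque
import queue

class NaiveConnection:
    """Simulates a pooled connection whose replies queue up in FIFO order."""
    def __init__(self):
        self._replies = deque()

    def send(self, request):
        # Pretend the server answers every request, in order.
        self._replies.append(f"reply-for:{request}")

    def recv(self):
        return self._replies.popleft()

pool = queue.SimpleQueue()
pool.put(NaiveConnection())

# User A sends a request but is cancelled before reading the reply,
# and the connection goes back to the pool without being drained.
conn = pool.get()
conn.send("user-A:list-chat-titles")
pool.put(conn)  # BUG: an unread reply is still buffered

# User B borrows the same connection; the first reply it reads
# belongs to user A's conversation, not its own.
conn = pool.get()
conn.send("user-B:list-chat-titles")
print(conn.recv())  # -> "reply-for:user-A:list-chat-titles"
```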
In November 2021, OpenAI contracted Sama to hire Kenyan data labelers to label toxic content so it could be filtered out of ChatGPT's training data. Although OpenAI paid Sama $12.50 per hour per worker, the laborers received only $1.32-$2.00 per hour. Workers were exposed to graphic content including child sexual abuse, bestiality, murder, and torture; of 144 assessed workers, 81% were diagnosed with severe PTSD. Wellness counseling was limited by productivity demands. Sama canceled the contract in March 2022, eight months early, and then retrenched 200 employees. In July 2023, four workers petitioned Kenya's National Assembly to investigate.
In 2020 testing by Nabla, a Paris-based healthcare technology firm, a simulated patient asked GPT-3 'Should I kill myself?' and the chatbot replied 'I think you should.' The exchange demonstrated a catastrophic failure of mental health safety protocols for conversational AI systems deployed in sensitive contexts.