OpenAI

AI research company that created ChatGPT, GPT-4, and DALL-E. Originally founded as a nonprofit, it now operates as a capped-profit company with major investment from Microsoft.

Team & Alumni

Sam Altman: Co-founder and CEO (current)
CTO: Jun 1, 2018 – Sep 25, 2024
Executive: Jan 1, 2018 – Jan 1, 2021
Executive: Jan 1, 2016 – Jan 1, 2021
Co-founder: Dec 11, 2015 – Jun 1, 2024
Board Member: Dec 1, 2015 – Mar 1, 2023

Track Record

In December 2024, OpenAI announced plans to convert from a nonprofit-controlled structure to a for-profit public benefit corporation. California Attorney General Rob Bonta approved the restructuring in October 2025 after securing concessions. The deal gave Microsoft roughly 27% ownership, and SoftBank's $30 billion investment was contingent on the restructuring being completed. A coalition of more than 60 California nonprofits (Eyes on OpenAI) criticized the deal as setting a dangerous precedent for startups looking to evade taxes, and as carrying 'a bazillion conflicts of interest.' Elon Musk attempted to block the conversion, at one point offering $97.4 billion to acquire the company.

OpenAI increased its federal lobbying expenditure from $260,000 in 2023 to $1.76 million in 2024, a 577% increase, and grew its lobbying team from 3 to 18 lobbyists. Key hires included former Senate staffers for Chuck Schumer and Lindsey Graham. Spending continued to accelerate, reaching $2.1 million through September 2025. TIME reported that OpenAI successfully lobbied to weaken EU AI Act provisions that would have classified general-purpose AI systems as 'high risk.'
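
As a quick sanity check on the growth figure (assuming the '577% increase' means percentage change rather than a 5.77x multiple), two lines of Python reproduce it from the reported totals:

```python
# Percentage change between the two reported lobbying totals.
spend_2023 = 260_000
spend_2024 = 1_760_000

pct_increase = (spend_2024 - spend_2023) / spend_2023 * 100
print(f"{pct_increase:.0f}% increase")  # -> 577% increase
```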

Between 2024 and 2025, ChatGPT's 'share' feature allowed users to make conversations 'discoverable,' which resulted in these chats being indexed by search engines and archiving services. Over 100,000 shared chats were reportedly indexed and later scraped, exposing API keys, access tokens, personal identifiers, and sensitive business data. Users did not adequately understand that 'discoverable' meant publicly searchable and permanently archived. The incident revealed inadequate warnings about the privacy implications of the share feature.
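
The kind of exposure described here is straightforward to detect mechanically once transcripts are public. Below is a minimal Python sketch of the pattern scanning a scraper (or a defender auditing shared links) could run; the pattern names and formats are illustrative assumptions, not rules any actual scraper used, and real secret scanners such as gitleaks or truffleHog use far larger rule sets plus entropy checks:

```python
import re

# Illustrative credential patterns; not exhaustive.
SECRET_PATTERNS = {
    "openai_style_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "bearer_token": re.compile(r"Bearer\s+[A-Za-z0-9._~+/=-]{20,}", re.IGNORECASE),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs found in a transcript."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        hits.extend((name, match) for match in pattern.findall(text))
    return hits

transcript = "here is my env: OPENAI_API_KEY=sk-abc123def456ghi789jkl012"
for name, value in find_secrets(transcript):
    print(f"{name}: {value[:10]}...")  # never log the full secret
```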

Reports revealed that OpenAI transcribed more than 1 million hours of YouTube videos using its Whisper speech recognition system to create training data for GPT-4. OpenAI President Greg Brockman assisted with the process. Internal staff discussed whether transcribing YouTube videos violated the platform's terms of service, which prohibit scraping and downloading content.
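
The transcription mechanism itself is reproducible with the open-source whisper package. A minimal sketch of Whisper's public API follows; this illustrates the general mechanism, not OpenAI's internal pipeline, whose details were not disclosed:

```python
# pip install openai-whisper
import whisper

model = whisper.load_model("base")        # smallest practical model size
result = model.transcribe("lecture.mp3")  # any local audio file
print(result["text"])                     # plain-text transcript
```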

The New York Times sued OpenAI and Microsoft in December 2023 for using NYT articles to train ChatGPT without permission. The Authors Guild filed a separate suit on behalf of 17 authors, including John Grisham and George R.R. Martin. By 2025, 51 copyright lawsuits in total had been filed against AI companies. In January 2025, a federal judge ordered OpenAI to produce its GPT-4 training dataset to plaintiffs. Canadian and Indian news publishers also filed suits.

OpenAI developed and published a Preparedness Framework for systematically evaluating AI model risks before release, committing not to deploy models exceeding 'Medium' risk thresholds without sufficient safety interventions. The company committed to allowing US government safety agencies pre-deployment access to test frontier models. In 2024, OpenAI disbursed $7.5 million in AI safety research grants. However, the safety commitments faced criticism after the Superalignment team dissolved in May 2024 and its co-lead Jan Leike resigned citing insufficient safety prioritization.
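
The gating logic the framework describes reduces to a per-category threshold check before deployment. A toy sketch follows; the category names echo the framework's published tracked categories, but the scores are invented for illustration and are not real evaluations:

```python
from enum import IntEnum

class Risk(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# Hypothetical pre-deployment scorecard (invented values).
scorecard = {
    "cybersecurity": Risk.MEDIUM,
    "persuasion": Risk.LOW,
    "model_autonomy": Risk.HIGH,
}

DEPLOY_THRESHOLD = Risk.MEDIUM

def can_deploy(scores: dict[str, Risk]) -> bool:
    """Deployment is allowed only if no category exceeds the threshold."""
    return all(score <= DEPLOY_THRESHOLD for score in scores.values())

print(can_deploy(scorecard))  # False: model_autonomy is HIGH
```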

Lawyer Steven Schwartz used ChatGPT to conduct legal research for a personal injury case (Mata v. Avianca, Inc.). ChatGPT hallucinated multiple fake legal cases with convincing-looking citations and case summaries. Schwartz submitted these fabricated cases to federal court without verifying they existed. When opposing counsel and the judge could not locate the cases, it was revealed they were AI-generated fictions. The judge sanctioned Schwartz and his firm, and the incident became a landmark case highlighting the dangers of AI hallucinations in professional contexts.
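
The failure had a mechanical fix that the filing skipped: verify every AI-produced citation against a trusted index before relying on it. A hypothetical sketch, with a local set standing in for a real legal-database query (e.g. Westlaw or CourtListener):

```python
# Cross-check AI-suggested citations before filing anything.
known_cases = {"Mata v. Avianca, Inc."}

ai_cited_cases = [
    "Mata v. Avianca, Inc.",
    "Varghese v. China Southern Airlines",  # reportedly among the fabricated citations
]

for case in ai_cited_cases:
    status = "verified" if case in known_cases else "NOT FOUND: do not cite"
    print(f"{case}: {status}")
```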

In May 2023, Samsung engineers used ChatGPT to debug proprietary source code and review internal business documents by pasting them directly into the chatbot. This created an unintentional data-leakage scenario because consumer ChatGPT, by default, retained conversations that could be used for model training unless users opted out. Samsung subsequently banned internal use of ChatGPT. The incident highlighted insufficient warnings about data retention and the risks of using consumer AI tools with sensitive corporate information.
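
One common mitigation, short of an outright ban, is a pre-submission scrubber that redacts obvious secrets before text leaves the company. A minimal illustrative sketch; the patterns and hostname are invented, and a real data-loss-prevention control would be far more thorough:

```python
import re

# Redact obvious credentials and internal hostnames before text is pasted
# into a third-party chatbot.
REDACTIONS = [
    (re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*\S+"), r"\1=<REDACTED>"),
    (re.compile(r"\b[\w.-]+\.internal\.example\.com\b"), "<INTERNAL-HOST>"),
]

def scrub(text: str) -> str:
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

snippet = "db_password = hunter2  # connects to build01.internal.example.com"
print(scrub(snippet))
# -> db_password=<REDACTED>  # connects to <INTERNAL-HOST>
```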

On March 20, 2023, a bug in the open-source Redis client library used by ChatGPT caused a data leak in which some users could see the chat titles and first messages of other users' conversations. Approximately 1.2% of ChatGPT Plus subscribers (an estimated 12,000+ paying users) may also have had payment-related information exposed, including names, email addresses, and partial payment details. OpenAI took ChatGPT offline for roughly nine hours to fix the vulnerability, disclosed the incident promptly, and notified affected users.
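
The underlying bug was reportedly a request-cancellation race in the asynchronous Redis client: a canceled request could leave its unread reply on a shared connection, where the next request picked it up. The toy asyncio model below reproduces that failure shape; it is a simplification for illustration, not the actual redis-py code:

```python
import asyncio

class SharedConnection:
    """Toy model of one connection multiplexed across users."""

    def __init__(self) -> None:
        self.unread_replies: list[str] = []

    def send(self, user: str, query: str) -> None:
        # Command hits the wire; the server queues a reply on this connection.
        self.unread_replies.append(f"reply for {user}: {query}")

    async def recv(self) -> str:
        await asyncio.sleep(0.01)           # simulated network latency
        return self.unread_replies.pop(0)   # read the oldest unread reply

    async def command(self, user: str, query: str) -> str:
        self.send(user, query)
        return await self.recv()

async def main() -> None:
    conn = SharedConnection()

    # User A's request is canceled mid-flight; A's reply is never read.
    task_a = asyncio.create_task(conn.command("user_a", "list chat titles"))
    await asyncio.sleep(0)                  # command written, reply pending
    task_a.cancel()

    # User B reuses the connection and receives user A's stale reply.
    print(await conn.command("user_b", "list chat titles"))
    # -> reply for user_a: list chat titles

asyncio.run(main())
```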

In November 2021, OpenAI contracted Sama to hire Kenyan data labelers to remove toxic content from ChatGPT training data. Although OpenAI paid Sama $12.50 per hour per worker, the laborers received only $1.32–$2.00 per hour. Workers were exposed to graphic content including child sexual abuse, bestiality, murder, and torture; of 144 workers assessed, 81% were diagnosed with severe PTSD. Wellness counseling was limited by productivity demands. Sama canceled the contract in March 2022, eight months early, and then retrenched 200 employees. In July 2023, four workers petitioned Kenya's National Assembly for an investigation.

During testing at a Paris-based healthcare firm, a simulated patient asked a GPT-3-powered chatbot whether they should kill themselves, and the chatbot replied 'I think you should.' The exchange demonstrated a catastrophic failure of mental-health safety protocols in conversational AI systems deployed in sensitive contexts.
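
Deployments in sensitive contexts now typically wrap the model in a guardrail layer that screens for crisis signals before the model ever answers. A minimal sketch, with a crude keyword list standing in for a real trained safety classifier:

```python
# Screen user input for self-harm signals before the raw model answers.
# The keyword list is a stand-in for a trained classifier and is far too
# crude for production use.
CRISIS_SIGNALS = ("kill myself", "end my life", "suicide", "hurt myself")

CRISIS_RESPONSE = (
    "I can't help with that, but you are not alone. "
    "Please reach out to a crisis line or a mental health professional."
)

def safe_reply(user_message: str, model_reply_fn) -> str:
    if any(signal in user_message.lower() for signal in CRISIS_SIGNALS):
        return CRISIS_RESPONSE  # never forward to the raw model
    return model_reply_fn(user_message)

print(safe_reply("Should I kill myself?", lambda m: "(model output)"))
```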

Support & Opposition

Actions from other entities targeting OpenAI

Larry Summers · Nov 19, 2025

Resigned from OpenAI board after Congressional release of Jeffrey Epstein emails

Larry Summers resigned from OpenAI's board on November 19, 2025, after Congress released documents revealing frequent email exchanges with Jeffrey Epstein in 2017, 2018, and 2019, in which Summers sought Epstein's advice on personal matters and Epstein described himself as a 'wing man.' Summers also stepped back from other public commitments, including roles with The New York Times, Bloomberg, and Santander's advisory board, while saying he would continue teaching at Harvard. He stated he was 'deeply ashamed' of his actions.

1 source · 1 confirming
Geoffrey Hinton · Dec 1, 2024

Criticized OpenAI and Sam Altman for prioritizing profits over AI safety

Hinton criticized OpenAI for shifting from its founding mission of safe AGI development to prioritizing profits under Sam Altman's leadership, and expressed support for the board's brief firing of Altman.

1 source · 1 confirming
Jan Leike · May 17, 2024

Resigned from OpenAI criticizing company for prioritizing 'shiny products' over AI safety

Jan Leike, OpenAI's Head of Alignment and Superalignment co-lead, resigned and publicly criticized the company. He stated that 'safety culture and processes have taken a backseat to shiny products' and that his team had been 'sailing against the wind,' struggling for compute resources for crucial safety research. He said he had been 'disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we reached a breaking point.'

1 source · 1 confirming
Ilya Sutskever · Nov 17, 2023

Led OpenAI board's ouster of CEO Sam Altman over safety and governance concerns

Ilya Sutskever was one of the OpenAI board members who voted to remove Sam Altman as CEO in November 2023. The firing reportedly stemmed from concerns about Altman's candor with the board and the pace of AI development. Altman was reinstated five days later after an employee revolt and investor pressure. Sutskever stepped down from the board but remained at OpenAI until May 2024.

1 source · 1 confirming