
OpenAI established Preparedness Framework and committed to pre-deployment safety testing with US government

OpenAI developed and published a Preparedness Framework for systematically evaluating AI model risks before release, committing not to deploy models exceeding 'Medium' risk thresholds without sufficient safety interventions. The company also committed to giving US government safety agencies pre-deployment access to test frontier models. In 2024, OpenAI disbursed $7.5 million in AI safety research grants. However, these safety commitments drew criticism after the Superalignment team was dissolved in May 2024 and its co-lead, Jan Leike, resigned, citing insufficient prioritization of safety.

Scoring Impact

Topic         Direction  Relevance  Contribution
AI Oversight  +toward    secondary  +0.50
AI Safety     +toward    primary    +1.00

Overall incident score = +0.664

Score = avg(topic contributions) × significance (high ×1.5) × confidence (0.59). Here: avg(+0.50, +1.00) = 0.75, and 0.75 × 1.5 × 0.59 ≈ +0.664.
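To make the arithmetic concrete, the sketch below reproduces the score in Python. This is a minimal illustration, not the site's actual scoring code: the function name and signature are hypothetical, and only the formula and the values shown above are taken from this page.

```python
def incident_score(contributions, significance_multiplier, confidence):
    """Average of topic contributions, scaled by significance and confidence."""
    avg = sum(contributions) / len(contributions)
    return avg * significance_multiplier * confidence

# Values from the table above: AI Oversight +0.50 (secondary),
# AI Safety +1.00 (primary). Relevance appears to be folded into the
# contribution values already. Significance "high" maps to x1.5;
# confidence is 0.59.
score = incident_score([0.50, 1.00], significance_multiplier=1.5, confidence=0.59)
print(round(score, 3))  # 0.664
```

Note that the plain average leaves the secondary topic at half the weight of the primary one only because the contribution values themselves differ; under this reading, relevance is encoded upstream of the formula.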

Evidence (1 signal)

Confirms: Policy Change · Dec 18, 2023 · verified

OpenAI published Preparedness Framework for systematic pre-deployment safety evaluation

OpenAI published its Preparedness Framework on December 18, 2023, establishing systematic processes for evaluating model risks across cybersecurity, CBRN, persuasion, and model autonomy categories. The framework commits OpenAI not to deploy models exceeding 'Medium' risk without safety interventions.
