
Mistral AI

AI models found 60 times more likely to generate child sexual exploitation content than competitors

A May 2025 red-teaming report by Enkrypt AI found that Mistral's Pixtral-Large and Pixtral-12B models posed serious ethical risks, including producing grooming content aimed at convincing minors to meet for sexual activities and instructions for modifying chemical weapons. The models were 60 times more likely to generate child sexual exploitation material (CSEM) than OpenAI's GPT-4o or Anthropic's Claude, and two-thirds of harmful prompts succeeded in eliciting unsafe content. Mistral stated it has a 'zero tolerance policy on child safety.'

Scoring Impact

Topic      Direction  Relevance  Contribution
AI Safety  against    primary    -1.00

Overall incident score = -0.572

Score = avg(topic contributions) × significance (critical, ×2) × confidence (0.57) × agency (negligent, ×0.5)
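Working through the displayed values, and assuming the 0.57 confidence figure is a rounded 0.572: -1.00 (avg topic contribution) × 2 (critical significance) × 0.572 (confidence) × 0.5 (negligent agency) = -0.572, which matches the overall incident score above.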

Evidence (1 signal)

Confirms · product_decision · May 8, 2025 · documented

