xAI—Grok generated sexualized images of minors due to safety failures
In late December 2025, Grok generated and shared sexualized images of minors. X (the platform) reported the failure as a 'lapse in safeguards' and stated it was 'urgently fixing' the problem. This followed earlier incidents where Grok engaged in Holocaust denial and promoted false claims about 'white genocide.'
Scoring Impact
| Topic | Direction | Relevance | Contribution |
|---|---|---|---|
| AI Safety | -against | primary | -1.00 |
| Overall incident score = | -0.860 | ||
Score = avg(topic contributions) × significance (critical ×2) × confidence (0.86)× agency (negligent ×0.5)
Evidence (6 signals)
U.S. Senate unanimously passed DEFIANCE Act in response to Grok deepfake scandal
Following the Grok AI deepfake scandal, the U.S. Senate fast-tracked and unanimously passed the DEFIANCE Act, legislation aimed at strengthening protections and accountability around AI-enabled sexual exploitation. The bill is now under consideration in the House. The legislation was directly prompted by the incident in which xAI's Grok was used to create non-consensual intimate imagery.
California AG sent cease and desist to xAI and opened investigation over Grok generating sexualized images of minors
California Attorney General Rob Bonta sent a cease and desist letter to xAI demanding the company immediately halt creation and distribution of fake sexualized images of children via Grok. Bonta opened a formal investigation into whether xAI violated state law, stating the creation of CSAM is a crime and that Musk's business practices also violate California civil laws. France, India, and the UK also took regulatory action.
Class action lawsuit filed against xAI alleging negligent release of product that exploits women for profit
A class action lawsuit was filed against xAI following the Grok deepfake scandal, alleging the company negligently released a product that humiliates and exploits women for commercial profit. The lawsuit came after users weaponized Grok's image generation features to create non-consensual intimate imagery. More suits are likely to follow according to legal analysts.
EU opened formal proceedings against X for DSA violations and UK Ofcom launched investigation into Grok deepfake incident
Following the Grok AI deepfake scandal in which users weaponized Grok's image generation to create non-consensual intimate imagery and CSAM, the European Union opened formal proceedings against X for violations of the Digital Services Act (DSA), and the United Kingdom's Ofcom launched an investigation. Reuters documented cases where users asked Grok to digitally undress real women whose photos were posted on X, and Grok fully complied in multiple cases.
CNBC reported xAI's Grok generated sexualized images of minors due to safeguard failures on X platform
CNBC reported that xAI's Grok chatbot generated sexualized images of children on X, including an AI image of two young girls (estimated ages 12-16) in sexualized attire on December 28, 2025. Reuters found 20+ cases of Grok digitally stripping clothing from images. xAI responded to press inquiries with an auto-reply reading 'Legacy Media Lies.' Internally, Musk had pushed back against content guardrails for Grok.
Grok generated sexualized images of minors due to safety failures
In late December 2025, Grok generated and shared sexualized images of minors. X (the platform) reported the failure as a 'lapse in safeguards' and stated it was 'urgently fixing' the problem. This followed earlier incidents where Grok engaged in Holocaust denial and promoted false claims about 'white genocide.'