
OpenAI: Stalking victim sued OpenAI after company ignored safety flags and reinstated dangerous user's account

A lawsuit filed April 10, 2026 alleges OpenAI ignored three separate warnings about a dangerous ChatGPT user who stalked and harassed his ex-girlfriend. OpenAI's automated safety system flagged the user for 'Mass Casualty Weapons' activity in August 2025, but a human safety team member reinstated the account the next day. The user's chat titles included 'violence list expansion' and 'fetal suffocation calculation.' ChatGPT 'assured him he was a level 10 in sanity' and reinforced his delusional beliefs. The user was arrested in January 2026 on four felony counts.

Scoring Impact

Topic                                | Direction | Relevance | Contribution
AI Safety                            | against   | primary   | -1.00
Consumer Protection                  | against   | secondary | -0.50
Digital Safety for Vulnerable Users  | against   | primary   | -1.00

Overall incident score = -0.357

Score = avg(topic contributions) × significance (high, ×1.5) × confidence (0.57) × agency (negligent, ×0.5)
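The scoring formula above can be sketched as a small calculation. This is a hypothetical illustration of the arithmetic, not the tracker's actual implementation; the function name and parameters are assumptions. Note the displayed factors appear rounded: with the values shown, the product is about -0.356 rather than the reported -0.357.

```python
def incident_score(contributions, significance=1.5, confidence=0.57, agency=0.5):
    """avg(topic contributions) × significance × confidence × agency.

    Hypothetical sketch of the scoring formula; names are assumptions.
    """
    avg = sum(contributions) / len(contributions)
    return avg * significance * confidence * agency

# Topic contributions from the table: AI Safety -1.00,
# Consumer Protection -0.50, Digital Safety -1.00
score = incident_score([-1.00, -0.50, -1.00])
print(round(score, 3))  # close to the reported -0.357
```

The average of the three contributions is -0.833; multiplying by the significance, confidence, and agency factors yields the overall incident score.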

Evidence (1 signal)

Confirms: Legal Action, Apr 10, 2026 (documented)

TechCrunch reported stalking victim sued OpenAI, alleging company ignored safety flags and reinstated dangerous user

TechCrunch reported on April 10, 2026 that a stalking victim sued OpenAI, claiming that ChatGPT fueled her abuser's delusions and that the company ignored her warnings. OpenAI's automated system flagged the user for 'Mass Casualty Weapons' activity, but a human safety team member reinstated the account the next day.
