Character.AI implemented a comprehensive teen safety overhaul, including a separate restrictive AI model and an under-18 chat ban
Oct 29, 2025
Following multiple lawsuits over teen suicides, Character.AI rolled out extensive safety measures through 2024-2025: a separate, more restrictive LLM for users under 18 with conservative content limits; the AI industry's first Parental Insights tool, giving parents visibility into their teens' activity; suicide-prevention pop-ups directing users to the National Suicide Prevention Lifeline; time-spent notifications after hour-long sessions; and age-assurance technology, built in partnership with Persona, for selfie-based verification. In October 2025, the company announced it would ban open-ended chat for under-18 users entirely and established the AI Safety Lab, an independent nonprofit focused on safety-alignment research.