Character.AI implemented a comprehensive teen safety overhaul, including a separate restrictive AI model and an under-18 chat ban
Following multiple teen-suicide lawsuits, Character.AI rolled out extensive safety measures through 2024-2025: a separate, more restrictive LLM with conservative content limits for users under 18; the AI industry's first Parental Insights tool, giving parents visibility into teen activity; suicide-prevention pop-ups directing users to the National Suicide Prevention Lifeline; time-spent notifications after hour-long sessions; and selfie-based age assurance verification in partnership with Persona. In October 2025, the company announced it would ban open-ended chat for under-18 users entirely and established the AI Safety Lab, an independent nonprofit focused on safety alignment research.
Scoring Impact
| Topic | Direction | Relevance | Contribution |
|---|---|---|---|
| AI Safety | +toward | secondary | +0.50 |
| Child Safety | +toward | primary | +1.00 |
| Overall incident score | | | +0.498 |
Score = avg(topic contributions) × significance (high, ×1.5) × confidence (0.59) × agency (reactive, ×0.75). Worked out: avg(+1.00, +0.50) = 0.75, and 0.75 × 1.5 × 0.59 × 0.75 ≈ +0.498.
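To make the aggregation explicit, here is a minimal Python sketch of the scoring rule above. The function name `incident_score` and the multiplier tables are illustrative assumptions; only the high (×1.5) and reactive (×0.75) values appear in the text.

```python
# Minimal sketch of the incident-scoring rule described above.
# Multiplier tables are assumptions for illustration; only the
# "high" (x1.5) and "reactive" (x0.75) values come from the text.

SIGNIFICANCE = {"low": 0.5, "medium": 1.0, "high": 1.5}
AGENCY = {"proactive": 1.0, "reactive": 0.75}

def incident_score(contributions, significance, confidence, agency):
    """Average per-topic contributions, then apply the three multipliers."""
    avg = sum(contributions) / len(contributions)
    return avg * SIGNIFICANCE[significance] * confidence * AGENCY[agency]

# Values from the table: Child Safety +1.00 (primary), AI Safety +0.50 (secondary)
score = incident_score([1.00, 0.50], significance="high",
                       confidence=0.59, agency="reactive")
print(f"{score:+.3f}")  # +0.498
```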
Evidence (1 signal)
Character.AI announced ban on under-18 open-ended chat, launched AI Safety Lab nonprofit
Character.AI announced it would remove open-ended chat for users under 18 by November 25, 2025, and established the AI Safety Lab, an independent nonprofit focused on safety alignment research. The company also partnered with Persona for age assurance technology and implemented the industry's first Parental Insights tool.