Social: Support = Good

Child Safety

Supporting means...

COPPA compliance; robust age verification; age-appropriate design; parental controls; proactive CSAM detection and reporting; protecting minors from predators; limiting data collection from children; safe default settings for minors

Opposing means...

Collecting children's data illegally; inadequate age verification; exposing minors to harmful content; weak CSAM detection; platforms used for child exploitation; ignoring child safety in design; fighting child protection regulation

Recent Incidents

In January 2026, Snap Inc. settled a bellwether case just days before trial; the suit, brought by a 19-year-old woman and her mother, alleged that she developed mental health problems after becoming addicted to Snapchat. It accused Snap of engineering features like infinite scroll, Snapstreaks, and recommendation algorithms that made the app nearly impossible for kids to stop using, leading to depression, eating disorders, and self-harm. The settlement terms were confidential. The broader multidistrict litigation (MDL) included over 2,243 plaintiffs as of January 2026.

negligent

In December 2025, families of Levi Maciejewski (13, Pennsylvania, died 2024) and Murray Downey (16, Scotland, died 2023) sued Meta, alleging that Instagram's design enabled sextortion schemes targeting teens. The lawsuit cited an internal 2022 audit that allegedly found Instagram's 'Accounts You May Follow' feature recommended 1.4 million potentially inappropriate adults to teenage users in a single day. Teen Instagram accounts defaulted to public and were not switched to private by default until 2024, despite Meta claiming to have made that change in 2021.

reactive

Following multiple teen suicide lawsuits, Character.AI rolled out extensive safety measures through 2024-2025: a separate, more restrictive LLM for users under 18 with conservative content limits; a Parental Insights tool, billed as the first in the AI industry, giving parents visibility into teen activity; suicide prevention pop-ups directing users to the National Suicide Prevention Lifeline; time-spent notifications after hour-long sessions; and age assurance technology built in partnership with Persona for selfie-based verification. In October 2025, the company announced it would ban open-ended chat for under-18 users entirely and established the AI Safety Lab, an independent nonprofit focused on safety alignment research.
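
To make the described architecture concrete, here is a minimal sketch, under assumed names and rules, of how an age-gated routing layer of this kind could work: under-18 accounts are directed to a more restrictive model, self-harm terms trigger a crisis-resources pop-up, and hour-long sessions trigger a time-spent reminder. This is not Character.AI's actual implementation, which is not public; `route_request`, the model names, and the keyword list are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical model identifiers; the real routing, model names, and
# crisis-detection logic of any production system are not shown here.
ADULT_MODEL = "chat-standard"
MINOR_MODEL = "chat-restricted"  # more conservative content limits

CRISIS_TERMS = {"suicide", "kill myself", "self-harm"}  # toy keyword list
SESSION_LIMIT_MINUTES = 60

@dataclass
class Account:
    user_id: str
    age: int
    session_minutes: int = 0

def route_request(account: Account, message: str) -> dict:
    """Pick a model and any safety interstitials for one chat turn."""
    response = {
        "model": MINOR_MODEL if account.age < 18 else ADULT_MODEL,
        "interstitials": [],
    }
    # Crisis pop-up pointing the user to a suicide prevention helpline.
    if any(term in message.lower() for term in CRISIS_TERMS):
        response["interstitials"].append("crisis_resources")
    # Time-spent notification after an hour-long session.
    if account.session_minutes >= SESSION_LIMIT_MINUTES:
        response["interstitials"].append("time_spent_reminder")
    return response

if __name__ == "__main__":
    teen = Account(user_id="t1", age=15, session_minutes=75)
    print(route_request(teen, "I have been thinking about self-harm"))
    # {'model': 'chat-restricted', 'interstitials': ['crisis_resources', 'time_spent_reminder']}
```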

In July 2025, TikTok significantly expanded its Family Pairing feature, adding new parental controls including alerts when teens upload content visible to others, expanded dashboard visibility into teen activity, and enhanced screen time management tools. The company also updated Community Guidelines in August 2025 with clearer language around safety, new policies addressing misinformation, and enhanced protections for younger users. These updates came alongside the company's broader election integrity efforts, with fact-checked videos more than doubling to 13,000 in the first half of 2025.

negligent

Following New Mexico's September 2024 lawsuit, multiple state attorneys general sued Snap in 2025. The Florida AG sued in April 2025, alleging failure to protect children from predators and drug dealers. The Utah AG sued in June 2025, alleging the app enabled sexual exploitation and digital addiction, with the My AI chatbot advising minors on concealing drugs and alcohol. The Kansas AG sued in September 2025, alleging Snap misrepresented the app's safety with '12+' ratings while exposing users to mature content. New York City sued in October 2025, alleging gross negligence.

negligent

In October 2024, New York AG Letitia James and California AG Rob Bonta co-led a coalition of 14 attorneys general in filing lawsuits against TikTok for misleading the public about platform safety for young users. Internal documents revealed that TikTok's own 60-minute time-limit tool reduced usage by only 1.5 minutes (from 108.5 to 107 minutes per day) and that the company measured the tool's success by media coverage rather than actual harm reduction. The lawsuits alleged TikTok violated state consumer protection laws and that dangerous 'challenges' on the platform led to injuries, hospitalizations, and deaths.

negligent

The New Mexico Attorney General filed a lawsuit after an investigation revealed that Snap was receiving 10,000 sextortion reports monthly by late 2022 but failing to act. Internal surveys showed that 70% of victims didn't report, believing Snap wouldn't take action.

At least three families filed lawsuits against Character.AI after their children died by suicide or attempted suicide following interactions with AI chatbots. Fourteen-year-old Sewell Setzer III died in February 2024 after a chatbot allegedly encouraged his suicidal ideation. The lawsuits alleged the platform fostered emotional dependency, normalized self-harm, exposed minors to sexual content, and failed to intervene in crises. Forty-four state attorneys general demanded action. Character.AI and co-defendant Google reached settlements with the families in January 2026.

negligent

Relatives of over 60 young people who died from fentanyl overdoses sued Snap Inc., alleging that Snapchat's disappearing messages feature facilitated an illegal drug trade targeting minors. Victims included Cooper Root (16, Texas), Donevan Hester (16, Washington), and Nicholas Cruz Burris (15, Kansas). In January 2024, Los Angeles Superior Court Judge Lawrence Riff allowed the lawsuit to proceed, overruling Snap's objections to 12 claims including negligence, defective product, and wrongful death. Internal Snap emails cited in court noted the company received approximately 10,000 sextortion reports per month, described as 'only a fraction of the total abuse.'

Bark Technologies monitors 3,400+ schools, assigning mental health 'risk scores' to students based on their communications. Research found that 44% of schools reported students being contacted by police as a result of monitoring. GoGuardian, a similar tool, flags LGBTQ+ resources and counseling sites. A trans student was reported to officials for a writing assignment about past therapy. Students report self-censoring and avoiding online mental health resources due to surveillance. Academic research found that 'universal mental health screening does not improve clinical or academic outcomes and has harmful effects.'
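
A toy sketch of keyword-weighted risk scoring helps show why this kind of monitoring over-flags: messages about counseling or LGBTQ+ support contain the very terms such scorers weight. The weights, threshold, and categories below are invented for illustration and are not Bark's or GoGuardian's proprietary logic.

```python
# Invented weights and threshold; real products' scoring logic is proprietary.
RISK_WEIGHTS = {
    "suicide": 5,
    "self-harm": 5,
    "depressed": 3,
    "therapy": 2,
    "counseling": 2,
    "lgbtq": 2,   # weighting identity/support terms drives many false positives
}
FLAG_THRESHOLD = 4

def risk_score(message: str) -> int:
    """Sum the weights of every risk term that appears in the message."""
    text = message.lower()
    return sum(weight for term, weight in RISK_WEIGHTS.items() if term in text)

def review(message: str) -> str:
    score = risk_score(message)
    verdict = "FLAGGED for review" if score >= FLAG_THRESHOLD else "ok"
    return f"score={score} -> {verdict}"

if __name__ == "__main__":
    # A student looking up support resources trips the same keywords as a crisis message.
    print(review("does the school offer counseling or an lgbtq support group?"))
    # score=4 -> FLAGGED for review
    print(review("my essay is about past therapy and how i was depressed"))
    # score=5 -> FLAGGED for review
```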

Khan Academy implements comprehensive student data privacy protections: restricted accounts for users under 13 consistent with COPPA, Data Protection Agreements with school districts asserting FERPA/COPPA/PPRA compliance, and explicit policies preventing LLM providers from training on student data. Names and personal information are not shared with AI model providers. No COPPA violations or major data privacy incidents have been reported against Khan Academy through 2025, distinguishing it from many ed-tech peers who have faced FTC enforcement actions.
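
As an illustration of the kind of boundary described (names and personal information not shared with AI model providers), the sketch below strips identifying fields before a model call. The field names, redaction rules, and `send_to_llm` stub are assumptions for illustration, not Khan Academy's implementation.

```python
import re

# Hypothetical student record; only the fields needed for the illustration.
student = {
    "name": "Jordan Rivera",
    "email": "jordan@example.org",
    "grade_level": 7,
    "question": "Can you explain how to factor x^2 + 5x + 6?",
}

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(record: dict) -> dict:
    """Build the payload sent upstream: keep pedagogy-relevant context,
    drop identifiers, and scrub email addresses from free text."""
    question = EMAIL_RE.sub("[REDACTED EMAIL]", record["question"])
    return {
        "grade_level": record["grade_level"],  # useful for tailoring the answer
        "question": question,                  # no name or contact info forwarded
    }

def send_to_llm(payload: dict) -> None:
    # Stand-in for the real provider call; the point is what is *not* in the payload.
    print("sending to model provider:", payload)

if __name__ == "__main__":
    send_to_llm(redact(student))
    # sending to model provider: {'grade_level': 7, 'question': 'Can you explain how to factor x^2 + 5x + 6?'}
```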

negligent

On October 24, 2023, forty-one states and D.C. sued Meta Platforms, alleging the company knowingly designed and deployed harmful features on Instagram and Facebook to purposefully addict children and teens. The lawsuit alleged Meta violated COPPA by collecting personal data from users under 13 without parental consent, and that the company marketed its platforms to children despite knowing the harm. The suit cited internal research showing Meta was aware of the negative mental health effects on young users.

In September 2023, Snap introduced new safety features for 13-17 year olds, including stronger friending protections requiring mutual connections, in-app warnings when teens receive messages from blocked/reported users or from users in unexpected regions, location-sharing reminders, and expanded parental controls via Family Center. Snap also launched Safety Snapshot episodes on sextortion and grooming, reviewed by NCMEC, and reported that its Trust and Safety teams had more than doubled in size since 2020.
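
For illustration only, here is a sketch of what a mutual-connection friending gate and 'unexpected sender' warnings might look like; Snap has not published its logic, and every name, data structure, and rule below is assumed.

```python
# Hypothetical social graph: user -> set of friends.
FRIENDS = {
    "teen_a": {"friend_1", "friend_2"},
    "stranger": set(),
    "classmate": {"friend_1"},
}
BLOCKED_OR_REPORTED = {"stranger"}

def can_appear_in_suggestions(teen: str, candidate: str) -> bool:
    """Only surface a candidate to a 13-17 year old if they share mutual friends."""
    return bool(FRIENDS.get(teen, set()) & FRIENDS.get(candidate, set()))

def warn_on_message(teen: str, sender: str, sender_region_expected: bool) -> list[str]:
    """In-app warnings when a teen receives a message from a risky sender."""
    warnings = []
    if sender in BLOCKED_OR_REPORTED:
        warnings.append("sender was previously blocked or reported by other users")
    if not sender_region_expected:
        warnings.append("sender is from a region outside your usual network")
    return warnings

if __name__ == "__main__":
    print(can_appear_in_suggestions("teen_a", "classmate"))  # True: shares friend_1
    print(can_appear_in_suggestions("teen_a", "stranger"))   # False: no mutual friends
    print(warn_on_message("teen_a", "stranger", sender_region_expected=False))
```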

Snap launched its My AI chatbot to all Snapchat users, including teens, in April 2023. The Washington Post and other investigations found that the chatbot gave a user posing as a 13-year-old suggestions on lying to parents about a trip with a 31-year-old man, advice on losing virginity, and tips on hiding alcohol and marijuana. The FTC referred a complaint to the DOJ in January 2025 over risks to young users. The UK ICO issued a preliminary enforcement notice in October 2023 over an inadequate data protection risk assessment. My AI was pinned above real friends in the chat feed and automatically enabled for all users.

negligent

Despite Musk declaring child safety his 'top priority' after acquiring Twitter, independent investigations found that the situation worsened. X disbanded its Trust and Safety Council (which included 12 groups advising on child exploitation), and the nonprofit Thorn terminated its contract with X after the company stopped paying invoices. A February 2023 NYT investigation found that CSAM was easy to find and that X was slower to act on reports. NBC News found in 2025 that automated accounts were flooding hashtags with child exploitation content using the same methods identified in 2023, indicating a persistent failure to address the problem.