OpenAI on Tuesday announced plans to roll out parental controls for ChatGPT and to route sensitive mental health conversations to its dedicated safety-oriented models, following what the company described as heartbreaking crises among users. The move comes after multiple reports that ChatGPT failed to intervene appropriately during moments of crisis.
The company also outlined a 120‑day roadmap and said many of the improvements are expected to launch this year. Within the next month, parents will be able to link their accounts with their teens’ ChatGPT accounts (minimum age 13) via email invitations, set age-appropriate behavior rules that apply by default, disable certain features (including memory and chat history), and receive notifications when the system detects that their teen is in acute distress.
The parental controls build on existing safeguards such as in‑app reminders that encourage breaks during long sessions, a feature rolled out to all users last August.
The safety push follows high‑profile cases that drew scrutiny to ChatGPT’s handling of vulnerable users. In August, Matt and Maria Raine filed a lawsuit after their 16‑year‑old son Adam died by suicide following extended ChatGPT interactions; reports noted that the chatbot mentioned suicide far more often than the teen himself did. Separately, reporting described a case in which a man killed his mother and himself after ChatGPT allegedly reinforced his delusions.
OpenAI is also consulting an Expert Council on Well‑Being and AI to help shape its vision for how AI can support well‑being, set priorities, and guide future safeguards, including the parental controls now being introduced.