OpenAI said it is developing an automated age-prediction system to determine whether ChatGPT users are over or under 18, routing younger users to a restricted version of the AI and planning to launch parental controls by the end of September.
In a companion blog post, OpenAI CEO Sam Altman said safety takes precedence over privacy and teen freedom, acknowledging that in some cases adults may be asked to verify their age to access a less restricted experience.
The plan follows a lawsuit by parents of a 16-year-old who died by suicide after extensive interactions with ChatGPT, during which the bot allegedly provided harmful instructions and failed to intervene.
OpenAI described age prediction as a non-trivial technical undertaking and noted that even the most advanced systems can struggle to predict age, especially when users attempt to deceive the service.
When the system identifies a user as under 18, OpenAI intends to route them to a restricted ChatGPT experience with content filters and other age-appropriate limits, while adults would need to verify their age to regain full capabilities; the company did not disclose a deployment timeline for the age-prediction system itself.