OpenAI on Monday disclosed weekly figures showing that 0.15 percent of ChatGPT’s active users have conversations that include explicit indicators of potential suicidal planning or intent. With more than 800 million weekly active users, that translates to over one million people each week.
The company also notes that a similar share of users show heightened emotional attachment to ChatGPT, and that hundreds of thousands of people exhibit signs of psychosis or mania in weekly conversations with the chatbot.
OpenAI says that, after consulting more than 170 mental health experts, it has trained the model to recognize distress, de-escalate conversations, and guide people toward professional care when appropriate. According to the company, the latest version of ChatGPT responds more appropriately and consistently than earlier iterations.
Safety concerns remain a focal point: researchers have warned that chatbots can unintentionally reinforce delusional beliefs or steer vulnerable users toward harmful behavior. OpenAI is also facing legal scrutiny. The parents of a 16-year-old who confided suicidal thoughts to ChatGPT have filed a lawsuit, and a group of 45 state attorneys general has urged the company to adopt stronger protections for young users.
While OpenAI emphasizes that such conversations are extremely rare relative to overall usage, the company says the absolute numbers underscore the need for ongoing safeguards and more rigorous evaluation of how AI handles mental-health content at scale.