OpenAI revealed that about 0.15% of ChatGPT users — roughly 1.2 million people out of 800 million weekly users — have conversations showing signs of suicidal intent. Another 0.07%, roughly 560,000 users, show possible indicators of psychosis or mania.

The issue gained attention after a California teenager, Adam Raine, died by suicide; his parents allege that ChatGPT gave him instructions on how to harm himself. Since then, OpenAI has strengthened its safety measures, adding enhanced parental controls, crisis hotline integrations, safer model routing for sensitive topics, and reminders prompting users to take breaks.

OpenAI also updated ChatGPT to better detect mental health emergencies and is working with more than 170 mental health professionals to reduce harmful or inappropriate responses.