Reported by Gold 101.3 FM, UAE’s No.1 Malayalam Radio Station

As artificial intelligence becomes a bigger part of everyday life, OpenAI has introduced a new safety feature in ChatGPT aimed at supporting users during serious emotional distress and potentially life-threatening situations.

The new feature, called Trusted Contact, allows adult ChatGPT users to nominate a trusted person — such as a family member, friend or caregiver — who can be alerted if the system detects signs of severe self-harm risk during conversations with the AI chatbot.

The move represents one of the strongest steps yet by an AI company to extend digital safety measures into real-world support systems.

How the Trusted Contact Feature Works

Users can add one trusted adult contact directly through their ChatGPT settings.

If OpenAI’s automated safety systems identify conversations that may involve serious self-harm concerns, ChatGPT will first encourage the user to seek support directly. The conversation may then be reviewed by a trained human safety team.

If a serious risk is confirmed, OpenAI may send a brief alert to the designated trusted contact through email, text message or an in-app notification. The company says no private chat transcripts or detailed conversation content will be shared during the process.

Part of a Wider Safety Initiative

The new feature is part of OpenAI’s broader efforts to strengthen safety protections within ChatGPT.

Last year, the company introduced additional parental controls and distress-detection systems intended to better identify vulnerable users, especially teenagers who may be facing emotional difficulties.

OpenAI also revealed that it collaborated with more than 170 mental health experts to improve how ChatGPT handles sensitive conversations. The focus has been on de-escalation techniques, emotional support and directing users toward professional help and crisis resources when needed.

Growing Pressure on AI Companies

The launch comes at a time when technology companies are facing increasing scrutiny over how AI systems interact with vulnerable individuals.

Recent reports from major international media outlets have intensified debate around AI responsibility, user privacy and the ethics of intervening in situations involving emotional distress, self-harm or violent thoughts.

Several major technology firms, including Meta, have also expanded AI-based safety systems aimed at protecting teenagers and at-risk users across their platforms.

The Future of AI Support Systems

OpenAI’s Trusted Contact feature reflects a broader shift in how artificial intelligence is expected to operate.

Future AI assistants may no longer simply answer questions or generate content. Instead, they could increasingly be expected to recognise signs of crisis, respond responsibly and help connect users with trusted people and professional support systems in the real world.