OpenAI Adds ‘Trusted Contact’ Feature to ChatGPT for Mental Health Safety Alerts

In May 2026, OpenAI announced an optional safety feature for ChatGPT that lets adult users designate an emergency contact who can be notified if the platform detects signs of a mental health crisis during a conversation.

The feature, called “Trusted Contact,” lets any adult ChatGPT user add the contact details of another adult (18 or older globally, or 19 or older in South Korea) directly through their account settings. The designated contact must accept the invitation within one week. Both the user and the Trusted Contact can remove or change the arrangement at any time.

If OpenAI’s automated systems detect that a user may be discussing self-harm or suicide, ChatGPT will prompt the user to reach out to their Trusted Contact and warn that the contact may be notified. A “small team of specially trained people” will then review the situation, according to OpenAI. If reviewers determine the conversation indicates a serious safety concern, the Trusted Contact receives a brief notification by email, text message, or in-app alert. OpenAI says the notification will not include chat details or transcripts.

“Trusted Contact is designed around a simple, expert-validated premise: when someone may be in crisis, connecting with someone they know and trust can make a meaningful difference,” OpenAI said in its announcement.

The feature builds on an emergency contact capability OpenAI introduced alongside parental controls in September 2025, which followed the death of a 16-year-old who had spent months confiding in ChatGPT. Meta has introduced a comparable feature on Instagram that alerts parents if their children repeatedly search for self-harm-related content.

OpenAI described Trusted Contact as an additional layer of support alongside localized crisis helplines already available within ChatGPT.

Source: The Verge

This article was generated by AI and cites original sources.