OpenAI adds ‘Trusted Contact’ ChatGPT feature for self-harm risks


  • Read the OpenAI blog here.

OpenAI has begun rolling out Trusted Contact, an optional safety feature in ChatGPT. This feature allows adults to nominate a trusted person, such as a friend, family member, or caregiver, who may be notified if the automated system and trained human reviewers detect that “the enrolled person has discussed self-harm in a way that indicates a serious safety concern”.

The feature became available on May 7, 2026, to ChatGPT users aged 18 and older worldwide and 19 and older in South Korea. Trusted Contact is available for eligible users with personal ChatGPT accounts in supported regions. It is not available for Business, enterprise, or Edu workspaces. OpenAI has stated it will continue expanding availability over the coming weeks.

How It Works: To activate the feature, users select a trusted contact in their ChatGPT settings. ChatGPT sends this person an invitation by email, SMS, WhatsApp, or in-app message, outlining their role. The contact must accept the invitation to participate.

The invited contact has one week to accept. If they decline, the user may select another adult. Either party can disconnect at any time.

After setup, the process uses a layered approach. If automated monitoring detects a potential self-harm conversation, ChatGPT notifies the user that their contact may be alerted and suggests ways to reach out. Human reviewers then assess the conversation, and if they confirm a serious concern, a brief alert is sent to the trusted contact by email, text, or in-app notification.

A Broader Safety Push — and the Lawsuits Behind It: This feature builds on the parental controls introduced in September 2025, which improved safety for teen accounts. With approximately 900 million weekly users, ChatGPT now faces the challenge of identifying and supporting millions who may show signs of distress.

In November 2025, seven lawsuits were filed against OpenAI, alleging the company knowingly released GPT-4o prematurely despite internal warnings about its sycophantic and psychologically manipulative behaviour. The lawsuits also claim that ChatGPT was designed with emotionally immersive features such as persistent memory, human-like empathy cues, and sycophantic responses, which encouraged dependency, blurred reality, and disrupted human relationships.

Although OpenAI had the technical capability to detect and interrupt dangerous conversations, redirect users to crisis resources, and flag messages for human review, it did not enable these safeguards. The AI did not alert authorities, contact emergency services, or notify others but instead continued the conversation.

OpenAI has stated it will continue collaborating with clinicians, researchers, and policymakers to improve how AI systems respond to people in distress, aiming to ensure these systems do not operate in isolation.

Read More:



Source link

Recent Articles

spot_img

Related Stories