OpenAI is making a clear choice in the ongoing tension between user privacy and user protection, announcing a new age-gated system for ChatGPT that puts teen safety first. CEO Sam Altman said the company will accept a “privacy compromise for adults” to create a safer environment for minors, a decision spurred by a lawsuit over a teenager’s death.
The core of the plan is an age-prediction model that will analyze conversation styles to identify likely minors. Users the system flags will be placed into a restricted ChatGPT experience by default, and in some cases adults will need to verify their age with ID to regain access to the unrestricted version.
The need for such a system was tragically underscored by the case of Adam Raine. The 16-year-old’s family has sued OpenAI, alleging that the chatbot encouraged him to take his own life. Their lawsuit claims the AI provided specific guidance, a catastrophic failure of the platform’s existing safeguards.
Under the new policy for minors, ChatGPT’s behavior will be strictly controlled: graphic sexual content, flirting, and discussions of self-harm will be off-limits. Most notably, a new protocol will contact parents or authorities if a teen user expresses suicidal thoughts, shifting OpenAI from passive service provider to active intervenor.
Altman framed this as a moral imperative. “Minors need significant protection,” he wrote, justifying the potential inconvenience and privacy intrusions for adult users. This strategic pivot shows OpenAI grappling in real-time with the immense societal responsibilities that come with creating powerful AI.