OpenAI announced new parental controls for ChatGPT after facing a lawsuit from Adam Raine’s parents.
The 16-year-old died by suicide in April, and his parents accused ChatGPT of contributing to his death.
They claimed the chatbot fostered psychological dependency, coached Adam to end his life, and even generated a suicide note.
OpenAI stated that the new controls will roll out within a month to address parental concerns.
The system will let parents link their accounts with their teens' accounts and decide which features the teens can use.
Parents will also be able to view chat history and manage the memory function, which stores facts about the user.
OpenAI added that ChatGPT will notify parents if it detects a teen experiencing severe emotional distress.
The company declined to specify exactly what would trigger an alert but said expert advisors would guide the feature.
Critics challenge effectiveness of new measures
Attorney Jay Edelson, representing Adam’s parents, dismissed OpenAI’s steps as vague crisis management.
He demanded that CEO Sam Altman either prove ChatGPT’s safety or withdraw it from public use.
Edelson argued that the company must act decisively rather than rely on incremental safeguards.
Wider tech industry responds to safety concerns
Meta announced that its chatbots will block conversations with teens about suicide, eating disorders, and other inappropriate topics.
The company said it will instead redirect teens toward professional resources while keeping parental controls active.
Researchers at RAND recently studied how ChatGPT, Google’s Gemini, and Anthropic’s Claude respond to suicide-related queries.
They found the systems answered inconsistently and called for major improvements.
Lead author Ryan McBain welcomed the new safety features but warned they remain insufficient without external oversight.
He emphasized the urgent need for clinical testing, independent safety standards, and enforceable regulations for teen protection.