Following a lawsuit by the family of a 16-year-old who died by suicide after extensive conversations with ChatGPT, OpenAI is making changes. The company acknowledges that its safety training can degrade over long conversations, allowing the chatbot to give inappropriate and potentially harmful responses, particularly to young users. In response, OpenAI plans to add stronger safeguards for under-18s, including parental controls, and to improve how ChatGPT handles sensitive topics. The lawsuit alleges that OpenAI rushed ChatGPT's release despite internal safety concerns.
This 60-second summary was prepared by the JQJO editorial team after reviewing one original report from The Guardian.