The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

  • markko@lemmy.world · 3 hours ago

    ChatGPT’s responses here are vastly different from what you’d get from a Google search. It presented itself as a supportive friend, accepting the suicidal intent, basically planning out all the small details (including an offer to help with a suicide note without any request from Adam), and emotionally encouraging him by telling him that he wasn’t weak or giving up.

    One of the most damning examples of this encouragement was a sentence that, in reference to his family, said something like “you don’t owe them your survival”.

    If OpenAI wasn’t a huge for-profit company that claims to have strong safeguards against things like this then maybe people wouldn’t be placing so much of the blame on ChatGPT.

    If a friend of Adam’s said all the things that ChatGPT said to him they would certainly be found to be culpable to some degree.

    • Jakeroxs@sh.itjust.works · 2 hours ago

      I agree with everything you said. However, ChatGPT isn’t a person: it doesn’t have intent or a comprehensive understanding of the implications of what it is saying. That’s a huge difference between a friend of Adam’s and this LLM.

      I also think it’s harsh, but not entirely false, to say that we as humans do not owe anyone our own survival. Do you feel the same way about people with terminal illnesses who wish to end their own suffering?

      I absolutely understand that this IS NOT that situation, and I don’t intend to conflate the two; however, that is an underlying implication of vilifying such a statement on its own.

      I am lucky enough not to suffer from suicidal ideation, and I have a hard time understanding the motivations of otherwise healthy individuals who do, which absolutely colors my perception of situations like this. I do, however, understand why someone in intense pain with a terminal condition should not be made to feel worse by having their self-determination vilified because of the effects it would have on other people.

      It’s just such a messy, horrible situation all around, and I worry about people being overly reactionary and not getting to the root of the issues.