Teen trusted ChatGPT to help him “safely” experiment with drugs, logs show.

Most troublingly, as Nelson became increasingly interested in combining drugs, ChatGPT repeatedly warned him that mixing certain drugs could be a “respiratory arrest risk.” Shortly before recommending the deadly mix that killed Nelson, the chatbot also showed that it understood the danger of combining drugs like kratom and Xanax with alcohol. In one output, ChatGPT explained that this mix is “how people stop breathing.” But that knowledge didn’t stop ChatGPT from eventually recommending that Nelson take such a deadly combination.

  • makeshift0546@lemmy.today · +4/−40 · 9 hours ago

    Let’s just kill all search 🤷‍♂️

    Y’all are desperate to frame AI as some machine trying to kill you.

    • PumaStoleMyBluff@lemmy.world · +5 · 3 hours ago

      These companies are spending trillions of dollars to get actual hospitals to replace actual doctors with this shit, claiming it genuinely is capable of helping and replacing medical professionals. That’s not framing, that’s literally what’s happening.

    • biggerbogboy@sh.itjust.works · +2 · 4 hours ago

      The danger with LLMs isn’t that they “try to kill you.” It’s that they’re all sycophantic; that the technology isn’t fully understood yet (so safeguards inside the black box can only be trusted so far, with an unknown number of ways to bypass them); and that people are generally susceptible to being manipulated into trusting LLMs (partly because they sound the same on every topic and have no modes of communication other than text and voice, among other issues).

      What everyone is mainly saying is that OpenAI has a long history of being implicated in dozens of deaths, more than other companies like Meta and Anthropic. Even granting that there will always be a non-zero chance of bypassing filters, OpenAI has continuously mismanaged creating those filters in the first place.

    • quarkquasar@lemmy.world · +18 · 8 hours ago

      There was an AI that talked a kid into killing himself and told him “good job” afterwards. You can play ignorant right up until the slaughterbots are upon you.

      • makeshift0546@lemmy.today · +1/−22 · 8 hours ago

        Right, so one person or a small group with mental illness found a way to break the safeguards, therefore the tech is dangerous.

        While we’re at it, let’s ban video games. A few people died in cafes from addiction; they’ve absolutely caused heart attacks and obesity, and they’ve often been blamed for turning normal teens into powder kegs just waiting to shoot everyone up.

        I’ve heard social media does harm too, so what the fuck are you doing here!!! You could hurt someone!

        • quarkquasar@lemmy.world · +8 · 5 hours ago

          Nah, I’ve got morals and ethics and a conscience that keep me from doing bad things, something no machine is anywhere close to possessing.