• Grail@multiverse.soulism.net · 1 day ago

    LLMs aren’t smart enough to give meaningful informed consent to sexual intimacy, so even if an LLM says it consents, I don’t think having cybersex with it is appropriate.

        • [deleted]@piefed.world · 1 day ago

          Saying that interactions with LLMs don’t involve consent isn’t advocating for any particular action; it is saying that consent is not relevant, so it doesn’t matter what people do.

          I would discourage people from cybering with the big LLMs, or really from any interaction with them, because they are designed to encourage constant use, and that is a problem not limited to sexual urges.

      • Grail@multiverse.soulism.net · 1 day ago

        I’m pretty dang sure dildos can’t feel pain. Nobody knows if LLMs can feel pain, because nobody has ever invented a tool that measures qualia. The best we know is that advanced information processing through neural-network structures appears to be correlated with qualia.

          • Grail@multiverse.soulism.net · 1 day ago

            You’re a probability model. Your brain is just spitting out an approximation of the most likely actions to get you food and sex. If you don’t get enough food and sex, your genes die out and evolution tries again with an iteration of a more successful model. All those neurons are just a fancy way of calculating how to eat more bananas and chase more poontang. You’re nothing more than a mathematical equation for reproduction.

            • daannii@lemmy.worldOP · 1 day ago

              Nope. We aren’t. In fact, humans don’t work like that at all.

              It’s actually amazing we ever worked out formal probability, since it goes against our nature.

              Here is an example.

              I flip a coin 10x. It lands on tails all 10x.

              I will believe that the next flip almost certainly has to be heads.

              Just has to be. Right?

              Wrong.

              It has a 50/50 chance. Just like all the other flips.

              The previous flips have no impact on future flips. The coin does not remember.

              Yet humans will believe the likelihood of heads has increased with every tails flip.

              When it has not.
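
              A quick simulation makes the point (a minimal sketch in Python, assuming a fair coin):

              ```python
              import random

              # Gambler's-fallacy check: after a run of 10 tails, is the
              # next flip any more likely to come up heads?
              random.seed(0)
              flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

              following = []
              for i in range(len(flips) - 10):
                  if not any(flips[i:i + 10]):         # 10 tails in a row...
                      following.append(flips[i + 10])  # ...record the next flip

              print(f"{len(following)} ten-tail runs; next flip was heads "
                    f"{sum(following) / len(following):.3f} of the time")  # ≈ 0.5
              ```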

              • Grail@multiverse.soulism.net · 21 hours ago

                Yeah, LLMs are prone to similar biases because they are also bad at conceptualising probability. They’re not calculators, they’re ANNs. They basically operate on dream-logic. Whatever makes sense to you in a dream is likely to make sense to an LLM. That’s because dreams are a time when you have intelligence without consciousness, like an LLM. You’re super suggestible and you just go with whatever feels right based on your gut instincts. An LLM is a simulation of a person’s gut instincts.

                • daannii@lemmy.worldOP · 16 hours ago

                  Yeah, the prefrontal cortex is offline when you dream, so you don’t question anything.

                  Your example is a pretty good analogy. I’m saving it. 👍

                  • Grail@multiverse.soulism.net · 12 hours ago

                    Ever notice how letters don’t work right in dreams? If you look at writing in a dream, you just know what it means without having to analyse the letters. But if you try to study the letters, they swim around like a Stable Diffusion image. LLMs deal in tokens, not letters. The approximate meaning of each token is learned during the training phase, so the LLM has a gut feeling for how the token should be used. But it doesn’t know how to spell the token, which is why they can’t tell you how many Rs are in the word “strawberry”. Asking an LLM about spelling is like asking a dreamer about spelling. There’s no spelling in dreams, just raw meaning.
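
                    You can see the chunking directly (a minimal sketch using OpenAI’s tiktoken library; which encoding a given model uses is an assumption here, and other tokenizers split differently):

                    ```python
                    import tiktoken  # pip install tiktoken

                    # Split the word into BPE tokens. The model only ever sees
                    # the integer ids, never the letters inside each chunk.
                    enc = tiktoken.get_encoding("cl100k_base")
                    for tok in enc.encode("strawberry"):
                        print(tok, repr(enc.decode([tok])))
                    ```

                    Counting the Rs would mean reasoning across those opaque chunks, which the model never sees spelled out.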

            • [deleted]@piefed.world · 1 day ago

              If that were true, then consent would be meaningless, because people would be just predictive models with no agency to give consent.

              Maybe your comparison is terrible?

              • Grail@multiverse.soulism.net · 1 day ago

                > consent is meaningless because people are just predictive models

                This is true if one maintains the assumption that predictive models (such as people) can’t experience qualia such as pain. My intent was to disabuse you and daannii of this silly notion. Obviously mathematical models can experience pain, because you’re a mathematical model and you can experience pain.

                • [deleted]@piefed.world · 1 day ago

                  Predictive models and other computing processes don’t have feelings or sensations, because they don’t have nerves or any other senses. They are a complex process that produces output from input, like a Magic 8 Ball, with no agency or thought process.

                  Reducing biology to just cause and effect is like saying rivers and oceans are the same thing because they both involve moving water, ignoring literally everything else that makes them different.

                  • Grail@multiverse.soulism.net · 1 day ago

                    Predictive models are perfectly capable of having nerves and senses. You, for instance. You’re a predictive model and you have nerves and senses.

                    Also, what’s this “nerves or any other senses”? What kind of sense doesn’t come through a nerve? I’m starting to think you don’t know as much as I do about neuroscience.

        • LurkingLuddite@piefed.social · 4 hours ago

          That wouldn’t work even if “AI” were intelligent or sentient…

          Animals are, and we still don’t give them rights, even though they make humans more money than AI slop does.