• daannii@lemmy.worldOP · 22 hours ago

    Nope. We aren’t. In fact, humans don’t work like that at all.

    It’s actually amazing we ever developed formal probability math, since it goes against our nature.

    Here is an example.

    I flip a coin 10x. It lands on tails all 10x.

    I will believe that the next flip almost certainly has to be heads.

    Just has to be. Right?

    Wrong.

    It has a 50/50 chance. Just like all the other flips.

    The previous flips have no impact on future flips. The coin does not remember.

    Yet humans will believe the likelihood of heads has increased with every tails flip.

    When it has not.
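
    That pull is the gambler’s fallacy. If you want to see it in numbers, here’s a rough Python sketch (the flip count and seed are just picked for the demo): simulate a big pile of fair coin flips, find every spot that comes right after a run of 10 straight tails, and check how often that next flip is heads. It hovers around 50%, streak or no streak.

    ```python
    # Simulate fair coin flips and check what follows a 10-tails streak.
    import random

    random.seed(0)  # fixed seed so the demo is repeatable

    flips = [random.choice("HT") for _ in range(1_000_000)]  # fair coin

    # Collect the flip that comes immediately after every run of 10 straight tails.
    after_ten_tails = [
        flips[i]
        for i in range(10, len(flips))
        if all(f == "T" for f in flips[i - 10:i])
    ]

    heads = sum(1 for f in after_ten_tails if f == "H")
    print("flips that followed 10 tails in a row:", len(after_ten_tails))
    print("fraction that were heads:", round(heads / len(after_ten_tails), 3))  # ~0.5
    ```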

    • Grail@multiverse.soulism.net · 17 hours ago

      Yeah, LLMs are prone to similar biases because they are also bad at conceptualising probability. They’re not calculators, they’re ANNs. They basically operate on dream-logic. Whatever makes sense to you in a dream is likely to make sense to an LLM. That’s because dreams are a time when you have intelligence without consciousness, like an LLM. You’re super suggestible and you just go with whatever feels right based on your gut instincts. An LLM is a simulation of a person’s gut instincts.

      • daannii@lemmy.worldOP · 12 hours ago

        Yeah, the prefrontal cortex is offline when you dream, so you don’t question anything.

        Your example is a pretty good analogy. I’m saving it. 👍

        • Grail@multiverse.soulism.net · 8 hours ago

          Ever notice how letters don’t work right in dreams? If you look at writing in a dream, you just know what it means without having to analyse the letters. But if you try to study the letters, they swim around like a Stable Diffusion image. LLMs deal in tokens, not letters. The approximate meaning of each token is learned during the training phase, so the LLM has a gut feeling of how the token should be used. But it doesn’t know how to spell the token, which is why it can’t tell you how many Rs are in the word Strawberry. Asking an LLM about spelling is like asking a dreamer about spelling. There’s no spelling in dreams, just raw meaning.
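
          To make the token thing concrete, here’s a minimal sketch (assuming the tiktoken package, which ships the byte-pair encodings used by some OpenAI models): it prints the opaque token IDs a model actually sees for “Strawberry”, next to the letters we see.

          ```python
          # Minimal sketch, assuming the tiktoken package is installed.
          # A model is fed token IDs, not individual letters.
          import tiktoken

          enc = tiktoken.get_encoding("cl100k_base")

          word = "Strawberry"
          token_ids = enc.encode(word)

          print("token ids the model sees:", token_ids)
          print("text chunk behind each id:", [enc.decode([t]) for t in token_ids])

          # Counting letters is trivial in Python, but the model never gets the
          # letters, only the ids above.
          print("actual number of r's:", word.lower().count("r"))
          ```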