• kibiz0r@midwest.social · 9 months ago

    In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology more expensive, in order to make it more accurate.

    But that’s not the AI business model. AI pitchmen are explicit on this score: The purpose of AI, the source of its value, is its capacity to increase productivity, which is to say, it should allow workers to do more, which will allow their bosses to fire some of them, or get each one to do more work in the same time, or both. The entire investor case for AI is “companies will buy our products so they can do more with less.” It’s not “business custom­ers will buy our products so their products will cost more to make, but will be of higher quality.”

    Cory Doctorow: What Kind of Bubble is AI?

    • dance_ninja@lemmy.world · 9 months ago

      AI tools like this should really be viewed like calculators: helpful for speeding up analysis, but you still need an expert to sign off.

    • metaStatic@kbin.earth · 9 months ago

      Plenty of people can’t reason either. The current state of AI is closer to us than we’d like to admit.

      • Syrc@lemmy.world · 9 months ago

        That’s just false. All people are capable of reasoning; plenty of them just reach terribly wrong conclusions, often because they’re not “good” at it. But they’re still able to do it, unlike AI (at least for now).

    • NaibofTabr@infosec.pub · 9 months ago

      I mean… duh? The purpose of an LLM is to map words to meanings… to derive what a human intends from what they say. That’s it. That’s all.

      It’s not a logic tool or a fact regurgitator. It’s a context interpretation engine.

      The real flaw is that because it can sometimes (more often than past attempts) understand what you mean, people assume it must also be capable of reasoning.

      • vithigar@lemmy.ca · 9 months ago

        Not even that. LLMs have no concept of meaning or understanding. What they do in essence is space filling based on previously trained patterns.

        It’s like showing someone a bunch of shapes, then drawing a few lines and asking them to complete the shape. All the shapes are lamp posts, but you haven’t told them that, and they have no idea what a lamp post is. They’ll just produce results resembling the shapes you’ve shown them, which generally end up looking like lamp posts.

        Except the “shape” in this case is a sentence or poem or self-insert erotic fan fiction, none of which an LLM “understands”; it just matches the shape of what’s been written so far against previous patterns and extrapolates.

        • NaibofTabr@infosec.pub · 9 months ago

          Well yes… I think that’s essentially what I’m saying.

          It’s debatable whether our own brains really operate any differently. For instance, if I say the word “lamppost”, your brain determines the meaning of that word based on the context of my other words around “lamppost” and also all of your past experiences that are connected with that word - because we use past context to interpret present experience.

          In an abstract, nontechnical way, training a machine learning model on a corpus of data is sort of like trying to give it enough context to interpret new inputs in an expected/useful way. In the case of LLMs, it’s an attempt to link the use of words and phrases with contextual meanings so that a computer system can interact with natural human language (rather than specifically prepared and formatted language like programming).

          It’s all just statistics though. The interpretation is based on ingestion of lots of contextual uses. It can’t really understand… it has nothing to understand with. All it can do is associate newly input words with generalized contextual meanings based on probabilities.
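The “just statistics” point can be sketched with a toy bigram model — a deliberately tiny stand-in for an LLM, with an invented three-sentence corpus and a hypothetical `complete` helper. It counts which word follows which, then “completes the shape” of a prompt by always picking the most frequent continuation, with no notion of what any word means:

```python
from collections import Counter, defaultdict

# Tiny invented corpus: the model will only ever "know" these patterns.
corpus = (
    "the lamp post stood on the corner . "
    "the lamp post flickered at night . "
    "the cat sat on the mat ."
).split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(word, length=4):
    """Extend a prompt by repeatedly choosing the most frequent next word."""
    out = [word]
    for _ in range(length):
        candidates = following[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

# "the" is most often followed by "lamp" (2 occurrences), then "post".
print(complete("the", 2))  # → "the lamp post"
```

A real LLM replaces the word-pair counts with learned probabilities over long contexts, but the core move — associate the input with the statistically likely continuation and extrapolate — is the same.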

          • MutilationWave@lemmy.world · 9 months ago

            I wish you’d talked more about how we humans work. We are at the mercy of pattern recognition. Even when we try not to be.

            When “you” decide to pick up an apple it’s about to be in your hand by the time your software has caught up with the hardware. Then your brain tells “you” a story about why you picked up the apple.

  • Allonzee@lemmy.world · 9 months ago (edited)

    They just want to make an economy they don’t have to pay anyone to profit from. That’s why slavery became Jim Crow, which became migrant labor, and why modernity brought work-visa servitude to exploit high-skilled laborers.

    The owners will make sure they always have concierge service with human beings as part of upgraded service, like they do now with concierge medicine. They don’t personally suffer through approvals for care. They profit from denying their livestock’s care.

    Meanwhile we, their capital-battery livestock property, will be yelling at robots about refilling our prescriptions as they hallucinate and start singing happy birthday to us.

    We could fight back, but that would require fighting the right war against the right people and not letting them distract us with subordinate culture battles against one another. Those are booby traps laid between us and them by them.

    Only one man, a traitor to his own class no less, has dealt them so much as a glancing blow, while we battle one another about one of the dozens of social wedges the owners stoke through their for profit megaphones. “Women hate men! Christians hate atheists! Poor hate more poor! Terfs hate trans! Color hate color! 2nd Gen immigrants hate 1st Gen immigrants!” On and on and on and on as we ALL suffer less housing, less food, less basic needs being met. Stop it. Common enemy. Meaningful Shareholders.

    And if you think your little 401k makes you a meaningful shareholder, please just go sit down and have a juice box, the situation is beyond you and you either can’t or refuse to understand it.