• nimpnin@sopuli.xyz · 6 months ago

    The use of LLM had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of 4 months, the LLM group’s participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring.

    https://arxiv.org/pdf/2506.08872

    • kate@lemmy.uhhoh.com · 6 months ago

      Participants were restricted to using ChatGPT? So I am smart because I use Claude and there’s no science to tell me I’m wrong 😎👍

    • wildncrazyguy138@fedia.io · 6 months ago

      I equate it with doing those old formulas by hand in math class. If you don’t know what the formula does or how to use it, how do you expect to recall the right tool for the job?

      Or in D&D speak, it’s like trying to shoehorn intelligence into a wisdom roll.

      • misk@sopuli.xyz · 6 months ago

        That would be fine if an LLM were a precise tool like a calculator. My calculator doesn’t pretend to know answers to questions it doesn’t understand.

        • outhouseperilous@lemmy.dbzer0.com · 6 months ago (edited)

          Mine just lies to me.

          And tells me to kill.

          Leaks weird fluids. Looks and feels like blood, but smells like lavender honey, possessed of a taste like unexpectedly cutting yourself on broken glass as you escape parental discipline to meet a lover.

          Hasn’t screamed in a while, though. So that’s nice. I guess if I keep it satisfied, I have to explain a lot less to my neighbors.

        • Swedneck@discuss.tchncs.de · 6 months ago

          The irony is that LLMs are basically just calculators, horrendously complex calculators that operate purely on statistics…
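A toy illustration of the “purely on statistics” point (a deliberately simplified sketch — a bigram counter, nothing like a real transformer, with a made-up corpus):

```python
from collections import Counter, defaultdict

# Toy "language model": predicts the next word purely from co-occurrence
# counts in a tiny corpus. Real LLMs use learned neural networks, but the
# output is likewise a probability distribution over next tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat"/"fish" once each
```

Note the sketch also shows the failure mode under discussion: asked about a word it has never seen, it has no notion of “I don’t know” beyond whatever the caller bolts on.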

  • YappyMonotheist@lemmy.world · 6 months ago

    I never use these LLMs, since I have a brain and I’m not artistically inclined enough to use them for audiovisual creation, but today I thought ‘why not?’ and gave it a try. I asked ChatGPT for 80-word biographies of the main characters of LOGH. Besides being vague, it made pretty big mistakes in pretty much every summary and went fully off the rails after the 4th character… It’s not even debatable information (novels plus an anime, no conflicting narratives here) and it’s all easily available online. I can’t even imagine relying on it for anything more serious than summing up biographies for anime characters, lol, cause even that it couldn’t do right!

    • Deceptichum@quokk.au · 6 months ago

      I just asked for fifty 20-word descriptions of Simpsons characters, and every one was 100% accurate?

      https://pastebin.com/R3aFXRwT

      What I found most interesting is that Selma and Patty are considered separate entities, but ‘Kang and Kodos’ and ‘Sherri and Terri’ get lumped under one description each - which is fair, since neither stands out from the other the way Patty and Selma do. Also, I never knew the bully’s name was Dolph Starbeam; I thought for sure it had fucked up there, but no.

  • A_Union_of_Kobolds@lemmy.world · 6 months ago

    “IQ benefits”? Lmao, what fuckin’ nonsense. This shit ain’t making anyone smarter; if anything, it’s robbing you of your ability to think critically.

    It’s garbage software with zero practical use. Whatever you’re using AI for, just learn it yourself. You’ll be better off.

    “And then I drink coffee for 58 minutes” instead of reading a book, like that’s a brag - just read a fuckin book, goddamn.

    • Blue_Morpho@lemmy.world · 6 months ago

      “It’s garbage software with zero practical use.”

      AI is responsible for a lot of slop, but it’s wrong to say it has no use. I helped my wife with a VBScript macro for Excel. There was no way I was going to learn VBScript, but ChatGPT spat out a somewhat-working script in minutes that needed 15 minutes of tweaking. The alternative would have been weeks of work learning a proprietary Microsoft language. That’s a waste of time.

  • naevaTheRat@lemmy.dbzer0.com · 6 months ago

    Every single time I have tried to extract information from them in a field I know stuff about it has been wrong.

    When the Australian government trialled them for making summaries, in every single case the result was worse than the human summary, and in many it was actively destructive.

    Play around with your own local models if you like, but whatever you do DO NOT TRY TO LEARN FROM THEM they have no consideration towards truth. You will actively damage your understanding of the world and ability to reason.

    Sorry, no shortcuts to wisdom.

    • rekabis@lemmy.ca · 6 months ago

      The amount of gratuitous hallucinations that AI produces is nuts. It takes me more time to refactor the stuff it produces than to just build it correctly in the first place.

      At the same time, I have reason to believe that AI’s hallucinations arise out of how it’s been shackled (AI medical imaging diagnostics produce almost no hallucinations, because the AI is not forced to produce an answer), but still. It’s simply not reliable, and the Ouroboros Effect is starting to accelerate…

      • naevaTheRat@lemmy.dbzer0.com · 6 months ago

        It’s not “shackled”; they are completely different technologies.

        Imaging-diagnosis assistance is something like computer vision -> feature extraction -> some sort of classifier.
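A toy sketch of that pipeline (the “image”, features, and threshold are all made up for illustration; real systems use learned features):

```python
# Image -> feature extraction -> classifier, the classic discriminative
# pipeline. The "image" is a grid of pixel intensities in [0, 1].

def extract_features(image):
    """Feature extraction: reduce the image to a few summary numbers."""
    pixels = [p for row in image for p in row]
    return {
        "mean_intensity": sum(pixels) / len(pixels),
        "peak_intensity": max(pixels),
    }

def classify(features, threshold=0.5):
    """Classifier: flag the image when a feature crosses a threshold.

    It can only emit one of its fixed labels; unlike a generative model,
    there is no mechanism for inventing free-form answers.
    """
    return "flagged" if features["peak_intensity"] > threshold else "clear"

scan = [[0.1, 0.2], [0.1, 0.9]]  # one suspiciously bright pixel
print(classify(extract_features(scan)))  # -> flagged
```

The point of the sketch: a discriminative classifier picks among a closed set of labels, which is why “hallucination” in the LLM sense doesn’t really apply — its failure modes are misclassification and spurious features, as the next paragraph notes.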

        Don’t be tricked by the magical marketing term “AI”. That’s like assuming a tic-tac-toe algorithm is the same thing as a spam filter because they’re both “AI”.

        Also, medical imaging tools make heaps of errors, or extract insane features like the style of machine used to capture the image. They’re getting better, but image analysis is a relatively tractable problem.

      • WeirdGoesPro@lemmy.dbzer0.com · 6 months ago

        Even drugs aren’t actually a shortcut, they can just put you in a position to receive the information better. Some people trip and do a lot of introspective work, and others just zonk out and let themselves get distracted.

        In my opinion, it’s how you use it, not what you use, that matters.

  • Valmond@lemmy.world · 6 months ago

    Ah, “the left”

    I’m so tired of this stupid USA polarisation: leftist hash smoker or conservative boot licker, and nothing in between.

  • magnetosphere@fedia.io · 6 months ago

    What I’m getting from this exchange is that people on the left have ethical concerns about plagiarism, and don’t trust half-baked technology. They also value quality over quantity.

    I’m okay with being pigeonholed in this way. Drink all the coffee you want, dude.

    • rekabis@lemmy.ca · 6 months ago

      “people on the left have ethical concerns about plagiarism, and don’t trust half-baked technology. They also value quality over quantity.”

      This is an answer that resonates with me because it feels so correct.