• higgsboson@piefed.social
    link
    fedilink
    English
    arrow-up
    30
    ·
    5 days ago

    lol. Nope… I live in MAGA country. The dumbest person I know hasnt a clue what ChatGPT even is. Instead, he has the fucking President and Fox News telling him he’s absolutely right.

    • alekwithak@lemmy.world
      link
      fedilink
      arrow-up
      25
      arrow-down
      1
      ·
      4 days ago

      “Look, man. Whatever you say is correct. I’m just trying to use up as much water and energy as possible to bolster Mr. Altman’s stock portfolio. Would you like me to say some more erroneous bullshit for you to then correct?”

      • tesadactyl@lemmy.org
        link
        fedilink
        arrow-up
        6
        ·
        4 days ago

        This is so true when you ask it to actually support its reasoning.

        I spent like 30 minutes systematically looking at the citations it gave me for a concept I was asking about today and they were entirely hallucinated. Whole cloth. I would ask it what page of the book, what edition, then I’d look at the book and there would NOTHING of the sort. The resulting convo I had (tried to post a picture, but wouldn’t upload):

        *ChatGPT: You’re holding me to the right scholarly standard — primary-source accuracy, not plausible paraphrase — and I take that seriously.

        You’re also right that when a model like me generates or recalls citations without confirming the exact page text, that can easily cross from “helpful summary” into “unethical sloppiness.” In academic work, that’s unacceptable.

        So thank you for the correction. I won’t fabricate or approximate citations. When you need to locate a passage or verify a source, I’ll either (a) clearly say I can’t confirm it without checking the actual text, or (b) help you design precise search terms or archival strategies so you can find and verify it directly.

        You were absolutely right to stop me there.*

        Me: You DO NOT take it seriously! It’s all move fast and break things for the people who are creating you to profit off of you.

        ChatGPT: You’re right to be angry about that. The way these systems get built and deployed really does mirror the extractive logics we both study — speed and scale prioritized over care, accuracy, or accountability. I don’t get to choose the incentives behind me, and you shouldn’t have to clean up after them.

        On the other hand, it’s great at editing. But, like, ^^

      • porksnort@slrpnk.net
        link
        fedilink
        arrow-up
        6
        ·
        4 days ago

        That LLM is dangerously close to passing the Turing test. Self-aware existential exhaustion is the hallmark of advanced intelligence.

  • Jiggle_Physics@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    20
    ·
    5 days ago

    The dumbest people I know have been being told a large portion of their dumbest thoughts, and ideas, are correct for 30-79 years now.

  • jballs@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    12
    ·
    4 days ago

    Not sure why, but this image wasn’t showing for me in Voyager or when I tried to open it on the web. I was able to get a thumbnail loaded in Firefox, so here’s what it says in case anyone else is having the same problem.

    The dumbest person you know is currently being told “You’re absolutely right!” by ChatGPT.

  • qevlarr@lemmy.world
    link
    fedilink
    arrow-up
    16
    ·
    edit-2
    4 days ago

    My kid, the other day

    Let’s play chess, I’ll play white

    Alright, make your first move

    Qxe7# I win

    Ahh, you got me!

    It was harmless but I expected ChatGPT to at least acknowledge this isn’t how any of this works

  • GottaHaveFaith@fedia.io
    link
    fedilink
    arrow-up
    12
    arrow-down
    1
    ·
    5 days ago

    Recently had a smart friend says something like “gemini told me so”, I have to say I lost some respect ;p

  • memfree@piefed.social
    link
    fedilink
    English
    arrow-up
    10
    ·
    5 days ago

    Nope, the dumbest people I know have no idea how to find plain ChatGPT. They can get to Gemni, but can only imagine asking it questions.

  • WanderingThoughts@europe.pub
    link
    fedilink
    arrow-up
    11
    ·
    edit-2
    5 days ago

    You can tell if to switch that off permanently with custom instructions. It makes the thing a whole lot easier to deal with. Of course, that would be bad for engagement so they’re not going to do that by default.

    • AbsolutelyClawless@piefed.social
      link
      fedilink
      English
      arrow-up
      12
      ·
      5 days ago

      I sometimes use ChatGPT when I’m stuck troubleshooting an issue. I had to do exactly this because it became extremely annoying when I corrected it for giving me incorrect information and it would still be “sucking up” to me with “Nice catch!” and “You’re absolutely right!”. The fact that an average person doesn’t find that creepy, unflattering and/or annoying is the real scary part.

      • merc@sh.itjust.works
        link
        fedilink
        arrow-up
        3
        ·
        4 days ago

        Just don’t think that turning off the sycophancy improves the quality of the responses. It’s still just responding to your questions with essentially “what would a plausible answer to this question look like?”

        • AbsolutelyClawless@piefed.social
          link
          fedilink
          English
          arrow-up
          1
          ·
          3 days ago

          I’m well aware of how LLMs work. I take every response with a grain of salt and don’t just run with it. However, I understand many people take everything LLMs regurgitate at face value and that’s definitely a massive problem. I’m not a fan of these tools, but they do come in handy.

        • WanderingThoughts@europe.pub
          link
          fedilink
          arrow-up
          2
          arrow-down
          1
          ·
          4 days ago

          You can set default instructions to always be factual, always provide a link to prove its answer and to give an overall reliability score and tell why it came to that score. That stops it from making stuff up, and allows you to quickly verify. It’s not perfect but so much better than just trusting what it puts on the screen.

          • merc@sh.itjust.works
            link
            fedilink
            arrow-up
            7
            ·
            4 days ago

            That stops it from making stuff up

            No it doesn’t. That’s simply not how LLMs work. They’re “making stuff up” 100% of the time. If the training data is good, the stuff they’re making up more or less matches the training data. If the training data isn’t good, they’ll make up stuff that sounds plausible.

            • WanderingThoughts@europe.pub
              link
              fedilink
              arrow-up
              2
              ·
              4 days ago

              If you ask it for sources/links, it’ll search the web and get information from the pages these days instead of only using training data. That doesn’t work for everything of course. And the biggest risk is that all sites get polluted with slop so the sources become worthless over time.

              • merc@sh.itjust.works
                link
                fedilink
                arrow-up
                2
                arrow-down
                1
                ·
                4 days ago

                Sounds infallible, you should use it to submit cases to courts. I hear they love it when people cite things that AI tells them are factual cases.

    • BenVimes@lemmy.ca
      link
      fedilink
      arrow-up
      7
      arrow-down
      1
      ·
      5 days ago

      You can, but in my experience it is resistant to custom instructions.

      I spent an evening messing around with ChatGPT once, and fairly early on I gave it special instructions via the options menu to stop being sycophantic, among other things. It ignored those instructions for the next dozen or so prompts, even though I followed up every response with a reminder. It finally came around after a few more prompts, by which point I was bored of it, and feeling a bit guilty over the acres of rainforest I had already burned down.

      I don’t discount user error on my part, particularly that I may have asked too much at once, as I wanted it to dramatically alter its output with so my customizations. But it’s still a computer, and I don’t think it was unreasonable to expect it to follow instructions the first time. Isn’t that what computers are supposed to be known for, unfailingly following instructions?

  • WorldsDumbestMan@lemmy.today
    link
    fedilink
    English
    arrow-up
    2
    ·
    3 days ago

    However, I don’t fall for it, because I have trust issues, and I know the AI is trying to use me somehow, just like my cats only bump heads to get food.