• realitista@lemmus.org · 1 day ago

    LLMs have provided me pretty good info where a Google search didn’t, but there’s always that concern that the info isn’t right.

    • chicken@lemmy.dbzer0.com · 23 hours ago

      It’s great if it’s info you can immediately verify, though, like whether it made up a function name or command line argument, or questions like “where are the files for _____ stored on my os”
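The “did it make up a function name” check is easy to automate. A minimal Python sketch (the `function_exists` helper is my own illustration, not something from the thread):

```python
import importlib


def function_exists(module_name: str, func_name: str) -> bool:
    """Return True only if the named module imports and exposes a callable
    with that name -- a quick sanity check on an LLM-suggested function."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return callable(getattr(module, func_name, None))


# A real function vs. a plausible-sounding name a model might invent:
print(function_exists("os.path", "join"))     # True
print(function_exists("os.path", "combine"))  # False
```

For a suggested command line argument, the analogous check is just running the tool with `--help` and grepping for the flag before trusting it.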

    • trashcroissant@lemmy.blahaj.zone · 1 day ago

      For some of them you can ask that they provide a confidence rating, and a lot of the time you can ask for a source too, and I always recommend verifying the important stuff. If you’re just troubleshooting dumb/basic stuff it’s better than reading through an enshittified SEO website, and pretty low risk.
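That “ask for a confidence rating and a source” habit can be baked into a reusable prompt wrapper. A tiny sketch (the wording and the helper name are my own assumptions, not anything from the comment):

```python
def with_confidence_request(question: str) -> str:
    """Wrap a question so the model is also asked for a confidence
    rating and a source, per the tip above."""
    return (
        question.strip()
        + "\n\nRate your confidence in your answer from 1 to 10, "
        "cite a source where you can, and say so explicitly if you are unsure."
    )


prompt = with_confidence_request("Where does systemd store unit files?")
print(prompt)
```

The rating is self-reported and not calibrated, so it’s a nudge toward verification, not a substitute for it.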

      I’m not an advocate for them for many reasons but at my work they’re actually doing a decent job of teaching us how to use them helpfully (and not in a way that replaces what our job is).

      • realitista@lemmus.org · 1 day ago

        Yeah, I find them quite useful for “explain to me how this works” kind of stuff, whereas for “how do I do this” kind of stuff I try to find a primary source to verify against, just to be sure.

        • Elvith Ma'for@feddit.org · 1 day ago

          I usually try to iterate - read the available documentation (e.g. comments in a config file, product documentation,…) and try to figure things out. If I get stuck, an LLM answer may be confidently wrong, but it may give me some new pointers on which direction to go next. Or it may mention some buzzwords/techniques/concepts that I need to investigate further.

          As its underlying concept is pattern recognition, it might not be completely correct, but more often than not it nudges me in the right direction. Bonus: now I’ve probably learned some things that will help me later on.

          So far I’ve never had an LLM give me a correct solution for anything even a little more complex. But as I like to tinker, explore, and learn for myself, I’d probably hate getting a complete working solution without doing any of the work myself.