I’ve just re-discovered ollama, and it’s come a long way: it has reduced the very difficult task of locally hosting your own LLM (and getting it running on a GPU) to simply installing a deb! It also works on Windows and Mac, so it can help everyone.
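
Once it’s installed, the server exposes a plain HTTP API on port 11434, so getting a response takes only a few lines in any language. A minimal sketch in Python (assuming the default port and a model you’ve already pulled; the model name here is just an example):

```python
import requests  # pip install requests

# Assumes `ollama serve` is running locally on the default port and
# that you've already pulled a model, e.g. `ollama pull llama3`.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Why is the sky blue?", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```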

I’d like to see Lemmy become useful for specific technical niches, instead of people having to hunt for the best existing community, which can be subjective and makes information difficult to find. So I created !Ollama@lemmy.world for everyone to discuss, ask questions, and help each other out with ollama!

So please join, subscribe, and feel free to post, ask questions, share tips and projects, and help out where you can!

Thanks!

  • brucethemoose@lemmy.world · 18 hours ago

    I don’t understand.

    Ollama is not actually Docker, right? It’s running the same llama.cpp engine; it’s just embedded inside the wrapper app, not containerized. It does have a Docker preset you can use, yeah.

    And basically every LLM project ships a Docker container. I know for a fact llama.cpp, TabbyAPI, Aphrodite, Lemonade, vLLM, and SGLang do. It’s basically standard. There are all sorts of wrappers around them too.

    You are 100% right about security, though; in fact, there’s a huge concern with compromised Python packages. This one almost got me: https://pytorch.org/blog/compromised-nightly-dependency/
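
    A cheap partial mitigation is hash-pinning what you install (pip’s --require-hashes mode does this for you). Here’s the same idea by hand, as a rough sketch where the expected digest would come from the project’s published checksums:

    ```python
    import hashlib
    import sys

    def sha256_of(path: str) -> str:
        # Stream the file so multi-GB wheels don't need to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        # Usage: python check_wheel.py some_package.whl <expected-sha256>
        path, expected = sys.argv[1], sys.argv[2]
        actual = sha256_of(path)
        if actual != expected:
            sys.exit(f"MISMATCH: got {actual}, expected {expected} -- don't install it")
        print("checksum OK")
    ```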

    This is actually a huge advantage for llama.cpp, as it’s free of Python and external dependencies by design. This is very unlike ComfyUI, which pulls in a gazillion external repos. Theoretically the main llama.cpp repo could be compromised, but it’s a single, very well monitored point of failure, and literally every “outside” architecture and feature is implemented from scratch, making it harder to sneak stuff in.

    • tal@lemmy.today · 18 hours ago

      I’m sorry, you are correct. The syntax and interface mirror Docker’s, and one can run ollama in Docker, so I’d thought that it was a thin wrapper around Docker, but I just went to check, and you are right: it’s not running in Docker by default. Sorry, folks! Guess now I’ve got one more thing to look into running inside a container myself.
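
      If anyone else wants to do the same, here’s a rough sketch using the Docker SDK for Python and the ollama/ollama image from Docker Hub (the volume name is just an example, and the GPU passthrough assumes an NVIDIA card with nvidia-container-toolkit set up):

      ```python
      import docker  # pip install docker

      client = docker.from_env()

      # Run the ollama/ollama image, persisting pulled models in a named
      # volume and exposing the default API port on the host.
      container = client.containers.run(
          "ollama/ollama",
          name="ollama",
          detach=True,
          ports={"11434/tcp": 11434},
          volumes={"ollama-models": {"bind": "/root/.ollama", "mode": "rw"}},
          # Optional: pass all GPUs through (NVIDIA + nvidia-container-toolkit).
          device_requests=[
              docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
          ],
      )
      print(container.name, container.status)
      ```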

      • Hasnep@lemmy.ml · 10 hours ago

        Try ramalama, it’s designed to run models inside OCI containers.