Your ML model cache volume is getting blown away on restart, so the model has to be re-downloaded during the first search after the restart. Either bind-mount the cache to a path on your storage, or make sure you're not deleting the dynamic volume when you take the stack down.
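A common way the dynamic volume gets blown away, assuming a standard docker compose deployment: named volumes survive a normal restart, but the -v flag on docker compose down deletes them.

  # Named volumes persist across a normal restart:
  docker compose down && docker compose up -d

  # ...but -v removes them, wiping the model cache:
  docker compose down -v

  # Confirm the cache volume survived a restart:
  docker volume ls | grep model-cache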

In my case I changed this:

  immich-machine-learning:
    ...
    volumes:
      - model-cache:/cache

To this:

  immich-machine-learning:
    ...
    volumes:
      - ./cache:/cache
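One follow-up if you do this: Immich's default docker-compose.yml also declares the named volume at the top level, and that declaration becomes dead weight once no service references it:

  # No longer needed after switching to the bind mount:
  volumes:
    model-cache:

Docker also leaves the old volume behind on disk; docker volume rm reclaims the space (the actual volume name is typically prefixed with the compose project name, e.g. immich_model-cache).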

I no longer have to wait uncomfortably long when I’m trying to show off Smart Search to a friend, or just need a meme pronto.

That’ll be all.

  • Avid Amoeba@lemmy.caOP · 14 points · 23 hours ago (edited)

    Oh, and if you haven’t changed from the default ML model, please do. The results are phenomenal. The default is fine but really only needed on very low-power hardware. If you have a notebook/desktop-class CPU and/or a GPU with 6 GB+ of VRAM, you should try a larger model. I use the best model they have and it consumes around 4 GB of VRAM.

    • apprehensively_human@lemmy.ca · 8 points · 23 hours ago

      Which model would you recommend? I just switched from ViT-B/32 to ViT-SO400M-16-SigLIP2-384__webli since it seemed to be the most popular.

      • Avid Amoeba@lemmy.caOP · 9 points · 23 hours ago

        I switched to the same model. It’s absolutely spectacular. The only extra things I did were to increase the concurrent job count for Smart Search and to give the model access to my GPU, which sped up the initial scan by at least an order of magnitude.
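        For reference, here’s a sketch of the GPU part, assuming an NVIDIA card and Immich’s documented hardware-acceleration setup (the hwaccel.ml.yml file ships alongside docker-compose.yml and also defines services for other accelerators such as openvino and armnn):

          immich-machine-learning:
            image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-cuda
            extends:
              file: hwaccel.ml.yml
              service: cuda

        The Smart Search job concurrency lives in the web UI under Administration → Settings → Job Settings.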

        • apprehensively_human@lemmy.ca · 2 points · 2 hours ago

          Seems to work really well. I can do obscure searches like Outer Wilds and it pulls up pictures I took on my phone of random gameplay moments, so it’s not doing any filename or metadata cheating there.

    • Showroom7561@lemmy.ca · 3 points (1 downvote) · 23 hours ago

      Is this something that would be recommended if self-hosting off a Synology 920+ NAS?

      My NAS has extra RAM to spare because I upgraded it, and it has an NVMe cache 🤗

      • Avid Amoeba@lemmy.caOP · 2 points · 20 hours ago (edited)

        That’s a Celeron, right? I’d try a better AI model. Check this page for the list. You could try the heaviest one: it’ll take a long time to process your library, but inference is faster afterwards. I don’t know how much faster; it might be fast enough to be usable. If not, choose a lighter model. The execution times in the table give a sense of how heavy each model is. Once you change models, you have to let it rescan the library.

        • Showroom7561@lemmy.ca · 3 points · 20 hours ago

          That’s a Celeron right?

          Yup, the Intel Celeron J4125, 4 cores, 2.0-2.7 GHz.

          I switched to the ViT-SO400M-16-SigLIP2-384__webli model, same as what you use. I don’t worry about processing time, but it looks like a more capable model, and I really only use immich for contextual search anyway, so that might be a nice upgrade.

          • iturnedintoanewt@lemmy.world · 1 point · 17 hours ago (edited)

            What’s your consideration for choosing this one? I would have thought ViT-B-16-SigLIP2__webli would be slightly more accurate, with faster responses, while using a bit less RAM (1.4 GB less, I think).

            • Showroom7561@lemmy.ca · 2 points · 15 hours ago

              Seemed to be the most popular. LOL The smart search job hasn’t been running for long, so I’ll check that other one out and see how it compares. If it looks better, I can easily use that.

              • Avid Amoeba@lemmy.caOP · 1 point · 13 hours ago

                Let me know how inference goes. I might recommend that to a friend with a similar CPU.

                • Showroom7561@lemmy.ca · 2 points · 4 hours ago

                  I decided on the ViT-B-16-SigLIP2__webli model, so switched to that last night. I also needed to update my server to the latest version of Immich, so a new smart search job was run late last night.

                  Out of 140,000+ photos/videos, the queue is down to 104,000, and I have it set to 6 concurrent tasks.

                  I don’t mind it processing for 24h. I believe when I first set immich up, the smart search took many days. I’m still able to use the app and website to navigate and search without any delays.

                  • Avid Amoeba@lemmy.caOP · 1 point · 2 hours ago

                    Let me know how the search performs once it’s done. Speed of search, subjective quality, etc.