Your ML model cache volume is getting blown away during restart, so the model has to be re-downloaded during the first search after the restart. Either point the cache at a path on your own storage, or make sure you’re not wiping the dynamic volume on restart.
In my case I changed this:

  immich-machine-learning:
    ...
    volumes:
      - model-cache:/cache

to this:

  immich-machine-learning:
    ...
    volumes:
      - ./cache:/cache
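A quick way to confirm the change stuck (the ./cache path is just the bind mount above; adjust if yours differs): recreate the service, run one Smart Search, and the downloaded models should sit on disk and survive restarts.

  docker compose up -d immich-machine-learning
  ls ./cache    # downloaded model files should persist here across restarts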
I no longer have to wait uncomfortably long when I’m trying to show off Smart Search to a friend, or just need a meme pronto.
That’ll be all.
Using a named volume like the default Immich docker-compose does should work fine, even through restarts. I’m not sure why your setup is blowing up the volume.
Normally a volume is only removed if there is no running container associated with it and you manually run
docker volume prune
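For reference, here’s roughly the behaviour I’d expect with the stock compose file and its model-cache named volume (commands run from the compose directory; a sketch, not a transcript):

  docker compose down      # containers go away, the named volume stays
  docker compose up -d     # cache is reused, no model re-download
  docker compose down -v   # -v (or a later volume prune) removes the named volume
  docker compose up -d     # now the first Smart Search has to fetch the model again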
Because on restart I clean up everything that isn’t explicitly on disk:
  [Unit]
  Description=Immich in Docker
  After=docker.service
  Requires=docker.service

  [Service]
  TimeoutStartSec=0
  WorkingDirectory=/opt/immich-docker
  ExecStartPre=-/usr/bin/docker compose kill --remove-orphans
  ExecStartPre=-/usr/bin/docker compose down --remove-orphans
  ExecStartPre=-/usr/bin/docker compose rm -f -s -v
  ExecStartPre=-/usr/bin/docker compose pull
  ExecStart=/usr/bin/docker compose up
  Restart=always
  RestartSec=30

  [Install]
  WantedBy=multi-user.target
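If you want to try a unit like this yourself, the standard systemd steps apply; the file path and unit name below are just an assumption, use whatever you prefer:

  # assuming the unit is saved as /etc/systemd/system/immich-docker.service
  sudo systemctl daemon-reload
  sudo systemctl enable --now immich-docker.service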
That’s wild! What advantage do you get from it, or is it just for fun because you can?
Also, I’ve never seen a service created for each Docker stack like that before…
Wow, you pull new images every time you boot up? Coming from a mindset of rock-solid stability, this scares me. You’re living your life on the edge, my friend. I wish I could do that.
I use a fixed tag. 😂 It’s more of a simple way to update: change the tag in SaltStack, apply the config, the service is restarted, and the new tag is pulled. If the tag doesn’t change, the pull is a no-op.
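For illustration, a minimal Salt state sketch of that flow; the state IDs, file path, and tag value are made up, and it assumes the compose file reads the image tag from an .env variable:

  immich-env:
    file.managed:
      - name: /opt/immich-docker/.env
      # pinned image tag; bump the value to upgrade
      - contents: |
          IMMICH_VERSION=v1.123.0

  immich-service:
    service.running:
      - name: immich-docker
      - enable: True
      - watch:
        - file: immich-env

Restarting the unit runs the ExecStartPre pull, so the new tag gets fetched.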
Ahh, that calms me down. I’d never thought of doing anything like what you’re doing here, but I do like it.
But why?
Why not just down and up normally, and have a cleanup job on a schedule to get rid of any orphans?
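(For what it’s worth, that scheduled cleanup could be as small as a cron entry; the schedule and flags here are only an example, and by default docker system prune leaves volumes alone:)

  # root crontab: prune stopped containers, dangling images, unused networks every Sunday at 04:00
  0 4 * * 0  /usr/bin/docker system prune -f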
In a world where we can’t really be sure what’s in an upgrade, a super-clean start that burns any ephemeral data is about the best way to ensure a consistent starting point.
And consistency gives reliability, or as much of it as we can get without validation (validation is “compare to what’s correct”, while consistency is “try to repeat whatever it was”).