I’m currently running my Immich server on a mini PC with Proxmox.

It’s got 3x N97 CPU cores and 7 GB of RAM available to it. It’s using the default ViT-B-32__openai model. I was wondering if I can use a more powerful model, but I’m not sure which one, or whether I should enable hardware acceleration, etc.

This is my YAML file:

  immich-machine-learning:  
    container_name: immich_machine_learning  
    # For hardware acceleration, add one of -[armnn, cuda, rocm, openvino, rknn] to the image tag.  
    # Example tag: ${IMMICH_VERSION:-release}-cuda  
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}  
    # extends: # uncomment this section for hardware acceleration - see https://immich.app/docs/features/ml-hardware-acceleration  
    #   file: hwaccel.ml.yml  
    #   service: cpu # set to one of [armnn, cuda, rocm, openvino, openvino-wsl, rknn] for accelerated inference - use the `-wsl` version for WSL2 where applicable  
    volumes:  
      - immich-model-cache:/cache  
    env_file:  
      - stack.env  
    restart: always  
    healthcheck:  
      disable: false  

I looked at the docs, but it’s a bit confusing, so that’s why I’m here.

  • just_another_person@lemmy.world · 2 days ago

    According to this paste, you’re not using accelerated inference at all — it’s running on the CPU.

    Append -openvino to the release tag and change the extends service from “cpu” to openvino, and see if that performs any better.
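
    To make that concrete, here is a sketch of what the service could look like with OpenVINO enabled, based on the commented-out hints already in your file. It assumes hwaccel.ml.yml sits next to your compose file (it ships in the Immich repo alongside docker-compose.yml), and that the N97’s integrated GPU is actually available to the guest running Docker — on Proxmox that means passing the iGPU through to the VM, or mapping /dev/dri into an LXC:

      immich-machine-learning:
        container_name: immich_machine_learning
        # -openvino tag runs inference on the Intel iGPU instead of the CPU
        image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-openvino
        extends:
          file: hwaccel.ml.yml
          service: openvino
        volumes:
          - immich-model-cache:/cache
        env_file:
          - stack.env
        restart: always
        healthcheck:
          disable: false

    After recreating the container, check the machine-learning container logs for errors; if OpenVINO can’t find the device, it typically falls back to (or fails instead of) CPU inference.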