I have successfully passed through a GPU to a full VM for gaming, but have since reverted that machine to a standalone installation. So I know passthrough is possible and that I'm capable of implementing it.
That said, I'm trying to plan out some clustering across at least three machines; two have GPUs, and only one of those has any real heft. My understanding is that, with most consumer hardware, there is no option to split GPU load across multiple containers or VMs: once passthrough is set up, the GPU is dedicated to that instance.
I am wondering: is this true even if I orchestrated the spin-up/spin-down of the instances? For example, can LXC1 hold the GPU until I shut it down, and can I then spin up LXC2 or VM3 to take over that same GPU without reconfiguring and restarting the host? IIRC, the passthrough setup process suggested this wasn't possible, but I'll have to experiment to be sure, or rely on Lemmy's expert opinion (-:
My assumption for now is that I just need a single guest per GPU (or buy a much more expensive card). The kind of handoff I'm hoping for is sketched below.
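Concretely, the handoff would look something like this on the Proxmox host (the VM/CT IDs here are made up):

```
# stop the container that currently holds the GPU (101 is a placeholder ID)
pct stop 101

# start the VM that should take the GPU over (300 is a placeholder ID)
qm start 300
```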
You can use vGPU if you have an older NVIDIA card, up to the 2000 (Turing) series: https://gitlab.com/polloloco/vgpu-proxmox
I don’t think any other cards are nearly as well supported for GPU resource splitting
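If the patched driver works on your card, splitting it looks roughly like this; the PCI address, VMID, and profile name below are placeholders, so check the repo's docs for what your card actually exposes:

```
# list the vGPU (mediated device) profiles the patched driver exposes
ls /sys/bus/pci/devices/0000:01:00.0/mdev_supported_types

# hand one profile to a VM via Proxmox's mediated-device support
qm set 100 -hostpci0 01:00.0,mdev=nvidia-63
```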
Second this. It works really well on a stable distro like Proxmox. Unfortunately, the community is only on Discord. With some other patches linked there, you can also use the GPU on the host and split it into vGPUs for virtual machines at the same time. I used it for some time on an Arch Linux host with a Win10 VM for CAD. It worked fine, but frequent Arch updates often borked everything; on Proxmox I never had such problems.
You can probably assign the same GPU to multiple VMs or containers; if it's already in use when one of them starts, that VM or container will simply fail to start.
Some NVIDIA licenses will prohibit assigning it to more than one VM.
I did just find this quote on Reddit:
“A GPU can only be passed through to a single VM at a time, though Proxmox can pass it through to multiple containers (LXC); those containers can only run Linux instances.”
I'll have to look into this more, but it sounds promising:
https://www.reddit.com/r/homelab/comments/18gu42z/comment/kd2vt5j/
It even sounds like this is handled on Proxmox's side, with no need for any IOMMU setup.
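If that's accurate, a quick sanity check would be running nvidia-smi from two containers at once (the container IDs are hypothetical):

```
# both should report the same physical GPU if sharing works
pct exec 101 -- nvidia-smi
pct exec 102 -- nvidia-smi
```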
The biggest challenge I ran into is keeping the drivers in sync between the host and the LXC. Since one is Debian and the other is Ubuntu, the LXC tends to want to update sooner, and sometimes a version mismatch breaks the communication between the container's driver userspace and the host's kernel module.
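One workaround, assuming you installed the driver from Ubuntu's repos inside the container, is to hold the driver packages so routine upgrades can't drift away from the host's kernel module (the package name below is an example; match your host's version):

```
# inside the LXC: freeze the NVIDIA userspace at the host's driver version
apt-mark hold nvidia-driver-535
```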
You edit the LXC's config in Proxmox to do it, sec.
Edit: this guide would probably be better than what I did earlier this year: https://www.virtualizationhowto.com/2025/05/how-to-enable-gpu-passthrough-to-lxc-containers-in-proxmox/
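For anyone who doesn't want to click through, the entries added to the container's config (/etc/pve/lxc/<ID>.conf) look roughly like this; the device major numbers vary per system, so check ls -l /dev/nvidia* on your host first:

```
# allow the container to use the NVIDIA character devices (the majors here are examples)
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm

# bind-mount the host's device nodes into the container
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
```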