I’m planning out a Proxmox box with an OPNsense VM for an upcoming build. I want to consolidate multiple little boxes into one more capable device.
I was planning on using a dual-port NIC that I would pass through to the OPNsense VM. I like the idea of the WAN interface being piped directly to the VM rather than passing through the host and being presented as a virtual device. But that means BSD has to play nice with it, and as I understand it, BSD network drivers can be temperamental and Intel’s drivers are just better.
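For context, the passthrough bit I had in mind is the standard Proxmox recipe, something like the below (the VM ID, the PCI address, and the Intel IOMMU flag are placeholders/assumptions for my build; check lspci for the real address):

```
# /etc/default/grub on the host: enable IOMMU (Intel CPU assumed), then run update-grub and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# find the NIC's PCI address
lspci -nn | grep -i ethernet

# use the q35 machine type so PCIe passthrough works, then hand the whole
# dual-port card (both functions, no trailing .0) to the OPNsense VM (ID 101 here)
qm set 101 --machine q35
qm set 101 --hostpci0 0000:03:00,pcie=1
```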
I was looking at using a cheap dual-port Intel i226-V NIC for this, but Intel’s not in a great place right now so I’d like to consider other options. Everywhere online, people scream “only use Intel NICs for this”, but I find it ridiculous that in 2025, nobody else has managed to make stable drivers for their hardware in this use case.
What are your experiences with non-intel NICs in OPNsense?
I just attached the host NIC to OPNsense and then have a VXLAN in Proxmox to keep the VM network separate from the rest of my home network. Both the host NIC and the VXLAN virtual NIC are attached to the VM.
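In case it helps, what the Proxmox SDN VXLAN zone sets up is roughly equivalent to this hand-rolled ifupdown2 config (the VNI, peer/local addresses, and bridge name are made up for the example):

```
# /etc/network/interfaces on each Proxmox node (example values)
auto vxlan100
iface vxlan100
    vxlan-id 100
    vxlan-remoteip 10.0.0.2        # the other node(s) in the cluster
    vxlan-local-tunnelip 10.0.0.1  # this node
    mtu 1450

auto vmbr1
iface vmbr1
    bridge-ports vxlan100
    bridge-stp off
    bridge-fd 0
    mtu 1450
```

The OPNsense VM's second virtual NIC and all the guests on the VM network then attach to vmbr1.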
The OPNsense VM acts as a router between the two networks. I host all my shit on the VM network under *.internal.legit.tld and use Let's Encrypt + Traefik to issue TLS certs, which work without having to load a CA cert everywhere because I own legit.tld.
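For anyone wanting to copy the cert setup: a wildcard under a real domain means a DNS-01 challenge, so the Traefik config looks something like this (the resolver name, email, Cloudflare provider, and service names are just assumptions; swap in whatever DNS host actually serves your domain):

```yaml
# traefik.yml (static config) - ACME over DNS-01 so the wildcard can be issued
certificatesResolvers:
  letsencrypt:
    acme:
      email: you@legit.tld
      storage: /acme.json
      dnsChallenge:
        provider: cloudflare   # provider credentials go in env vars

# dynamic.yml (loaded via the file provider) - one example service
http:
  routers:
    someapp:
      rule: "Host(`someapp.internal.legit.tld`)"
      service: someapp
      tls:
        certResolver: letsencrypt
        domains:
          - main: "internal.legit.tld"
            sans:
              - "*.internal.legit.tld"
  services:
    someapp:
      loadBalancer:
        servers:
          - url: "http://10.10.10.5:8080"
```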
The only bastard was having to adjust the MTU everywhere within the VM network; that caught me out a couple of times.
Why did you choose VXLAN over regular VLANs?
Are you running EVPN-VXLAN at all?
Why the MTU change?
VXLAN encapsulation adds 50 bytes of headers to every packet, so Proxmox requires the MTU inside the VXLAN network to be 50 bytes lower than the physical interface's MTU.
From the docs:
It’s super annoying, but I couldn’t see another way of having VMs talk to each other transparently regardless of which node they’re on.
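If anyone hits the same thing, the arithmetic and a quick sanity check look like this (the interface name and addresses are examples from my setup):

```
# VXLAN overhead: 14 (outer Ethernet) + 20 (outer IPv4) + 8 (UDP) + 8 (VXLAN) = 50 bytes
# so with a 1500-byte physical MTU, everything on the VXLAN side gets 1500 - 50 = 1450

# inside a Linux guest on the VM network
ip link set dev eth0 mtu 1450

# verify: 1450 - 20 (IP) - 8 (ICMP) = 1422 bytes of payload should pass with DF set
ping -M do -s 1422 10.10.10.1
```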
Ah, ok, good to know, thanks