You’re probably already aware of this, but if you run Docker on Linux and use ufw or firewalld, it will bypass all your firewall rules, because Docker inserts its own iptables rules ahead of theirs. It doesn’t matter what your defaults are or how strict you are about opening ports; Docker has free rein to send and receive traffic on the host as it pleases.
If you are good at manipulating iptables there is a way around this, but it also affects outgoing traffic and could interfere with the bridge. Unless you’re a pointy head with a fetish for iptables this will be a world of pain, so it isn’t really a solution.
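For reference, the hook Docker’s own docs point at is the DOCKER-USER iptables chain, which gets evaluated before Docker’s forwarding rules. Something like this (the interface name and subnet are just examples for a typical LAN) drops anything that didn’t come from your local network before it reaches a container:

# eth0 and 192.168.1.0/24 are placeholders - substitute your external interface and LAN
iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP

You still have to persist that rule across reboots yourself (iptables-persistent or similar), which is part of why it feels like a pain.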
There is a tool called ufw-docker that mitigates this by manipulating iptables for you. I was happy with this as a solution and it used to work well on my rig, but for some unknown reason it’s no longer working and Docker is back to doing its own thing.
Am I missing an obvious solution here?
It seems odd for a popular tool like Docker, one that’s also used in the enterprise, not to have a pain-free way around this.


I just host everything on bare metal and use systemd to lock down/containerize things as necessary, even adding my own custom drop-ins for software that ships its own systemd service file. Systemd is way more powerful than people often realize.
When you say you’re using systemd to lock down/containerize things as necessary, can you explain what you mean?
Systemd has all sorts of options. If a service has certain sandbox settings applied, such as a private /tmp, a private /proc, restricted access to certain folders or devices, restricted available system calls or whatever, then systemd sets up a private mount namespace for that process with all your settings applied (you can peek at the sandboxed view under /proc/PID/root) and the process runs inside that, a bit like a chroot.
I’ve found it a little easier than managing a full blown container or VM, at least for the things I host for myself.
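To give a rough idea of the kind of settings I mean (the path here is made up, and not every option suits every service):

[Service]
# run as a throwaway user with no persistent identity on the host
DynamicUser=yes
# give the service its own private /tmp and /var/tmp
PrivateTmp=yes
# hide other processes' entries in /proc
ProtectProc=invisible
# mount the OS read-only, except for the paths listed below
ProtectSystem=strict
ReadWritePaths=/var/lib/myapp
# no access to physical devices
PrivateDevices=yes
# only allow the system calls a typical service needs
SystemCallFilter=@system-service
NoNewPrivileges=yes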
If a piece of software provides its own service file that isn’t as restricted as you’d like, you can use systemctl edit to add additional options of your choosing to a “drop-in” file that gets loaded and applied on top of the original unit, so you don’t have to worry about a package update overwriting any changes you make.
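For example (the unit name is made up; the override path is where systemctl puts it):

# opens an editor on a drop-in for the unit and reloads systemd when you save
systemctl edit myapp.service

# whatever you type lands in /etc/systemd/system/myapp.service.d/override.conf,
# e.g. just the extra hardening you want layered on top of the vendor unit:
[Service]
ProtectHome=yes
PrivateTmp=yes

# then restart to apply
systemctl restart myapp.service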
And you can even get ideas for settings to apply to a service to increase security with:
systemd-analyze security SERVICENAME
I don’t know what the commenter you replied to is talking about, but systemd has its own firewalling and sandboxing capabilities. They probably mean that they don’t use docker for deployment of services at all.
Here is a blogpost about systemd’s firewall capabilities: https://www.ctrl.blog/entry/systemd-application-firewall.html
Here is a blogpost about systemd’s sandboxing: https://www.redhat.com/en/blog/mastering-systemd
Here is the archwiki’s docs about drop in units: https://wiki.archlinux.org/title/Systemd#Drop-in_files
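As a rough sketch of the firewalling bit from the first link (the subnet is just an example, and it needs a cgroup v2 / eBPF capable setup), you can pin a service to specific addresses with a couple of directives in its unit or a drop-in:

[Service]
# block all IP traffic for this service...
IPAddressDeny=any
# ...except loopback and the local LAN
IPAddressAllow=localhost
IPAddressAllow=192.168.1.0/24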
I can understand why someone would like this, but this seems like a lot to learn and configure, whereas podman/docker drop most capabilities and isolate the container’s network by default.
Containers run “on bare metal” just as much as non-containerized applications.