

If you can get their servers to connect to that IP under your control, you’ve earned it
This sounds like a whole lot of convoluted bullshit just to use Plex locally and make it “look local” through VPN workarounds, when you could just roll a Jellyfin instance and do things in a more straightforward way…
can’t even do video playback on VLC.
I remember back in the day when I downloaded the first DivX file my K6-400 couldn’t play smoothly… I had been so used to thinking of that machine as a powerhouse, coming from my Pentium 60, which was the first one I ran Linux on.
There are a few ways to use the TPM to auto-decrypt on boot without a passphrase; systemd-cryptenroll is my favorite.
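A minimal sketch of that route, assuming a LUKS2 volume and a TPM2 chip; the device path and PCR choice are just examples, and depending on the distro you may also need to regenerate the initramfs so TPM2 support is included:

```
# Enroll the TPM2 chip as an additional key slot on an existing LUKS2 volume
# (device path is an example; PCR 7 binds the key to the Secure Boot state)
sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p3

# Then let systemd-cryptsetup try the TPM at boot via /etc/crypttab, e.g.:
# cryptroot  UUID=<luks-uuid>  none  tpm2-device=auto,discard
```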
Because it says to do so?
Proxmox uses Debian as the underlying OS, and for several scenarios its own docs say to install Debian first and then add the Proxmox software on top. It’s managing QEMU/KVM on a Debian-managed kernel.
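For reference, the Debian-first route looks roughly like this; the repo line, key URL, and the “bookworm” release name are assumptions for illustration, so check the Proxmox wiki page that matches your Debian release:

```
# On a plain Debian install, add the Proxmox VE repository and its signing key
echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-install-repo.list
wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
  -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

# Pull in the Proxmox VE packages on top of Debian
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi
```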
It depends on what you want to self host.
As an example, a family member self-hosted Home Assistant. They didn’t really have to know anything; that was all they were running, and they bought the canned implementation.
If you have multiple services, you may need to know how to configure nginx with virtual hosts (a server-block sketch follows below).
You may want to use podman or docker or kubernetes.
It all depends …
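For the nginx case, here is a minimal sketch of name-based virtual hosting in front of two services. The hostnames are examples, 8096 and 8123 are the default HTTP ports for Jellyfin and Home Assistant, and TLS plus websocket upgrade headers are left out to keep it short:

```
# /etc/nginx/conf.d/selfhosted.conf: one server block per hostname, same IP
server {
    listen 80;
    server_name jellyfin.example.com;
    location / {
        proxy_pass http://127.0.0.1:8096;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name ha.example.com;
    location / {
        proxy_pass http://127.0.0.1:8123;
        proxy_set_header Host $host;
    }
}
```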
Yeah, but it’s hard to separate that, and it’s easy to get a bit resentful, particularly when a project’s quality declines in large part because they got lazy, duct-taping in container registries instead of managing their project more carefully.
You’ve been downvoted, but I’ve seen a fair share of ZFS deployments that confirm your assessment.
E.g. “Don’t use ZFS if you care about performance, especially on SSD” is a fairly common refrain in response to anyone asking about how to get the best performance out of their solution.
Actually, the lower level is likely to be less efficient, because it’s oblivious to the nature of the data.
For example, a traditional RAID1 mirror immediately starts a rebuild across the entire potential data capacity of the storage on creation, without a single byte of actual data written. So you spend the equivalent of a full drive write making “don’t care” bytes redundant (the mdadm sketch after this comment illustrates it).
Similarly, for snapshotting, it can only track dirty blocks. So when you replace uninitialized data that means nothing with actual data, the snapshot layer is compelled to back up that uninitialized data, because it has no idea whether the blocks being replaced were uninitialized junk or real stuff.
There are some mechanisms, in theory and in practice, to convey a bit of context to the block layer (TRIM/discard hints, for instance), but broadly speaking, by virtue of being a mostly oblivious block layer, you have to resort to the most naive and often inefficient approaches.
That said, block capacity is cheap, and working at the block level can be done in a “dumb” way that may be easier for an implementation to get right, versus a more clever approach with a bigger surface for mistakes.
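To make the RAID1 point concrete, here’s what that looks like with mdadm (device names are examples); the freshly created, completely empty mirror immediately starts syncing every block:

```
# Create a two-disk mirror; md starts a full resync even though nothing is written yet
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
cat /proc/mdstat        # shows the resync crawling across the whole device

# --assume-clean skips the initial sync, at the cost of trusting the members already match
mdadm --create /dev/md0 --level=1 --raid-devices=2 --assume-clean /dev/sdb1 /dev/sdc1
```

A ZFS mirror, by contrast, only resilvers blocks that actually hold data, which is precisely the context a plain block layer doesn’t have.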
They will require the requester to prove they control the standard HTTP(S) ports, which isn’t possible behind carrier-grade NAT (see the certbot sketch below).
It won’t work for such users, but it also wouldn’t enable any sort of false claims over a shared IP.
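Assuming this is about Let’s Encrypt / ACME: with the HTTP-01 challenge the CA fetches a token from your server over port 80, which is exactly what a shared IP can’t offer, while DNS-01 sidesteps inbound ports entirely. A rough sketch with certbot (the domain is an example):

```
# HTTP-01: the CA requests http://example.com/.well-known/acme-challenge/<token>
# over port 80, so inbound port 80 has to reach your box (no good behind CGNAT)
certbot certonly --standalone -d example.com

# DNS-01: no inbound connection at all; the proof is published as a DNS TXT record
certbot certonly --manual --preferred-challenges dns -d example.com
```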