

Linear density could also boost throughput. Multiple actuators also exist.




Getting a DNS name is straightforward enough, and Let’s Encrypt will get you a TLS cert…
But for purely internal services that you didn’t otherwise want to publish externally, the complexity goes way up: either maintain a bunch of domain names externally to renew certificates and use private DNS to point them to the real place locally, or run your own CA and enroll it on all the client devices. Of course, I’m less concerned about passkeys internally.
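For the private-CA route, a minimal sketch with openssl (the CA name, filenames, and hostname here are all made-up placeholders):

```shell
# Create a private CA: key plus self-signed CA certificate, 10-year validity.
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout ca.key -out ca.crt -subj "/CN=Home Lab CA"

# Issue a cert for an internal host, signed by that CA.
# (Real browser clients will also want a subjectAltName extension.)
openssl req -newkey rsa:2048 -nodes -keyout nas.key -out nas.csr \
  -subj "/CN=nas.internal.lan"
openssl x509 -req -in nas.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -out nas.crt

# Verify the chain; every client device must then be made to trust ca.crt.
openssl verify -CAfile ca.crt nas.crt
```

The last line is the kicker: issuing is easy, but distributing `ca.crt` to every client is the part that doesn’t scale at home.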


Broadly speaking, the private keys can be protected.
For ssh, ssh-agent can hold the decrypted key in memory for convenience while the key on disk stays passphrase-encrypted. Beyond that, your entire filesystem should be encrypted for further offline protection.
Passkeys as used in WebAuthn are generally very specifically protected in accordance with platform requirements: for example, stored in TPM-protected storage and gated by a PIN or biometric.
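The ssh side of that looks like this (filename and passphrase are placeholders):

```shell
# Generate a key whose private half is passphrase-encrypted on disk.
ssh-keygen -t ed25519 -N 'correct horse battery staple' -f ./id_demo -C demo

# In an interactive session you would then run:
#   eval "$(ssh-agent -s)"
#   ssh-add ./id_demo        # type the passphrase once
# and the agent holds the decrypted key in memory only; the file on
# disk stays encrypted.

# Unlocking the on-disk key requires the passphrase:
ssh-keygen -y -P 'correct horse battery staple' -f ./id_demo > /dev/null
```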


For ssh, ssh keys.
For HTTPS, WebAuthn is the way to do it, though services that support it are relatively rare, particularly in self-hosting. That’s partly because browsers are very picky: WebAuthn requires a domain name with a valid cert, so browsers won’t allow it over a bare IP or after clicking through a self-signed cert warning.
Hardware RAID limits your flexibility: if any part fails, you probably have to closely match the part in replacement.
Performance-wise, there’s not much to recommend them. Once upon a time the XOR calculations weighed on the CPU enough to matter, but CPUs far outpaced storage throughput and now it’s a rounding error. They retained some performance edge through battery-backed RAM, but now you can have NVMe as a cache. For random access, a hardware controller can actually be a liability, since it collapses all the drives’ command queues into one.
The biggest advantage is simplifying booting from such storage, but that can be handled in other ways, so I don’t care about that.
While SAS is faster, the difference is moot if you have even a modest NVMe cache.
I don’t know that SAS is especially more reliable, either; I would take new SATA over second-hand SAS any day.
Hardware RAID means everything is locked together: lose a controller and you have to find a compatible controller; lose a disk and you have to match the previous disk pretty closely. JBOD would be my strong recommendation for home usage, where you need flexibility in the event of failure.


The TLS-ALPN-01 challenge requires an HTTPS server that generates a self-signed certificate on demand in response to a specific request. So we have to shut down our usual traffic forwarder and let an ACME implementation control the port for a minute or so. It’s not a long downtime, but it’s irritatingly awkward to do, and it can disrupt traffic: our site has clients in every timezone, so there’s no universal ‘3 in the morning’, and even then our service is used as part of other clients’ ‘3 in the morning’ maintenance windows… Folks can generally take a blip in the provider, but they don’t like that we generate a blip in their logs if they connect at just the wrong minute of the month…
As to why Let’s Encrypt doesn’t support going straight to 443, I don’t know. I know they designed TLS-ALPN-01 to stay purely within TLS extensions and out of the URL space of services. That had value to some, who liked being able to handle it entirely in TLS termination, which is frequently nothing but a reverse proxy and so in principle has no business messing with the payload the way HTTP-01 requires. However, this is awkward for nginx at least, as nginx doesn’t support it.
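For concreteness, with acme.sh as the ACME implementation (an assumption on my part; the domain and the forwarder service name are placeholders), the dance looks roughly like:

```shell
# Stop whatever normally terminates :443 so the ACME client can own the port.
systemctl stop haproxy

# acme.sh's standalone ALPN mode answers the TLS-ALPN-01 challenge
# itself on :443, presenting the on-demand self-signed cert.
acme.sh --issue --alpn -d www.example.com --tlsport 443

# Restore normal traffic; this gap is the monthly "blip".
systemctl start haproxy
```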


Frankly, another choice virtually forced by the broader IT.
If the broader IT either provides or brokers a service, we are not allowed to independently spend money and must go through them.
Fine, they will broker commercial certificates, so just do that, right? Well, to renew a certificate, we have to open a ticket and attach our CSR along with a “business justification”, and our department incurs a hundred-dollar internal charge just for opening the ticket. Then they let it sit for a day or two until one of their techs can get to it. Then we are likely to get feedback that, say, their policy changed to forbid EC keys and we must use RSA instead, or vice versa, because someone changed their mind. They may email an unexpected manager for confirmation in accordance with some new review process they implemented. Then, eventually, their tech manually renews it with a provider and attaches the certificate to the ticket.
It’s pretty much a loophole that we can use Let’s Encrypt: they don’t charge, and technically the restrictions only kick in when purchasing is involved. There was a security guy raising hell that some of our sites used that “insecure” Let’s Encrypt and demanding the standards change to explicitly ban it, but the bureaucracy to do that was insurmountable, so we continue.
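For reference, the CSR they sit on for days is a one-liner (hostnames and filenames are placeholders; key type per whatever the policy is this week):

```shell
# RSA CSR, if that's this week's policy:
openssl req -new -newkey rsa:2048 -nodes -keyout svc.key -out svc.csr \
  -subj "/CN=svc.example.edu"

# Or an EC P-256 CSR, if the policy flipped back:
openssl req -new -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -keyout svc-ec.key -out svc-ec.csr -subj "/CN=svc.example.edu"

# Sanity-check the CSR's signature before attaching it to the ticket.
openssl req -in svc.csr -verify -noout
```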


They in fact refuse to even do a redirect… it’s monumentally stupid and I’ve repeatedly complained, but the ‘security’ team says port 80 doing anything but dropping the packet or refusing the connection is bad…


The same screwed-up IT that doesn’t let us do HTTP-01 challenges also doesn’t let us do DNS except through some BS webform, and TXT records are not even vaguely in their world.
It sucks when you are stuck with a dumb broader IT organization…


Ours is automated, but we incur downtime on renewal because our org forbids plain HTTP, so we have to do TLS-ALPN-01. It is a short downtime. I wish Let’s Encrypt would just allow HTTP challenges over HTTPS while skipping cert validation. It’s nuts that we have to meaningfully reply over port 80…
Though I also think it’s nuts that we aren’t allowed to even send a redirect over 80…


Most people I know haven’t even bothered to buy a new TV since Dolby Vision was created. A fair number still have 1080 sets.
While some, like you, may certainly demand it, and it would be a good idea, I think it’s a fair description to help people understand that the goal is an Android TV-like experience; a lot of people are oblivious to many of the details of picture quality.
It’s just a bit over the top to be so dismissive, versus saying something like “does it support Dolby Vision? I won’t be interested until it does.”


They will require the requester to prove they control the standard HTTP(S) ports, which isn’t possible from behind NAT.
It won’t work for such users, but it also wouldn’t enable any sort of false claim over a shared IP.


If you can get their servers to connect to that IP under your control, you’ve earned it
This sounds like a whole lot of convoluted bullshit to use Plex locally and “looking local” through VPN solutions when you could just roll a Jellyfin instance and do things a more straightforward way…


can’t even do video playback on VLC.
I remember back in the day when I downloaded the first divx file my K6-400 couldn’t smoothly play… I had been so used to thinking of that as a powerhouse coming from my Pentium 60, which was the first one I ran Linux on.


You can use one of a few ways to have the TPM auto-decrypt on boot without a passphrase. systemd-cryptenroll is my favorite.
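A sketch of the systemd-cryptenroll route (the device path is a placeholder; assumes a TPM2 chip and a LUKS2 volume):

```shell
# Enroll the TPM2 as an additional LUKS key slot.
# Binding to PCR 7 ties the unlock to the Secure Boot state.
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p3

# Then let the initrd try the TPM at boot via /etc/crypttab:
#   luksroot  UUID=<volume-uuid>  none  tpm2-device=auto
```

The passphrase slot stays enrolled as a fallback, so you can still unlock manually if the PCR values change (e.g. after a firmware update).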


Because the documentation says to do so?
Proxmox uses Debian as the OS, and for several scenarios the docs say to install Debian first and just add the Proxmox software on top. It’s managing QEMU/KVM on a Debian-managed kernel.
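The Debian-first install is essentially repo setup plus one metapackage; paraphrased from the Proxmox wiki for Debian 12 “bookworm” (adjust the release name for your Debian version):

```shell
# Add the no-subscription Proxmox VE repository and its signing key.
echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-install-repo.list
wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
  -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

# Pull the Proxmox kernel and management stack in on top of Debian.
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi
```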


It depends on what you want to self host.
As an example, a family member self hosted home assistant. They didn’t have to know anything really. That was all they were doing and they bought the canned implementation.
If you have multiple services, you may need to know nginx configuration with virtual hosting.
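The nginx side of that is a couple of server blocks selected by hostname; a minimal sketch (hostnames, cert paths, and upstream ports are all made up):

```nginx
# Two self-hosted services behind one IP, picked by server_name.
server {
    listen 443 ssl;
    server_name media.example.home;
    ssl_certificate     /etc/ssl/media.crt;
    ssl_certificate_key /etc/ssl/media.key;
    location / { proxy_pass http://127.0.0.1:8096; }
}
server {
    listen 443 ssl;
    server_name ha.example.home;
    ssl_certificate     /etc/ssl/ha.crt;
    ssl_certificate_key /etc/ssl/ha.key;
    location / { proxy_pass http://127.0.0.1:8123; }
}
```

Services that use WebSockets will additionally need the `Upgrade`/`Connection` proxy headers, but this is the basic shape.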
You may want to use podman or docker or kubernetes.
It all depends …


Retaining that much detail on tentacles takes some drive space