Apt-cacher-ng doesn’t tend to expire automatically. It can be configured to keep the last version regardless. https://www.unix-ag.uni-kl.de/~bloch/acng/html/maint.html#extrakeep
One can also use a cache to hold deb and rpm files requested by the machines. (Works great when running hundreds of systems.)
I like “apt-cacher-ng”. It will do deb and rpm. https://wiki.debian.org/AptCacherNg
https://www.unix-ag.uni-kl.de/~bloch/acng/
Edit: better link
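On the client side, pointing apt at the cache is one line. A sketch, assuming the cache runs on apt-cacher-ng's default port 3142; `cache.example.lan` is a placeholder hostname:

```shell
# Tell apt to fetch packages through an apt-cacher-ng instance.
# "cache.example.lan" is a placeholder; 3142 is apt-cacher-ng's default port.
echo 'Acquire::http::Proxy "http://cache.example.lan:3142";' \
  | sudo tee /etc/apt/apt.conf.d/02proxy
```

After that, every `apt update` / `apt install` on the machine goes through the cache.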
Offline repository caches for Linux have been a thing for decades. People absolutely pass binaries to friends.
Flatpak may not be suitable, but it is only one way to get software onto a Linux system.
Pretty much every Windows machine I’ve ever owned after a certain year requires you to type in your Bitlocker key, including my first-gen Surface Go from 2018.
This is interesting. I had a work computer require this ~4 years ago, but none of the three since (personal and different employers) have.
Other options are LUKS with Tang and Clevis, or LUKS with SSH and Dropbear.
Sorry, I have no details.
Edit: Tang/Clevis are local software plus a network server that provide the keys. If the machine is stolen, it won't boot.
SSH with Dropbear lets you log in at boot to provide the keys.
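Rough sketches of both approaches on Debian. Device paths, hostnames, and the exact Dropbear config path vary by release, so treat these as assumptions to verify:

```shell
# Tang/Clevis: bind a LUKS device to a Tang server so it unlocks
# automatically when the server is reachable.
# /dev/sda3 and the URL are placeholders.
sudo apt install clevis clevis-luks clevis-initramfs
sudo clevis luks bind -d /dev/sda3 tang '{"url": "http://tang.example.lan"}'

# SSH/Dropbear: unlock over SSH from the initramfs.
sudo apt install dropbear-initramfs
# Add your public key (on older Debian releases the path is
# /etc/dropbear-initramfs/authorized_keys instead):
sudo tee -a /etc/dropbear/initramfs/authorized_keys < ~/.ssh/id_ed25519.pub
sudo update-initramfs -u
# At boot: ssh root@<machine> and run `cryptroot-unlock`.
```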
It is complicated. There are several options, each with tradeoffs in functionality, compatible software, and performance.
A simple method is to use one system as a desktop, and SSH into the others as “headless”.
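A minimal sketch of that layout; the host alias `box1`, the address, and the user are placeholders:

```shell
# Give a headless machine a short SSH alias (name/IP/user are placeholders).
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host box1
    HostName 192.168.1.10
    User me
EOF
# Then connect with: ssh box1
```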
Other options include making a K8s or HPC cluster (there are other cluster types).
Spreading a single set of communicating processes across machines requires a low-latency interconnect, something better than Ethernet, like InfiniBand. But many programs don't support that.
What features do you want?
What should my first configurations and preparations be?
Write on paper your goals. Write on paper a list of your systems and what needs to speak with what.
Then pick the most important or simplest device and get it connected the way you want.
At home, colors are whatever the purpose calls for.
You put lots of time and effort in. Now it will be discarded due to decisions of others.
Sad and/or disappointed feelings are normal.
Take care of yourself.
Look up what system vendors will sell for that CPU. If they sell 256 GiB, then you are likely good.
I don’t find I ever upgrade after the first couple of months. I would max it out, or get multi-CPU boards where I cannot afford to max it out.
Phrasing.
A Linux maintainer wants to keep quality high, and objects to adding complexity to the codebase.
Right or wrong, we want the maintainers focused on quality and maintainability.
Debian and a BSD (FreeBSD is nice) can run for years without a reboot.
Certain activities will often push a machine to crash: 3D gaming, network-drive mounts on an unstable network, and some drivers.
No distro is going to fix a true hardware problem.
DBAN. It is, or used to be, on the UBCD (Ultimate Boot CD).
For the future, remember: encryption helps when the disk is no longer operational.
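In other words, encrypt before the disk dies, because you cannot wipe a dead one. A hedged sketch with cryptsetup; `/dev/sdX` is a placeholder, and the first command destroys whatever is on it:

```shell
# WARNING: luksFormat destroys any existing data on the device.
sudo cryptsetup luksFormat /dev/sdX          # create the LUKS container
sudo cryptsetup open /dev/sdX cryptdata      # unlock as /dev/mapper/cryptdata
sudo mkfs.ext4 /dev/mapper/cryptdata         # filesystem inside the container
```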
Yes, and no.
Some settings are harder to circumvent, like partition limits, cgroups, and sysconfig. Others are more suggestion than limit, like the login shell. DNS-server and SSH-server settings only require a knowledgeable person to circumvent.
It is best to use layers. Helpfully provide working configs. Kindly provide limits to dissuade ill use. Keenly monitor for the unexpected. Strongly block on many layers the forbidden. Come down like the hammer of god on anyone and anything that still gets through.
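As one example of a "hard" layer, cgroup limits can be applied ad hoc via systemd; `stress-tool` is a placeholder for whatever command you want to cap:

```shell
# Run a command inside a transient cgroup scope with hard resource caps.
# MemoryMax and CPUQuota are standard systemd resource-control properties.
systemd-run --user --scope -p MemoryMax=512M -p CPUQuota=50% stress-tool
```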
Without the error messages, it sounds like a security mechanism on the server side.
Any chance the errors are due to too many login attempts, or bad password?
The thing is… The upgrade path degrades. Once one is 3 or more major versions behind, upgrading becomes technically challenging. (I have done this a few times…) It is better to just reinstall.
That said, a Debian system that works won’t just stop working. My Raspberry Pi 2 has no issues since the initial install.
Professionally, it is better to have a fast recovery path. PXE boot, Debian preseed, a config management system (Ansible, Puppet, etc.), and local caches, and you can be set up in 10 minutes. (After years of setting all of that up.)
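A taste of the preseed piece: the keys below are standard debian-installer preseed keys, but the values are example assumptions, not a complete working config:

```shell
# Write a minimal preseed fragment for the Debian installer.
cat >> preseed.cfg <<'EOF'
d-i debian-installer/locale string en_US.UTF-8
d-i netcfg/get_hostname string unassigned
d-i partman-auto/method string lvm
d-i passwd/root-login boolean false
EOF
```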
Nice!
I am currently setting up a FreeBSD ZFS file server. Software installs are so fast I thought they had failed. (The OS installer needs quality-of-life improvements.)
Yes, normal. It is good for you and it is good for Linux.
Distros try different things, and it is good to be exposed to many of those. It helps to discover the most functional ideas and cross pollinate.
Wait until you try non-linux FOSS OSes…
Easier to distro hop if your data is safe elsewhere.
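For example, a one-liner to push a home directory to another box before wiping; `backuphost` and the destination path are placeholders, and the flags assume rsync over SSH:

```shell
# -a archive mode, -H hard links, -A ACLs, -X extended attributes.
rsync -aHAX ~/ backuphost:/backups/myhome/
```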
The joke is electricity and Linux.
The real answer is the free hardware.
My main reliable machine is from 2008(?). It cannot do modern virtualization because its CPU lacks the needed instruction-set extensions.