

“No”
– everyone using compose to orchestrate software deployments
Yes it is, and there’s an age-old joke about Docker being used for configuration management, which doesn’t require a container system.
I kinda hate to agree with the other suggestions here, but entry-level and even dedicated NAS products are pretty expensive for something you can very easily DIY for significantly less, even with brand-new hardware.
Was in a similar boat and just ended up taking an old HP desktop and adding some cheap HDDs. I ended up playing around with proper Fedora for some LVM cache tricks and running some other services, but the common suggestion for this is SnapRAID and Nextcloud.
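For the LVM cache tricks, roughly this (just a sketch; the VG/LV/device names are made up, adjust to your layout):
pvcreate /dev/nvme0n1p1                                       # the fast SSD partition
vgextend data /dev/nvme0n1p1                                  # add it to the existing VG
lvcreate -L 100G -n cachepool data /dev/nvme0n1p1             # carve out a cache pool on the SSD
lvconvert --type cache --cachepool data/cachepool data/bulk   # attach it to the slow HDD LV
# undo later with: lvconvert --uncache data/bulk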
There are more *arr tools that aren’t aggregator automation tools than there are aggregator automation tools.
Also, it was only funny when using existing words like “sonar”, “radar”, “lidar”. Jellyseerr is dumb, and even Jackett was pushing it.
I guess it makes it somewhat easier to associate them as part of a group of software, but now we have stuff like Homarr that is entirely unrelated, but still a useful tool.
Proxmox, or even just lazy old KVM with a GUI, for anything that needs to be deployed manually in a VM (Home Assistant, a Windows VM, etc.). Otherwise you can just spin up whatever manual service you want on an LXC container or the bare-metal host with the correct security settings, using systemd and SELinux if you want to be extra careful.
Docker/Podman (the superior one lol) is just an automated deployment system in container form (like Ansible). It’s great for automated deployment without having to manually configure the installation process or worry about upgrades, changes, etc. You can even easily create your own images on the fly just for the purpose of having a single service run inside a container.
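As a sketch of what I mean by an on-the-fly image (names and paths are made up; the image just wraps one service):
cat > Containerfile <<'EOF'
FROM docker.io/library/python:3-slim
WORKDIR /srv
CMD ["python3", "-m", "http.server", "8080"]
EOF
podman build -t local/fileshare .
podman run -d --name fileshare -p 8080:8080 -v /srv/files:/srv:ro,Z local/fileshare
The :Z volume suffix handles the SELinux relabeling bit, if your host enforces it.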
The Proxmox equivalent would be using Terraform/OpenTofu to deploy VMs for the same thing. It’s possible, just not that common, because of the reduced overhead with containers and the well-supported deployment images for Docker/Podman specifically.
Generally speaking, I’ve seen Proxmox used more in lab environments where you want to emulate something like a complete network of machines, whereas Docker/Podman has become the de facto server deployment platform.
You’re just much more likely to find software with a published docker container and default docker compose script than the same thing in Terraform or even K8s/K3s.
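The usual flow with a published compose file looks something like this (stack and app names are made up):
mkdir -p ~/stacks/myapp && cd ~/stacks/myapp
$EDITOR docker-compose.yml                      # paste in the project’s published compose file
docker compose up -d                            # pull images and start in the background
docker compose pull && docker compose up -d     # later: upgrade in place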
Does Jellyfin do untranscoded video/audio?
Haven’t used it in years, but I’m finally building up my media server again. I remember it had some funky settings for hardware encoding back then, which I didn’t need because I was connecting to it via a repurposed gaming laptop that could easily handle 4K content and surround sound by itself.
Ubuntu, and the experience was crap lol.
Then I got to try Debian on a server and it was much nicer.
Then I saw that Torvalds uses Fedora, and given that he also disliked Debian and Ubuntu for their lack of end-user ease, I switched and have been happy ever since.
Seriously though, GNOME 40 really should not be the default DE. It made me think Linux UIs were years behind Windows, when it was actually the opposite with proven DEs like XFCE, KDE, GNOME 2/3, etc.
Xfce 4.20
On my way to attempt an upgrade from Xfce + Compiz to Xfce + Wayfire lol
Probably since it’s the main Red Hat upstream and they want the advantage of already-widespread usage.
Although at that point, why not openSUSE, for the same reason you mentioned.
Lol I’ve locked myself out of so many random cloud and remote instances like this that now I always make a sleep chain or a kill timer with tmux/screen.
Usually like:
./risky_dumb_script.sh ; sleep 30 ; ./undo.sh
Or
./risky_dumb_script.sh
Which starts with a 30-second sleep, plus a kill timer in a separate tmux/screen window:
sleep 300 ; kill "$PID"   # $PID being the script’s process ID
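Same idea in full for firewall changes, since that’s the classic way to lock yourself out (just a sketch; run as root, and the rule here is only an example):
iptables-save > /tmp/rules.backup                          # snapshot current rules
( sleep 120 && iptables-restore < /tmp/rules.backup ) &    # auto-revert in 2 minutes
REVERT_PID=$!
iptables -A INPUT -p tcp --dport 2222 -j ACCEPT            # the risky change
kill "$REVERT_PID"                                         # still connected? cancel the revert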
You might want to check what the actual hardware is first. You’ll probably be fine, but client 802.11 hardware can sometimes be underwhelming for hosting because it doesn’t have good stuff like beefed-up MU-MIMO.
Although that’s assuming you will have a lot of traffic going through it, so you could always just test throughput and latency with iperf to see how well it functions.
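Something like this works (the server IP is just an example):
iperf3 -s                            # on the machine acting as the AP/server
iperf3 -c 192.168.1.10 -t 30         # on a wireless client: client -> server throughput
iperf3 -c 192.168.1.10 -t 30 -R      # reverse direction: server -> client
ping 192.168.1.10                    # run alongside to watch latency under load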
It really depends on what it is, plus convenience. There are lots of morons out here running basic info sites on full beefy datacenter VMs instead of a proper cloud webhost service.
The most you’d be getting out of the cloud is reliability. Self-hosting assumes you don’t have any bottlenecks (easy enough to pass), but also ~99% uptime, which is impossible unless you’re running with site redundancy (also possible, but I doubt many people own multiple properties with their own distributed or private cloud setup).
If 95% uptime is acceptable, and you don’t live in an area with outage issues from weather, I’d say go for it. Otherwise, you can find some pretty cheap cloud solutions for basic websites. Even a cheapo VPS would probably work just fine.
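For a rough sense of what those percentages budget you per year (back-of-the-envelope, 8760 hours in a year):
echo "99%: $(echo '8760 * 0.01' | bc) hours"    # ~87.6 h, about 3.7 days of downtime
echo "95%: $(echo '8760 * 0.05' | bc) hours"    # ~438 h, about 18 days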
I have run PhotoPrism straight off mdadm RAID5 on some ye olde SAS drives with only a reduction in indexing speed (about 30K photos, which took ~2 hours to index with GPU TensorFlow).
That being said, I’m in a similar boat doing an upgrade, and here are some warnings I’ve found helpful:
I’m personally going with scheduled NVMe-to-RAID backups, because the caching just doesn’t seem worth it when I’m gonna be slamming huge media files around all day along with running VMs and other crap. For context, the 2TB NVMe drive I have is only rated for 1200 TBW. That’s probably more than enough for a file server, but on my homelab server it would just be caching constantly with whatever workload I’m throwing at it. It would still probably last a few years no issues, but SSD pricing has just been awful these past few years.
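The scheduled backup side is dead simple, something like this (paths made up):
# crontab entry, nightly at 03:00:
# 0 3 * * * rsync -aH --delete /mnt/nvme/ /mnt/raid/nvme-backup/
rsync -aH --delete /mnt/nvme/ /mnt/raid/nvme-backup/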
On a related note, PhotoPrism needs to upgrade to TensorFlow 2 so I don’t have to compile an antiquated binary for CUDA support.
I’m using XFCE with Compiz, and since I have two monitors I have a 3D octagon instead of a 3D cube desktop.
inxi saves you the time you’d otherwise spend on lsXXX commands and grepping, 90% of the time. Really useful for quick hardware and kernel-module checks.
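The ones I reach for most, for example:
inxi -Fazy    # full system overview, filtered for pasting publicly
inxi -G       # GPU, driver in use, display server
inxi -n       # NICs plus the kernel driver behind each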
Unified system for popping tabs in and out as windows like a browser (mixed support).
Session handler for tying tabs into screen or tmux (you can do this by yourself, but it’s only useful sometimes).
As long as they can keep it rolling stable, which is possible even with Arch, I can see this picking up a bit, especially with new users.
Plenty of users are sick of Windows 11.
KDE for the best fully integrated, out-of-the-box, modern DE.
XFCE + Compiz if you’re running on lower-end hardware (uses less RAM and makes better use of the GPU), or if you want even more customization than KDE, with the drawbacks of limited SVG support (and still being on X11, if that matters to you).
GNOME if you hate yourself and want to use a knockoff of ChromeOS or Mac.
Cinnamon and MATE if you want to see what GNOME was like back when it was good.
LXQt is something like the XFCE of the KDE/Qt world, but it’s now on Wayland with GPU acceleration, so it can fill the same niche as XFCE + Compiz.
Wayfire (a compositor) is basically Compiz for Wayland, if you want all the fancy effects on anything that uses Wayland.
Sometimes I wish someone would make an Arch box and come back to it years later to see the updates it has missed.
But that’s assuming an Arch box would be reliable enough to stay alive that long lol.
Always heard of 20+ year old BSD and Debian machines chugging along with no issue.
Cool for anyone not using AV1, which I assume is a big chunk of the userbase, because not everyone has good AV1 hardware acceleration lol