Anyone else just sick of trying to follow guides that cover 95% of the process, or maybe slightly miss a step and then spend hours troubleshooting setups just to get it to work?
I think I just have too much going on in my “lab”, to the point that when something breaks (and my wife and/or kids complain) it’s more of a hassle to try and remember how to fix or troubleshoot stuff. I only lightly document things cuz I feel like I can remember well enough. But then it’s a struggle to find the time to fix things, or stuff gets tested and 80% completed but never fully used, because life is busy and I don’t have loads of free time to pour into this stuff anymore. I hate giving all that data to big tech, but I also hate trying to manage 15 different containers, VMs, and other services. Some stuff is fine/easy or requires little effort, but other stuff just doesn’t seem worth it.
I miss GUIs, where I could fumble through settings to fix things. It’s easier for me to look through all that than to read a bunch of commands.
Idk, do you get lab burnout? Maybe cuz I do IT for work too it just feels like it’s never ending…
I’m currently running three hosts with a collection of around 40 containers.
One is the house host, one is the devops host, and one is the AI host.
I maintain images on the devops host and deploy them regularly. When a host or a container goes down, I’m notified through MQTT on my phone. All hosts, services, ports, certs, etc. are monitored.
no problems here. git gud I suppose?
And honestly, 40 isn’t even impressive. I run more than that on one host. Containers make life so much easier it’s unreal.
Once you understand them, I suppose it’s easier. I’ve got a mix of Win10, Linux VMs, RPis, and Docker.
Having grown up on Windows, it’s second nature now, and I use it for work too. I started on Linux around 2010 or so but kept flipping between the two. Lately I’m trying to cut the power bill, so I went RPi, and I’m trying to cut other costs too, so Docker is still relatively new to me in the last few years. Understand that I also only touch it now and then on projects, so it’s hard to dedicate enough time to get comfortable. It also didn’t help that I started on Docker Desktop, and apparently everyone hates that, which may have been part of my problem adopting it.
I probably also started with Linux seriously around that time frame. I was also a Windows admin back then. Transitioning to Linux and containers was the best thing ever. You get out of dependency hell and stop having cruft all over your filesystem. I’m extremely biased though, I work for Red Hat now. Containers and Linux are my day job.
Dang, how’d you make that transition? Are you a dev or SWE?
I just liked linux better so I learned it. That’s kind of my whole career, I want to do something so I get certified in it and start looking to get into it. I’m in consulting. I come in and help people setup OpenShift while teaching them how to use it and then move on to the next customer.
I manage all my services with systemd. Simple services like kanidm, which are just a single native executable, run bare metal under a different user. More complex setups like Immich, or anything that requires a Python venv, run from a docker compose file that gets managed by systemd. Each service has its own user and its own directory.
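For anyone curious, here’s a minimal sketch of what that systemd-wraps-compose pattern can look like. The service name, paths, and user below are made up for the example:

```ini
# /etc/systemd/system/immich.service -- example unit; paths and names are hypothetical
[Unit]
Description=Immich (docker compose stack)
Requires=docker.service
After=docker.service network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/immich
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down
TimeoutStartSec=300

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now immich` brings the stack up now and at every boot, and `systemctl status immich` tells you whether it’s running.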
It’s a mess. I’m even moving to a different field in IT because of this.
You should take notes about how you set up each app. I have a directory for each self hosted app, and I include a README.md that includes stuff like links to repos and tutorials, lists of nuances of the setup, itemized lists of things that I’d like to do with it in the future, and any shortcomings it has for my purposes. Of course I also include build scripts so I can just “make bounce” and the software starts up without me having to remember all the app-specific commands and configs.
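The “make bounce” idea can be as small as a tiny Makefile sitting next to the compose file. A sketch, with the target name taken from the comment above and everything else assumed:

```make
# Makefile -- example; assumes a docker compose stack in this directory
bounce:   ## pull latest images and restart the stack
	docker compose pull
	docker compose down
	docker compose up -d

logs:     ## tail the stack's logs
	docker compose logs -f
```

The point isn’t the Makefile itself, it’s that future you types one short command instead of remembering the app-specific incantations.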
If a tutorial gets you 95% of the way, and you manage to get the other 5% on your own, write down that info. Future you will be thankful. If not, write a section called “up next” that details where you’re running into challenges and need to make improvements.
I started a blog specifically to make me document these things in a digestible manner. I doubt anyone will ever see it, but it’s for me. It’s a historical record of my projects and the steps and problems experienced when setting them up.
I’m using 11ty so I can just write markdown notes and publish static HTML using a very simple 11ty template. That takes all the hassle out of wrangling a website and all I have to do is markdown.
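For reference, the Eleventy side really can be that small — a minimal `.eleventy.js` along these lines (the directory names are just an example):

```javascript
// .eleventy.js -- minimal Eleventy config; dir names are an example
module.exports = function (eleventyConfig) {
  // copy static assets straight through to the output
  eleventyConfig.addPassthroughCopy("assets");

  return {
    dir: {
      input: "posts",  // markdown notes live here
      output: "_site", // generated static HTML lands here
    },
  };
};
```

After that, `npx @11ty/eleventy` builds the markdown into plain HTML you can host anywhere that serves static files.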
If someone stumbles across it in the slop ridden searchscape, I hope it helps them, but I know it will help me and that’s the goal.
Would love to see the blog
I found a git repo with docker compose and the config files works well enough as long as you are willing to maintain a backup of the volumes and an .env file on KeePass (also backed up) for anything that might not be OK on a repo (even if private) like passwords and keys.
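Roughly, that pattern looks like this: a compose file that’s safe to commit, with the secrets pulled from an untracked `.env` (service and variable names below are made up):

```yaml
# docker-compose.yml -- safe to commit; secrets live in .env (gitignored, backed up in KeePass)
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}  # substituted from .env at `docker compose up`
    volumes:
      - db_data:/var/lib/postgresql/data # back this volume up separately

volumes:
  db_data:
```

The `.env` itself then only holds lines like `DB_PASSWORD=...`, so the repo never contains anything sensitive.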
As an example, I was setting up SnapCast on a Debian LXC. It is supposed to stream whatever goes into a named pipe in the /tmp directory. However, recent versions of Debian do NOT allow other processes to write to named pipes in /tmp.
It took just a little searching to find this out after quite a bit of fussing about changing permissions and sudoing to try to funnel random noise into this named pipe. After that, a bit of time to find the config files and change it to someplace that would work.
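In case it saves someone the same fussing: the fix amounts to pointing the pipe somewhere outside /tmp in the SnapCast server config. The exact stream-source syntax has shifted between SnapCast versions, so treat this as a sketch:

```ini
# /etc/snapserver.conf -- move the FIFO out of /tmp (path is an example)
[stream]
source = pipe:///var/lib/snapserver/snapfifo?name=default&mode=create
```

The /tmp behavior is most likely systemd’s PrivateTmp giving the service its own private /tmp, so other processes never see the same pipe even though the path looks identical.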
Setting up the RPi clients with a PirateAudio DAC and SnapCast client also took some fiddling. Once I had it figured out on the first one, I could use the history stack to follow the same steps on the second and third clients. None of this stuff was documented anywhere, even though I would think that a top use of an RPi Zero with that DAC would be for SnapCast.
The point is that it seems like every single service has these little undocumented quirks that you just have to figure out for yourself. I have 35 years of experience as an “IT Guy”, although mostly as a programmer. But I remember working HP-UX 9.0 systems, so I’ve been doing this for a while.
I really don’t know how people without a similar level of experience can even begin to cope.
I definitely feel the lab burnout, but I feel like Docker is kind of the solution for me… I know how Docker works, it’s pretty much set and forget, and ideally it’s totally reproducible. Docker Compose files are pretty much self-documenting.
Random GUI apps end up being waaaay harder to maintain because I have to remember “how do I get to the settings? How did I have this configured? What port was this even on? How do I back up these settings?” rather than keeping a couple of text config files in a git repo. It’s also much easier to revert to a working version if I try to update a docker container and fail, or get tired of trying to fix it.
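Concretely, the rollback is just git plus a recreate (assuming the compose files live in a repo; the paths here are examples):

```shell
# roll a misbehaving service back to its last working config
cd ~/stacks/jellyfin
git log --oneline -- docker-compose.yml     # find the last good revision
git checkout <good-commit> -- docker-compose.yml
docker compose up -d                        # recreate the container from the old config
```

Try doing that with settings buried in a GUI.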
Yeah that’s part of having a hobby. If you do it for work too I can understand getting sick of it. But, no one is making you do it. If you don’t enjoy it, don’t do it.
While this might be a healthy outlook, these days more and more people do not feel like self hosting is a hobby or an option, but a necessity for a free and fair society.
This. I self host some things because it’s just fun, other things because of censorship, other things because of privacy. I probably wouldn’t have Nextcloud if Google wasn’t collecting so much data. Probably wouldn’t be self-hosting my blog if content weren’t as censored everywhere. I probably would still be self-hosting a Minecraft server with a small website for said server that the members of the server can contribute to when they find/do something cool.
Nextcloud is on my list lol, but I think I need to run a separate box for it vs virtualizing. It would be easier/cleaner and more reliable.
Yeah, it’s definitely one of those that’s also just… Useful. I usually don’t go for software that’s trying to do too much, but for some reason I don’t mind having nextcloud as 10 different things xD Sync files, sync podcast listens, sync my RSS feeds… A lot of things all in one
This sooo much. I’m not a tech person but I’m trying to learn because the giant corporations are clearly evil. I just want to have a modicum of privacy in my corner of the world so here I am trying to figure out how to self host some basic services.
Trying to get peertube installed just to be able to organize my video library was pain.
Use portainer for managing docker containers. I prefer a GUI as well and portainer makes the whole process much more comfortable for me.
Why did I never think of that?! That would make sense lol. Thank you!
No problem. I have been using it for a while and I really like it. There’s nothing stopping you from doing it the old fashioned way if you find you don’t like portainer but once you familiarize yourself with it I think you’ll be hooked on the concept.
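For reference, getting Portainer CE going is only a couple of commands. This mirrors the commonly documented install, but double-check the image tag and ports against their current docs before running it:

```shell
# persistent volume for Portainer's own data
docker volume create portainer_data

# run Portainer CE; the web GUI ends up on https://localhost:9443
docker run -d --name portainer --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```

From there it’s all point-and-click: stacks, container logs, restarts, the lot.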
+1 for Portainer. There are other such options, maybe even better, but I can drive the Portainer bus.
Just know that sometimes their buggy frontend loads the analytics code even if you’ve opted out. There’s an ages-old issue about this on their GitHub repo, closed because they don’t care.
It’s matomo analytics, so not as bad as some big tech, but still.
Just 15 containers? lol
do you get lab burnout
Not really. I have everything set the way I want it and it’s stable. On occasion, I’ll see a container that catches my fancy, so I’ll spin it up on a test server, dick around with it, and monitor it before I ever decide to put it on my production server. On occasion I’ll have to fix, or adjust something. Most of the time I’m just enjoying it. I wouldn’t say I was running anything super complex tho.
As far as time, I’ve got you beat there most likely. Used to be lickity-split, but then you get old, things slow down. LOL Also, there is only one user…me. I realize you have family, but my hard and fast rule is: Multiple users cause issues, so I don’t share. I’d say, go spend your time with the family. That’s the most important.
I’m with you on the incomplete guides. There always seems to be that one ‘secret ingredient’ that just didn’t get documented. And to the devs of the open source software: me love you long time, but please include a screenshot.
My biggest problem is that every docker image thinks it’s a unique snowflake. How would anyone else possibly be using such a unique port number as 80?
I know I can change it, believe me, I know I have to change it, but I wish guides would acknowledge it and emphasize choosing a unique port.
Most put it on port 80 with the perfectly valid assumption that the user is sticking a reverse proxy in front of it. The container should expose 80, not publish 80 to the host.
There are no valid assumptions for port 80 imo. Unless your software is literally a pure http server, you should assume something else has already bound to port 80.
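As an aside, a setup script can cheaply check that assumption before defaulting to 80. A small sketch in Python (the fallback ports are arbitrary examples):

```python
import socket

def port_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is currently bound to host:port over TCP."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True   # bind succeeded, so the port was free
        except OSError:
            return False  # something else already holds it

def pick_port(preferred: int, fallbacks: tuple = (8080, 8081, 8082)) -> int:
    """Use the preferred port if free, otherwise the first free fallback."""
    for port in (preferred, *fallbacks):
        if port_free(port):
            return port
    raise RuntimeError("no free port found")
```

A guide could then say “the app listens on `pick_port(80)`” instead of silently squatting on 80 and leaving you to discover the conflict later.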
Why do I have vague memories of Skype wanting to use port 80 for something and me having issues with that some 15 years ago?
Edit: I just realized this might be for containerized applications… I’m still used to running it on bare metal. Still though… 80 seems sacrilege.
Why expose any ports at all? Just use a reverse proxy, expose that port, and all the rest happens internally.
Still gotta configure ports for the reverse proxy to access.
Reverse proxy still goes over a port
Containers are meant to be used with docker networks, which makes this a non-issue. Most of the time you want your services to expose 80/443, since those are the default ports your reverse proxy is going to call.
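Concretely, the pattern is: only the proxy publishes host ports, and everything else just exposes its internal port on the compose network. The images and names below are placeholders:

```yaml
# docker-compose.yml -- only the reverse proxy touches host ports
services:
  proxy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"   # the only ports published on the host
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro

  app:
    image: ghcr.io/example/app:latest   # placeholder image
    expose:
      - "80"        # reachable as app:80 from the proxy, not from the host
```

Ten apps can all listen on their internal 80 this way and never collide, because each only exists on the container network.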
I reject a lot of apps whose docker compose files require a database and caching infrastructure etc. All I need is the process, and they ought to use SQLite by default, because my needs are not going to exceed its capabilities. A lot of these self-hosted apps are overbuilt and come with missing or poor defaults, causing a lot of extra work to deploy them.
Some apps really go overboard, I tried out a bookmark collection app called Linkwarden some time ago and it needed 3 docker containers and 800MB RAM
Found an alternative solution to recommend?
No, but I’d like to hear it if anyone else finds one
What is your setup? I have TrueNAS, and there I use the apps that are easy to install (and the catalog is not small) and maintain. Basically, from time to time I just come and update (one button click). I have networking separate, and I had issues with Tailscale for some time, but there I had only 4 services in total, all Docker containers, and all except Tailscale straightforward and easy to update. Now I’ve even moved those: one as a custom app to TrueNAS and the rest to Proxmox LXCs, and that solved my Tailscale issue as well. And I am having a good time. But my rule of thumb: before I install anything, I ask myself if I REALLY need it, because otherwise I would end up with like a jillion services that are cool but not really that useful or practical.
I think what I would recommend to you: find a platform like TrueNAS, where lots of things are prepared for you, and don’t bother too much with the custom stuff if you don’t enjoy it. I can also recommend having a test rig or VM so you can always try something first and see if it’s easy to install and stable to use. There were occasions when I was trying stuff and it was just bothersome; I had to hack things together, and I was glad in the end I didn’t “pollute” my main server with it.
Sounds like you haven’t taken the time to properly design your environment.
Lots of home gamers just throw stuff together and just “hack things till they work”.
You need to step back and organize your shit. Develop a pattern, automate things, use source control, etc. Don’t just blindly follow the weirdly-opinionated setup instructions. Make it fit your standard.
Also, on top of that, find time to keep it up to date. If you leave it to rot, things will get harder to maintain.
I sit down once a week and go over all the updates needed, both the docker hosts and all the images they run.
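My routine is basically a script like this, run by hand once a week. The stack directory layout is an example, and it obviously needs the Docker daemon running:

```shell
#!/bin/sh
# weekly-updates.sh -- host packages first, then every compose stack
set -e

sudo apt-get update && sudo apt-get -y upgrade

for stack in /opt/stacks/*/; do
    echo "updating $stack"
    (cd "$stack" && docker compose pull && docker compose up -d)
done

docker image prune -f   # drop superseded images
```

Keeping it to one scheduled sit-down is what stops “updates” from turning into a constant background chore.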
This. I definitely need to take the time to organize. A few months ago, I set up a new 4U Rosewill case with 24 hot-swap bays. Expanded my storage quite a bit, but I need to finish moving some services too. I went from a big outdated SMC server to reusing an old gaming mobo, since it’s an i7 at 95W vs 125W×2 lol.
It took a week just to move all my Plex data cuz that Supermicro was only 1GbE.
only 1gbE
What needs more than 1gbe? Are you streaming 8k?
Sounds like you are your own worst enemy. Take a step back and think about how many of these projects are worth completing and which are just for fun and draw a line.
And automate. There are tools to help with this.
What needs more than 1gbe? Are you streaming 8k?
I think they meant it was a bottleneck while moving to the new hardware.
Yeah, transferring 80TB took what felt like an eternity. My Plex has a 2.5GbE and my switch is 10GbE but my SFP+ NIC in the storage wasn’t playing well…
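For a sense of scale, here’s the back-of-the-envelope math, assuming ~112 MB/s of sustained real-world throughput on 1GbE and ~9 Gbit/s on a clean 10GbE link:

```python
# rough transfer-time estimate; the throughput numbers are assumptions
TB = 10**12

def transfer_days(size_bytes: float, rate_bytes_per_s: float) -> float:
    """Days needed to move size_bytes at a sustained transfer rate."""
    return size_bytes / rate_bytes_per_s / 86_400  # 86,400 seconds per day

days_1gbe = transfer_days(80 * TB, 112e6)   # roughly 8.3 days
days_10gbe = transfer_days(80 * TB, 1.1e9)  # under a day
```

So “a week” for 80TB over 1GbE is pretty much exactly what the math predicts, and a working 10GbE path would have cut it to well under a day.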