

Fair criticism. I just don’t have a lot of free time. I can invest in Element, but I wanted to crowdsource information to see if it was worth it or if there was an easier way. It doesn’t get much easier than Docker.


Out of curiosity, what makes it better?
A quick search says it’s a package manager for Kubernetes. Besides Plex, everything I self-host is just for me. Would you say Helm/Kubernetes is worth looking into for a hobbyist who doesn’t work in the tech field?


Linux has gotten really good over the last ~15 years. It used to be that if you didn’t have the most up-to-date packages, you would be missing game-changing features. Now, the distribution you use almost doesn’t matter because even the older packages are good enough for most things.
To answer your question: if it weren’t for gaming, no, I wouldn’t mind using Debian as my daily driver. If I ever needed a new package for whatever reason, I would use flatpaks, snaps, docker, or Distrobox to get it.


Personally, yeah, it’s the old packages. I want to play games on my desktop and have the newest DE features. An Arch-based distro seems like it’ll keep up better than Debian.
For my servers though, I only use Debian.


I’m assuming you mean LXC? It’s doable, but without some sort of orchestration tool like Nix or Ansible, I imagine ongoing maintenance or migrations would be kind of a headache.
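Just to illustrate the kind of thing I mean, a bare-bones Ansible play that keeps everything patched would look roughly like this (the group name is made up, and it assumes Debian-based guests reachable over SSH):
- hosts: lxc_hosts
  become: true
  tasks:
    - name: keep packages up to date
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist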


You might come across docker run commands in tutorials. Ignore those. Just focus on learning docker compose. With docker compose, the run command just goes into a yaml file, so it’s easier to read and understand what’s going on (there’s a rough example after the command list below). Don’t forget to add your user to the docker group so you don’t have to type sudo for every command.
Commands you’ll use often:
docker compose up - creates and starts the containers, attached to your terminal
docker compose up -d - the same, but detached so it runs in the background
docker compose down - stops and removes the containers
docker compose pull - pulls newer versions of the images
docker image list - lists all images
docker ps - lists running containers
docker image prune -a - deletes images not being used by containers to free up space
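For example, here’s a rough sketch of what a compose file looks like (nginx is just a stand-in image, use whatever you’re actually hosting); it’s the equivalent of docker run -d -p 8080:80 --restart unless-stopped nginx:
services:
  web:
    image: nginx
    ports:
      - "8080:80"
    restart: unless-stopped
Save that as docker-compose.yml in its own folder and run docker compose up -d from there.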


he is still completely new to this so I want things to work out perfectly for his first experience.
Of the two options you gave, I’d go with Mint. If your friend runs into a problem, it would probably be easier to diagnose the issue since it’s just Ubuntu/Debian under the hood.
Once they get used to it, they can try other gaming specific distros if they want to try to get a little more performance.


Should I just learn how to use Docker?
Yes. I put off learning it for so long and now I can’t imagine self-hosting anything without it. I think all you have to do is give the machine’s NIC a static IP from your router and then specify that IP and port in the docker-compose.yml file.
Ex (the mapping format is IP-address:external-port:container-port):
services:
  app-name:
    ports:
      - "192.168.1.42:3000:3000"
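Once it’s up, you can sanity-check that the bind took effect (the port here just matches the example above):
docker compose ps      # the PORTS column should show 192.168.1.42:3000->3000/tcp
ss -tlnp | grep 3000   # confirms the host is listening on that address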


thanks, I’ll look into it. Much appreciated


I’ve never looked into adding GitHub releases to FreshRSS. Any tips for getting that set up? Is it pretty straightforward?


TIL. Thanks for the information


I’m currently not in a situation where swap is being used, so I think my system is doing fine right now. I’m not against swap; I get that it’s better to have it than not, but my intention was to figure out how close my system is getting to using swap. If it went from not using swap at all to using it constantly, I’d probably want to upgrade my RAM, right? If nothing else, just to avoid system slowdowns and unneeded wear on my SSD.
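For reference, the stock Linux tools can show that directly (nothing here is specific to my setup):
free -h          # the Swap row shows how much swap is in use right now
swapon --show    # lists swap devices and how much of each is used
vmstat 5         # watch the si/so columns; sustained nonzero values mean active swapping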


From what I can tell, my system isn’t currently using swap at all but it does have 8GB of available swap if needed.
To make sure I’m following what you are saying: if I upgraded my system to 64GB and changed nothing else, and let’s assume ZFS didn’t try caching more stuff, would there still be a potential for my system to use swap just because it wanted to, even if it wasn’t memory-constrained?
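(From what I’ve read, the knob that controls that behavior is vm.swappiness: higher values make the kernel more willing to swap out idle pages even when it isn’t strictly out of memory. Easy to check:)
cat /proc/sys/vm/swappiness   # current value; the default is usually 60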


Came across some more info that you might find interesting. If true, htop is ignoring the cache used by ZFS but accounting for everything else.


Assuming the info in this link is correct, ZFS is using ~20GB for caching, which makes htop’s ~8GB of in-use memory make sense when compared with the results from cat /proc/meminfo. This is great news.
My results after running cat /proc/spl/kstat/zfs/arcstats:
c 4 19268150979
c_min 4 1026222848
c_max 4 31765389312
size 4 19251112856
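(The values are in bytes; from what I’ve read, size is the current ARC size and c is its target, so something like this prints the ARC size in GiB from the standard OpenZFS path:)
awk '/^size / {printf "%.1f GiB\n", $3/1024/1024/1024}' /proc/spl/kstat/zfs/arcstats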


Thank you for the detailed explanation


You’re an angel. I don’t know what the fuck htop is doing showing 8GB in use. Based on another user’s comment in this thread, htop is showing a misleading number. For anyone else who comes across this, this is what I have. It makes the situation seem a little more grim: I have ~2GB free, ~28GB in use, and of that ~28GB only ~3GB is cache that can be closed. For reference, I’m using ZFS and roughly 27 Docker containers. It doesn’t seem like there’s much room left to self-host future services.
MemTotal: 30.5838 GB
MemFree: 1.85291 GB
MemAvailable: 4.63831 GB
Buffers: 0.00760269 GB
Cached: 3.05407 GB
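(Those are /proc/meminfo values converted from kB to GiB; for anyone who wants to reproduce the conversion, something like this does it:)
awk '/^(MemTotal|MemFree|MemAvailable|Buffers|Cached):/ {printf "%-14s %7.2f GiB\n", $1, $2/1024/1024}' /proc/meminfo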


That’s pretty much where I’m at on this. As far as I’m concerned, if my system touches SWAP at all, it’s run out of memory. At this point, I’m hoping to figure out what percent of the memory in use is unimportant cache that can be closed vs important files that processes need to function.


Is there a good way to tell what percent of the RAM in use is less-important caching of files that could be closed without any adverse effects vs files that, if closed, would stop the whole app from functioning?
Basically, I’m hoping htop isn’t broken and is reporting that I have 8GB of important, showstopping files open and that everything else is cache that is unimportant/closable without the need to touch SWAP.
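In case it helps anyone else reading, one approximation I’ve come across is that MemAvailable in /proc/meminfo is meant to be “free plus everything the kernel could reclaim without swapping,” so comparing it to MemFree gives a rough split. One caveat I’ve read is that the ZFS ARC generally isn’t counted in MemAvailable even though it also shrinks under memory pressure:
grep -E '^(MemTotal|MemFree|MemAvailable):' /proc/meminfo
# MemAvailable - MemFree  ~= cache/buffers/slab the kernel could reclaim without swapping
# MemTotal - MemAvailable ~= memory that's actually pinned by processes and the kernel
# (plus, per the arcstats above, most of the ZFS ARC can also be given back under pressure)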


I’ll look into it, thanks.
I’m still in the information gathering phase. Do you know if the Element client works with the continuwuity server? Is it as easy as entering the domain, user, and password in the client?