

Little bit of everything!
Avid Swiftie (come join us at !taylorswift@poptalk.scrubbles.tech )
Gaming (Mass Effect, Witcher, and too much Satisfactory)
Sci-fi
I live for 90s TV sitcoms
I did synapse about a year ago but kind of wish I had done conduit, it seems so much simpler. That being said, all of the bridges and add-ons assume you’re running synapse
I found proxmox and docker to be fairly incompatible, and went through many iterations of different things to make it work well. Docker in VMs, Docker in LXC, Docker on the host (which felt redundant as hell). Proxmox is an amazing hypervisor, but then I realized I didn’t really need a hypervisor since I was mostly running containers.
My recommendations:
- No need for VMs: Just run debian and run containers on it.
- Some VMs, mostly containers, 1 host: Run proxmox, and create a VM in proxmox for your container workloads.
- Some VMs, mostly containers, >1 host, easy mode: Same as above, but make one host debian and the other one proxmox.
- Some VMs, mostly containers, >1 host, hard mode but worth it after 2 years: Use kubernetes; I use k3s (a minimal config sketch is below). Some nodes are just debian with k3s on them, others are running in VMs on proxmox using the extra compute available. This has a massive learning curve though; it took me well over a year to finally get it to a state I like, but I'll never go back.
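If you do go the k3s route, a server node can be driven by a small config file. A minimal sketch, assuming the config file path from the k3s docs; the hostname and label values here are purely illustrative:

```yaml
# /etc/rancher/k3s/config.yaml - illustrative values only
write-kubeconfig-mode: "0644"   # make the kubeconfig readable without sudo
tls-san:
  - k3s.home.lan                # extra hostname/IP you plan to reach the API on
node-label:
  - "role=worker"               # arbitrary label, handy for scheduling later
```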
The only way to be a truly moral person on this planet is to not participate in society and go completely 100% off grid. Even then, The Good Place did a great episode on that, and they're right: you're not really living then either. It's all just about what you're willing to put up with.
It’s him, he’s the chosen one. The one who gets to use the software without agreeing to anything. The one who will bring peace
Only pain will you find down that path. I did that for years, but it’s a pain. You have to disable so many security features, and I found it to be incredibly brittle. I found myself fearing all proxmox upgrades because each time it would break the lxcs. I wish you luck
Intel or no Intel, it’ll be fine. Personally though for your primary router, I recommend you get 10G if you aren’t doing that already. Even if you won’t use it yet, get it now and thank yourself later
Wait, didn't they kill off the self-hosted version a few years ago? Now it's back?
Try it out, just make sure their software isn’t so locked down that there’s no way to send files in remotely
oh yeah… they’re “white labeling” their own brand of drives and if you use anything else it’ll bitch at you. I think for now it still lets you, but their OS definitely shows you’re not using a “proper” drive. May want to keep an eye on that.
I think you already know, AIOs are the go-to, just make sure you can connect in. I've done this with Synology, works fine; I used sftp to sync things. If you want cheaper you can look into a standard linux host and mergerfs/snapraid, but it's going to be a much higher learning curve, and a much higher risk of failure. If you're just getting started, don't overthink it. It's good to plan for tomorrow, but think about how much data everyone has and how much you'll use today, and then double that. That'll be a good baseline.
If you're US based, a trick: buy the WD Elements drives from Best Buy (they go on sale regularly, pretty much whenever there is a holiday sale) and "shuck" them (plenty of videos on YouTube for how to do this). You'll probably pay around half of what the equivalent bare drives cost.
From my point of view, you have two separate things.
First, you have a “business”/user case, you need a way for people to sync data with you. For this, it’s a solved problem. Use Nextcloud/Owncloud/something with an app and a decent user experience for this. Whatever you like. On your primary “home” location, set this up, and have people start syncing data to you.
Second is the underlying storage. For this, again it’s up to you, but personally I’d have a large NAS at home (encrypted), which is sync’d either in realtime or nightly (using something like cron/rclone) to the other locations (also encrypted, so not even they can see it).
Their portal to this data storage is the nice user experience like Nextcloud. They don’t have to worry about how data is synced or managed. Nextcloud also supports quotas so you can specify how much they all get (so you don’t have to deal with partitioning).
This approach will be much less headache for you. I think I understand what you’re asking, where your original thought was just a dump of storage that is separate, but I think this is a better approach - both in terms of your sanity maintaining it and also their own usability.
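For the user-facing piece, a minimal Docker Compose sketch of Nextcloud, assuming Docker on the "home" box; the passwords, ports, and volume paths are placeholders:

```yaml
services:
  db:
    image: mariadb:11
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: changeme
    volumes:
      - ./db:/var/lib/mysql
  nextcloud:
    image: nextcloud:stable
    depends_on:
      - db
    ports:
      - "8080:80"
    environment:
      MYSQL_HOST: db
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: changeme
    volumes:
      - ./nextcloud:/var/www/html
      - /mnt/nas/nextcloud-data:/var/www/html/data   # point this at the big (encrypted) NAS pool
```

Per-user quotas are then set in the Nextcloud admin UI, so the underlying sync to the other locations stays invisible to everyone else.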
Sorry, I usually work directly with ffmpeg, which is how I had the hunch: those presets are h264 presets that get passed in. But does it matter? You said you were only curious; I think that's why, so now you know.
That honestly sounds a lot nicer, I’ll have to check it out, thanks!
IIRC, presets are machine dependent, or at least they are optimized for the hardware. So if you have a weak machine, veryfast will use even worse encoding because you’re simply saying “I just want it done fast”, while on a more robust machine it may take more liberties. To match it completely I think you need to disregard the presets and set everything yourself, like CRF and everything.
Nextcloud remains one of the buggiest, most half-finished projects I've seen in the homelab space. I swear most of my time using nextcloud has been spent trying to repair nextcloud.
Glad to be of help. It is the right decision, I have no regrets about migrating, but it is a long process. Just getting my first few services running took months, just so you are aware of that commitment, but it's worth it.
I’ll post more later (reply here to remind me), but I have your exact setup. It’s a great way to learn k8s and yes, it’s going to be an uphill battle for learning - but the payoff is worth it. Both for your professional career and your homelab. It’s the big leagues.
For your questions, no to all of them. Once you learn some of it the rest kinda falls together.
I'm going into a meeting, but I'll post here with how I do it later. In the meantime, pick one and only one container you want to get started with. Stateless is easier to start with than something that needs volumes. Piece by piece, brick by brick, you will add more to your knowledge and understanding. Don't try to take it all on day one. First just get a container running. Then access via a port and http. Then proxy. Then certs. Piece by piece, brick by brick. Take small victories; if you try to say "tomorrow everything will be on k8s" you're setting yourself up for anger and frustration.
@sunoc@sh.itjust.works Edit: To help out, I would do these things in these steps. Note that the steps are not equal in length, and they are not complete - they're meant to help you get started without burning out on your journey. I recommend taking each one, and when you get it working, rather than jumping to the next one, take a break, have a drink, and celebrate that you got it up and running.
Step 1: Start documenting everything you do. The great thing about kubernetes is that you can restart from scratch if you have written everything down. I would start a new git repository with a README that contains every command you ran, what it did, and why you did it. Assume that you will be tearing down your cluster and rebuilding it - in fact I would even recommend that. Treat this first cluster as your testing grounds, and then you won't feel crappy spinning up temporary resources. Then, you can rebuild it and know that you did a great job - and you'll feel confident in rebuilding in case of hardware failure.
Step 2: Get the sample nginx pod up and running with a service and deployment, simply so you can curl the IP of your main node and port, and see the response. This I assume you have played with already.
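If it helps, here's roughly what that first Deployment and Service can look like. A minimal sketch; the nginx image tag and the NodePort number are arbitrary choices:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080   # curl http://<node-ip>:30080 once it's up
```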
Step 3: Point DNS to your main node, and get the nginx pod with http://your.dns.tld:PORT. This should be the same as anything you've done with docker before.
Step 4: Convert the yaml to a helm chart as others have said, but don't worry about "templating" yet; get comfortable with helm install, helm upgrade -i, and helm uninstall. Understand what each one does and how they operate. Then go back and template, upgrading after each change to understand how it works. It's pretty standard to template the image and tag, for example, so it's easy to upgrade them. There's a million examples online, but don't go overboard, just do the basics. My template values.yaml usually looks like:
```yaml
<<servicename>>:
  name: <<servicename>>
  image:
    repository: path/to/image
    tag: v1.1.1
  network:
    port: 8888
```
Just keep it simple for now.
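For reference, a hypothetical templates/deployment.yaml that consumes values like the ones above - essentially the earlier nginx Deployment with the name, image, tag, and port templated. The nginx top-level key is an assumption for the example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.nginx.name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Values.nginx.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.nginx.name }}
    spec:
      containers:
        - name: {{ .Values.nginx.name }}
          image: "{{ .Values.nginx.image.repository }}:{{ .Values.nginx.image.tag }}"
          ports:
            - containerPort: {{ .Values.nginx.network.port }}
```

Then bumping the tag in values.yaml and running helm upgrade -i is all it takes to roll a new version.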
Step 5: Set up a proxy/gateway into the cluster. I use istio. I can go into more details why later, but I like that I can create a "VirtualService" for $appname.my.custom.tld and it will point to it. Create one for nginx.your.tld and be able to curl http://nginx.your.tld and see that it routes properly to your sample nginx service. Congrats, this is a huge one.
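For what it's worth, the Istio side of that is one small object. A sketch; it assumes you already have an Istio Gateway listening on port 80, here named gateway in istio-system:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx
spec:
  hosts:
    - nginx.your.tld
  gateways:
    - istio-system/gateway      # your existing Gateway resource
  http:
    - route:
        - destination:
            host: nginx          # the Service name from the earlier step
            port:
              number: 80
```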
Step 6: Set up cert-manager and learn about the Certificate types in k8s. You'll need to use the proxy in the previous step to route the /.well-known endpoints on the http port from the open web to cert-manager; for Istio this was another virtual service on the gateway - I assume Traefik would have something similar to "route all traffic on port 80 that starts with /.well-known to this service". Then, in your nginx helm chart, add in a Certificate type for your nginx endpoint, nginx.your.tld, and wait for it to be successfully granted. With Istio, this is all I need now to finally curl https://nginx.your.tld! At this point you have routing, ports, and https set up. Have 2 drinks after this one. You can officially deploy any stateless service at this point.
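The Certificate itself is small once the /.well-known routing works. A sketch; it assumes a ClusterIssuer named letsencrypt already exists in the cluster:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: nginx-your-tld
spec:
  secretName: nginx-your-tld-cert   # where cert-manager stores the issued cert
  dnsNames:
    - nginx.your.tld
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
```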
Step 7: Now, the big one: stateful. Longhorn is a bear; there are a thousand caveats to it.
The first thing to decide is where your backups are going to go. This can be a simple NFS/SMB share on a local server, it can be an s3 endpoint, but seriously, this comes first. Backups are critical with longhorn. You will fuck up Longhorn - multiple times. Losing these backups means losing all configs to all of your pods, so deciding on your stable backup location is step one.
Now, read the Longhorn install guide: https://longhorn.io/docs/1.9.0/deploy/install/. Do not skip reading the install guide. There are incredibly important things in there that I regretted glossing over that would have saved me. (Like setting up backups first).
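If you install Longhorn via its Helm chart, the backup target can be set right in the values so it exists from day one. A hedged sketch; double-check the exact keys against the install guide above, and the server address and paths here are placeholders:

```yaml
defaultSettings:
  backupTarget: nfs://192.168.1.10:/volume1/longhorn-backups   # or an s3://bucket@region/ target
  # backupTargetCredentialSecret: longhorn-s3-secret           # only needed for s3 targets
```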
The way I use longhorn is to create a PV in longhorn, and then the PVC (you can look up what both of these are later). Then I use Helm to set what the PVC name is to attach it to my pod. Try and do this with another sample pod. You are still not ready to move production things over yet, so just attach it to nginx. exec into it, write some data into the pvc. Helm uninstall. See what happens in longhorn. Helm install. Does your PVC reattach? Exec in, is your data still there? Learn how it works. I fully expect you to ping me with questions at this point, don't worry, I'll be here.
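A test PVC for that experiment can be as simple as the sketch below. Note this one provisions the volume dynamically through Longhorn's storage class, rather than hand-creating the PV in the Longhorn UI first as described above; the name and size are arbitrary:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
```

Mount it in the nginx chart, write a file into it, helm uninstall, and see whether the volume (and the file) survive the reinstall.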
Longhorn will take time in learning, give yourself grace. Also after you feel comfortable with it, you’ll need to start moving data from your old docker setup to Longhorn, and that too will be a process. You’ll get there though. Just start with some of your lower priority projects, and migrate them one by one.
After all of this, there is still more. You can automount smb/nfs shares directly into pods for media or anything. You can pass in GPUs - or I even pass in some USB devices. You can encrypt your longhorn things, you can manage secrets with your favorite secret manager. There’s thousands of things you’ll be able to do. I wish you luck, and feel free to ping me here or on Matrix (@scrubbles@halflings.chat) if you ever need an ear. Good luck!
Yeah with Amazon’s sheer size this has definitely been done before, curious what limits op is going to hit. My guess is they have a quota for submissions, and they’ll be banned from submitting tickets.
I mean, go for it? They literally can't do anything; you might as well complain that fire is hot, though. It's part of being on the Internet. They provide safety gloves, via VPCs and firewalls, but if you choose not to use them then… yeah, I mean you're probably gonna get burned.
I like that it says they are different from projects like TubeArchivist, says "read the readme for why", and then the readme doesn't actually have an explanation.
I’ve been using TubeArchivist for years now and it’s been great. Solid update cadence, reliable, and I have no idea why this project is different from it