I am working on setting up a home server but I want it to be reproducible if I need to make large changes, switch out hardware, or restore from a failure. What do you use to handle this?

  • irmadlad@lemmy.world · 22 days ago

    I use snapshots: once a month an image is made of the entire drive, and Duplicati backs everything up to the cloud. Whatever you choose though, remember 3-2-1 (three copies, on two different media, one off-site), and backups are useless unless they’re tested on a regular basis. The test portion always gives me anxiety.

    • MonkeMischief@lemmy.today · 22 days ago

      I’d really like to know if there’s any practical guide on testing backups without requiring, like, a crapton of backup-testing-only drives or something to keep from overwriting your current data.

      Like I totally understand it in principle just not how it’s done. Especially on humble “I just wanna back up my stuff not replicate enterprise infrastructure” setups.

  • wersooth@lemmy.world · 18 days ago

    Currently I’m migrating from compose.y(a)ml to Terraform. I’m running Proxmox -> 2x VMs -> Docker Swarm. Soon I’ll try to engineer a solution to quickly scale any service up and down from the same Terraform codebase using Rundeck. My configs live as Terraform templates; each one gets deployed as a swarm config (or secret) and then mapped into the container the same way.
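
    Roughly, the template -> swarm config -> container mapping can look like this with the kreuzwerker/docker Terraform provider (file names and values are made up for illustration, not my exact setup):

    # Render a Terraform template and publish it as a Docker Swarm config.
    resource "docker_config" "app" {
      # swarm configs are immutable, so in practice the name usually embeds a version or hash
      name = "app-config-v1"
      data = base64encode(templatefile("${path.module}/templates/app.conf.tftpl", {
        listen_port = 8080
      }))
    }

    # Attach the config to a swarm service; swarm mounts it at file_name inside the container.
    resource "docker_service" "app" {
      name = "app"

      task_spec {
        container_spec {
          image = "nginx:1.27"

          configs {
            config_id   = docker_config.app.id
            config_name = docker_config.app.name
            file_name   = "/etc/app/app.conf"
          }
        }
      }
    }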

  • _cryptagion [he/him]@anarchist.nexus · 22 days ago

    Well, I use Unraid, so I just back up my whole config folder along with the OS itself in case I need to flash it to a new USB drive. In other words, I just clone the whole thing. It means I can be up and running in a few minutes even if everything got corrupted.

    A data drive loss is pretty simple too: the array just emulates the lost data until I can get a new HDD in. That takes a little longer to fix, though.

    • turmacar@lemmy.world · 21 days ago (edited)

      I think it gets some flak, but I’ve been super happy with Unraid.

      I migrated hardware by moving the USB drive over to the new system, and it didn’t blink at the fact that everything but the HDDs was different: it just booted up and started the array and the Docker containers. The JBOD functionality is great. Drive loss is just an excuse to add a bigger drive.

  • relaymoth@sh.itjust.works · 22 days ago

    I went the nuclear option and am using Talos with Flux to manage my homelab.

    My source of truth is the git repo with all my cluster and application configs. With this setup, I can tear everything down and within 30 min have a working cluster with everything installed automatically.
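
    As a sketch of what that looks like (repo URL and paths below are placeholders, not my actual repo), Flux just watches the git repo and reconciles everything under a cluster path:

    apiVersion: source.toolkit.fluxcd.io/v1
    kind: GitRepository
    metadata:
      name: homelab
      namespace: flux-system
    spec:
      interval: 5m
      url: https://github.com/example/homelab   # placeholder repo URL
      ref:
        branch: main
    ---
    apiVersion: kustomize.toolkit.fluxcd.io/v1
    kind: Kustomization
    metadata:
      name: cluster
      namespace: flux-system
    spec:
      interval: 10m
      sourceRef:
        kind: GitRepository
        name: homelab
      path: ./clusters/home
      prune: true

    Rebuilding is then just re-bootstrapping Flux against the same repo and letting it reconcile.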

      • moonpiedumplings@programming.dev · 21 days ago

        I have a similar setup, and even though I’m hosting git (Forgejo), I use plain ssh as the git server for the source of truth that k8s reads.

        This prevents an ouroboros dependency where Flux uses the git repo from Forgejo, which is itself deployed by Flux…
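
        Concretely (host and path here are hypothetical), the Flux source just points at a bare repo served over plain ssh instead of at Forgejo:

        apiVersion: source.toolkit.fluxcd.io/v1
        kind: GitRepository
        metadata:
          name: homelab-source
          namespace: flux-system
        spec:
          interval: 1m
          url: ssh://git@some-host/srv/git/homelab.git   # bare repo reachable over plain ssh
          ref:
            branch: main
          secretRef:
            name: flux-ssh-key   # ssh identity + known_hosts for the git host

        That way nothing in the sync path depends on Forgejo being up.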

  • thirdBreakfast@lemmy.world · 21 days ago

    Proxmox on the metal, then every service runs as a docker container inside an LXC or VM. Proxmox does nice snapshots (to my NAS), making it a breeze to move them from machine to machine or to blow away the Proxmox install and reimport them. All the docker compose files are in git, and the things I apply to every LXC/VM (my monitoring endpoint, apt cache setup, etc.) are applied with ansible playbooks, also in git. All the LXCs are cloned from a golden image that has my keys, tailscale setup, etc.
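
    For the “applied to every LXC/VM” bits, a minimal playbook sketch (host group, cache host and exporter choice are placeholders, not my exact roles) looks like:

    - name: Baseline every LXC/VM
      hosts: homelab
      become: true
      tasks:
        - name: Point apt at the local cache
          ansible.builtin.copy:
            dest: /etc/apt/apt.conf.d/01proxy
            content: 'Acquire::http::Proxy "http://apt-cache.lan:3142";'   # hypothetical apt-cacher-ng host
        - name: Install the monitoring endpoint
          ansible.builtin.apt:
            name: prometheus-node-exporter
            state: present
            update_cache: true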

    • eli@lemmy.world · 21 days ago

      This is pretty much my setup as well: Proxmox on bare metal, then everything runs in Ubuntu LXC containers, each with Docker installed running whatever stack.

      I just installed Portainer and got the standalone agent running on each LXC container, which has helped massively with managing each Docker setup.
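
      The agent side is just the usual portainer/agent compose dropped into each LXC, something like this (adjust to taste):

      services:
        agent:
          image: portainer/agent:latest
          restart: unless-stopped
          ports:
            - "9001:9001"
          volumes:
            - /var/run/docker.sock:/var/run/docker.sock
            - /var/lib/docker/volumes:/var/lib/docker/volumes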

      Of course you can use whatever base image you want for the LXC containers; I just prefer Ubuntu for my homelab.

      I do need to set up a golden image though to make stand-ups easier… one thing at a time though!

        • eli@lemmy.world · 19 days ago

          Yes, essentially I have:

          Proxmox Baremetal
              ↪LXC1
                  ↪Docker Container1
              ↪LXC2
                  ↪Docker Container2
              ↪LXC3
                  ↪Docker Container 3
          

          Or using real services:

          Proxmox Baremetal
              ↪Ubuntu LXC1 192.168.1.11
                  ↪Docker Stack ("Profana")
                      ↪cadvisor
                        grafana
                        node_exporter
                        prometheus
              ↪Ubuntu LXC2 192.168.1.12
                  ↪Docker Stack ("paperless-ngx")
                      ↪paperless-ngx-webserver-1
                        apache/tika
                        gotenberg
                        postgresdb
                        redis
              ↪Ubuntu LXC3 192.168.1.13
                  ↪Docker Stack ("teamspeak")
                      ↪teamspeak
                        mariadb
          

          I do have an AMP game server; AMP is installed directly in the Ubuntu container, but it uses Docker to create the game servers.

          Doing it this way (individual Ubuntu containers with Docker installed on each) lets me stop and start individual services, take backups via Proxmox, restore from backups, and manage things a bit more directly with IP assignment.

          I also have pfSense installed as a full VM on my Proxmox box, and pfSense handles all of my firewall rules and SSL cert management/renewals. None of my Ubuntu/Docker containers need to configure SSL; pfSense just does SSL offloading and injects my certs as requests come in.

  • atzanteol@sh.itjust.works · 22 days ago

    Terraform and Ansible. Script the service configuration and use source control. Containerize services where possible to make them system-agnostic.

      • atzanteol@sh.itjust.works · 20 days ago

        They’re good at different things.

        Terraform is better at “here is a configuration file - make my infrastructure look like it” and Ansible is better at “do these things on these servers”.

        In my case I use Terraform to create Proxmox VMs, and then Ansible provisions and configures the software on those VMs.
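
        For the Terraform half, a rough sketch with the community telmate/proxmox provider (node and template names are placeholders) is:

        resource "proxmox_vm_qemu" "app" {
          name        = "app-vm"
          target_node = "pve"              # Proxmox node (placeholder)
          clone       = "debian12-cloud"   # VM template to clone (placeholder)
          cores       = 2
          memory      = 4096
        }

        Ansible then picks the new VM up from inventory and does the actual provisioning.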

    • xyx@sh.itjust.works · 22 days ago

      Out of curiosity: are you running NixOps with nix-secrets, or how did you cover orchestration & credentials?

      • adf@lemmy.world · 21 days ago

        I use flakes, and all hosts are configured from a single flake where each host has its own configuration. I have some custom modules and even custom packages in the same flake. I also use home-manager. I have 4 hosts managed in total: home server, laptop, gaming PC, and a cloud server. All hosts were provisioned using nixos-anywhere + disko, except for the first one, which was installed manually. For secrets I use sops-nix; the encrypted secrets are stored in the same flake/repo.
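
        Stripped down, the single-flake layout looks roughly like this (host and file names are illustrative):

        {
          inputs = {
            nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
            disko.url = "github:nix-community/disko";
            sops-nix.url = "github:Mic92/sops-nix";
          };

          outputs = { self, nixpkgs, disko, sops-nix, ... }: {
            nixosConfigurations.homeserver = nixpkgs.lib.nixosSystem {
              system = "x86_64-linux";
              modules = [
                ./hosts/homeserver/configuration.nix
                disko.nixosModules.disko
                sops-nix.nixosModules.sops
              ];
            };
            # laptop, gaming PC and cloud server follow the same pattern
          };
        }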

  • Seefoo@lemmy.world · 22 days ago (edited)

    I use git and commit configs/setup/scripts/etc. to it. That way I at least have a road map for getting everything back. Testing this can be difficult, but it really depends on what you care about:

    • Testing my kopia backups of important data? That I manually test every once in a while.
    • Testing whether my ZFS setup script is 100% identical to my setup? That’s not as important; as long as I have a general idea, I can figure out the gaps and improve the script for the next time around.

    Obviously you can spend a lot more time making sure scripts and whatnot stay consistent, but it depends on what you care about! For a lot of my service config, git has always worked well for me, and I can go back to older configs if needed. You can get super specific here and pin versions in git, then have something update them (e.g. WUD).

  • Nibodhika@lemmy.world · 20 days ago

    Ansible.

    I use docker for most of the services and Ansible to configure them. In the future I’ll migrate the server to NixOS and might slowly move my Ansible setup over, but for the time being Ansible works with relative ease.
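
    As a sketch of the pattern (service name and paths are made up, not my actual services), it’s basically one Ansible task per container via community.docker:

    - name: Deploy a containerised service
      hosts: homeserver
      become: true
      tasks:
        - name: Run the app container
          community.docker.docker_container:
            name: freshrss
            image: freshrss/freshrss:latest
            restart_policy: unless-stopped
            ports:
              - "8080:80"
            volumes:
              - /srv/freshrss/data:/var/www/FreshRSS/data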

  • 🇵🇸antifa_ceo@lemmy.ml · 22 days ago

    I’ve got a bunch of docker compose files with the envs documented, so it’s easy to spin things up again or roll back changes. It works well enough as long as I’m good about keeping everything up to date and not making changes without noting them down for later.
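
    The pattern is basically one folder per service with a compose file plus a documented .env next to it (everything below is a made-up example, not one of my services):

    services:
      app:
        image: ghcr.io/example/app:1.2.3   # pinning versions means a rollback is just reverting the file
        restart: unless-stopped
        env_file: .env                     # the documented env lives next to the compose file
        ports:
          - "${APP_PORT:-8080}:8080"
        volumes:
          - ./data:/app/data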

  • ShellMonkey@piefed.socdojo.com · 22 days ago

    Snapshots, largely; most everything is VMs and docker containers. I also have one VM set aside for dev work, to test configs before updating the prod boxes.

  • yah@lemmy.powerforme.fun · 22 days ago

    With NixOS, you get a reproducible environment. When you need to change your hardware, you simply back up your data, reapply your NixOS configuration, and you can reproduce your previous environment.

    I use it to manage all my services.
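
    The idea in a nutshell: the whole system is one declaration, so a new machine only needs that file plus a data restore. A tiny sketch (the service here is just an example, not my actual config):

    { config, pkgs, ... }:
    {
      networking.hostName = "homeserver";
      services.openssh.enable = true;
      services.jellyfin.enable = true;   # example service; swap in whatever you host
      environment.systemPackages = with pkgs; [ git ];
    }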