Back in the day it was nice: apt-get update && apt-get upgrade and you were done.

But today every tool/service has its own way of being installed and updated:

  • docker:latest
  • docker:v1.2.3
  • custom script
  • git checkout v1.2.3
  • same but with custom migration commands afterwards
  • custom commands change from release to release
  • expects the update to be done as a specific user
  • update nginx config
  • update own default config and service has dependencies on the config changes
  • expects new versions of other tools
  • etc.

I self-host around 20 services like PieFed, Mastodon, PeerTube, Paperless-ngx, Immich, open-webui, Grafana, etc. And all of them have dependencies which need to be updated too.

And nowadays you can’t really keep running an older version, especially when it’s internet-facing.

So anyway, what are your strategies for staying sane while keeping all your self-hosted services up to date?

  • uenticx@lemmy.world · 5 hours ago

    Snapshots and for i in $hosts; do ssh -tt "$i" "sudo apt update && sudo apt upgrade -y"; done
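    A slightly more defensive version of that loop might look like the sketch below. The host list is illustrative, and SSH defaults to echo so it dry-runs; set SSH=ssh to actually run it:

```shell
# Expanded sketch of the one-liner above. Hypothetical host list;
# SSH defaults to echo, so this only prints the commands it would run.
hosts=${HOSTS:-"web1 db1"}
SSH=${SSH:-echo}
failed=()
for h in $hosts; do
  # -tt forces a tty so sudo can prompt if it needs to
  if ! "$SSH" -tt "$h" "sudo apt-get update && sudo apt-get -y upgrade"; then
    failed+=("$h")
  fi
done
echo "hosts with failed upgrades: ${failed[*]:-none}"
```

    Keeping the failures in a list instead of aborting means one broken host doesn’t block updates on the rest.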

    For docker/k8s: argocd, helm, etc.

  • ken@discuss.tchncs.de · 24 hours ago

    A dedicated Forgejo instance at f.example.com.

    For a small set of trusted “base” images (e.g. docker.io/alpine and docker.io/debian): a Forgejo Action on a separate small runner, scheduled via cron to sync images to f.example.com/dockerio/ using skopeo copy.
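    The sync step could be sketched roughly like this (the image list is illustrative, and RUN defaults to echo so no skopeo is actually invoked until you clear it):

```shell
# Dry-run sketch of the scheduled image sync. Set RUN="" on a runner
# that has skopeo installed and is logged in to the internal registry.
sync_base_images() {
  local run=${RUN:-echo} img
  for img in alpine:3.20 debian:bookworm-slim; do
    $run skopeo copy \
      "docker://docker.io/library/${img}" \
      "docker://f.example.com/dockerio/${img}"
  done
}
sync_base_images
```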

    Then all other runners have their docker/podman configuration changed to use that internal Forgejo container registry instead of docker.io.
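    For podman, that redirect can be done with a registries.conf drop-in; a minimal sketch (the file name is illustrative, the internal hostname follows the comment above):

```toml
# /etc/containers/registries.conf.d/docker-io-mirror.conf
# Remap docker.io pulls to the internal Forgejo registry.
[[registry]]
prefix = "docker.io"
location = "f.example.com/dockerio"
```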

    Other images are built from source in Forgejo Actions CI. Not everything needs to (or even should) be fully automated right off. You can keep some workflows manual while starting out, then increase automation as you tighten up your setup and get more confident in it. Follow the usual security best practices and keep permissions scoped, giving them out only as needed.

    Git repos are mirrored as Forgejo repo mirrors, forked if relevant, then built with Forgejo Actions and published to f.example.com/whatever/. Rarely, but sometimes, it is worth spending time on reusing existing GitHub Workflows from upstreams. More often I find it easier to just reuse my own workflows.
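    A minimal build-and-publish workflow in this setup might look like the following sketch (repo layout, runner label, and image name are all illustrative, not the commenter’s actual workflow):

```yaml
# .forgejo/workflows/build.yml — hypothetical minimal example
on:
  push:
    tags: ["v*"]
jobs:
  build:
    runs-on: docker
    steps:
      - uses: actions/checkout@v4
      - name: Build and publish to the internal registry
        run: |
          podman build -t "f.example.com/whatever/app:${{ github.ref_name }}" .
          podman push "f.example.com/whatever/app:${{ github.ref_name }}"
```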

    This way, runners can be kept fully offline, with builds only accessing internal resources:

    • apt/apk repo mirror or proxy
    • synced base container images
    • synced git sources

    Same idea for npm or pypi packages etc.

    Set up renovate1 and iterate on its configuration to reduce insanity. Look in the Forgejo and Codeberg infra repos for examples of how to automate rebasing of forked repos onto mirrors.
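    A starting point for that iteration could be something like this (a sketch; the automerge rule is just one example of cutting down the noise):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["patch", "pin", "digest"],
      "automerge": true
    }
  ]
}
```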

    I would previously achieve the same thing by wiring together more targeted services, and that’s still viable, but Forgejo makes it easy if you want it all in one box. Just add TLS.

    1: Or does anyone have anything better that’s straightforward to integrate? I’m not a huge fan of all the npm modules it pulls in or its GitHub-centric perspective. Giving the same treatment to renovate itself here was a little more effort and digging than I think should really be necessary.

  • ThunderComplex@lemmy.today · 2 days ago

    Since all my services are dockerized I just pull new images sporadically. But I think I should invest some time in finding automatic update reminders, especially since I currently hear about critical security updates from some random person on Mastodon.

  • totoro@slrpnk.net · 2 days ago

    Wow, that sounds like a nightmare. Here’s my workflow:

    nix flake update
    nixos-rebuild switch
    

    That gives me an atomic, rollbackable update of every service running on the machine.

  • mlfh@lm.mlfh.org · 3 days ago

    Everything I run, I deploy and manage with ansible.

    When I’m building out the role/playbook for a new service, I make sure to build in any special upgrade tasks it might have and tag them. When it’s time to run infrastructure-wide updates, I can run my single upgrade playbook and pull in the upgrade tasks for everything everywhere - new packages, container images, git releases, and all the service restart steps to load them.

    It’s more work at the beginning to set the role/playbook up properly, but it makes maintaining everything so much nicer (which I think is vital to keep it all fun and manageable).
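    The tagged-upgrade idea might look roughly like this sketch (module choice, names, and the migration step are illustrative, not the commenter’s actual playbook):

```yaml
# upgrade.yml — run with: ansible-playbook upgrade.yml --tags upgrade
- name: Infrastructure-wide upgrades
  hosts: all
  tasks:
    - name: Pull the new container image
      community.docker.docker_image:
        name: ghcr.io/example/app
        tag: "1.2.3"
        source: pull
      tags: [upgrade]

    - name: Run the service's special migration step
      ansible.builtin.command: /opt/app/migrate.sh
      tags: [upgrade]

    - name: Restart the service to load the new version
      ansible.builtin.systemd:
        name: app
        state: restarted
      tags: [upgrade]
```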

    • Jeena@piefed.jeena.net (OP) · 3 days ago

      Yeah, for some reason I didn’t think of ansible even though I use it at work regularly. Thanks for pointing it out!

      • SayCyberOnceMore@feddit.uk · 3 days ago

        Just a word of caution…

        I try to upgrade one (of a similar group) manually first to check it’s not foobarred after the update, then crack on with the rest. Testing a restore is one thing, but restoring the whole system…?

    • BlackEco@lemmy.blackeco.com · 3 days ago

      I guess auto merge isn’t enabled, since there’s no way to check beforehand whether an update breaks your deployment, am I right?

        • BlackEco@lemmy.blackeco.com · 3 days ago

          Yes, but usually when you use automerge you should have CI set up to make sure new versions don’t break your software or deployment. How are you supposed to do that in a self-hosting environment?

          • tofu@lemmy.nocturnal.garden · 2 days ago

            Ideally you have at least two systems: test updates on the dev system and only then allow them in prod. So no auto merge in prod in this case, or somehow have it check whether dev worked.

            Seeing which services are usually fine to update without intervening, and tuning your renovate config accordingly, should be sufficient for a homelab imho.

            Given that most people run :latest and just YOLO the updates with Watchtower, or don’t automate at all, some granular control with renovate is already a big improvement.
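            That kind of granular control maps to per-package rules in renovate; a hypothetical example (the package names are illustrative):

```json
{
  "packageRules": [
    {
      "description": "Services that have been fine to update unattended",
      "matchPackageNames": ["jellyfin/jellyfin"],
      "automerge": true
    },
    {
      "description": "Internet-facing services wait for manual review",
      "matchPackageNames": ["ghcr.io/immich-app/immich-server"],
      "automerge": false
    }
  ]
}
```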

    • tuhriel@discuss.tchncs.de · 16 hours ago

      That’s the theory… I usually have to first replace some packages that have been removed/renamed, then home-manager acts up because some things are now named differently. And every time it shows me just the one error that failed…

  • vegetaaaaaaa@lemmy.world · 2 days ago
    • use APT repositories when possible -> then unattended-upgrades
    • for OCI images that do not provide tagged releases (looking at you, searxng…), podman auto-update
    • for everything else: subscribe to the releases RSS feed, read the release notes when they come out, check for breaking changes and possibly interesting stuff, update the version in the ansible playbook, deploy the ansible playbook
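    For the unattended-upgrades bullet, the stock Debian/Ubuntu knobs are enough to switch it on; a sketch of the standard drop-in:

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```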
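    The podman auto-update bullet can be wired up with a Quadlet unit; a minimal sketch (searxng is the image mentioned above, everything else is illustrative):

```
# ~/.config/containers/systemd/searxng.container
[Container]
Image=docker.io/searxng/searxng:latest
AutoUpdate=registry

[Install]
WantedBy=default.target
```

    With the unit in place, enabling podman-auto-update.timer makes podman check the registry daily and restart the container when the image changes.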
    • Jeena@piefed.jeena.net (OP) · 3 days ago

      Because you point to :latest and everything is dockerized and on one machine? How does it know when it’s time to upgrade?

      • Overspark@piefed.social · 3 days ago

        Yeah, only for :latest containers, that’s true. It automatically runs a daily service to check whether newer images are available. You can turn it off per container if you don’t want it.

        One of the nice things about it is that I have containers running under several different users (for security reasons), so it saves me a lot of effort not having to switch between all these users all the time.

          • prenatal_confusion@feddit.org · 2 days ago

            Depends on what you want to do. For production with sensitive data, yes it is. For my ytdl and jellyfin? Perfectly fine.

          • Overspark@piefed.social · 2 days ago

            Depends. There are a few things I update by hand, but as long as you have proper backups it’s generally safer to run the latest versions of things automatically if you don’t mind the possibility of breakage (which is very rare in my experience). This is in the context of self-hosting of course, not a business environment.

  • Fedegenerate@fedinsfw.app · 2 days ago

    Fine, I’ll be the low bar.

    Proxmox: I just use the GUI to update.

    I use community-scripts almost exclusively. The community-scripts cron LXC updater does the heavy lifting, and pct enter [lxc] followed by update does a bunch of work too.

    For Docker, I use a couple of LXCs with Dockge on them; the “update” button takes me most of the rest of the way.

    Finally, I have a couple of remote machines [diet-pi]. I haven’t figured out updating over Tailscale yet, so I just go round semi-frequently for the apt update && apt upgrade -y.

    VMs get the apt update && apt upgrade -y too. I keep a bare-bones Mint VM as a virtual laptop, as I don’t have one. I’ll do what I need to do, and if I had to install software I’ll just nuke the VM and start again from the bare-bones template.
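    The per-LXC apt ritual can also be looped with pct on the Proxmox host itself. A hypothetical sketch, assuming Debian-based containers; RUN defaults to echo so it only prints the commands until you clear it:

```shell
# Upgrade every running LXC in place. Set RUN="" on a real Proxmox host.
update_lxcs() {
  local run=${RUN:-echo} id
  for id in $(pct list | awk 'NR > 1 && $2 == "running" { print $1 }'); do
    $run pct exec "$id" -- bash -c "apt-get update && apt-get -y upgrade"
  done
}
# update_lxcs   # uncomment to run
```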

  • conrad82@lemmy.world · 2 days ago

    I do it manually: update the container version, then docker pull and run.

    I have reduced the number of containers to the ones I actually use, so it is manageable.

    I use v2 instead of v2.1.0 docker container tags if the provider doesn’t make too many bleeding-edge changes between updates.
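    In compose terms (the image name is illustrative), that tag policy is just:

```yaml
services:
  app:
    # v2 floats within the 2.x line; pin v2.1.0 instead for exact reproducibility
    image: ghcr.io/example/app:v2
```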

  • FlowerFan@piefed.blahaj.zone · 2 days ago

    My Arcane docker server checks for updates and notifies me when they’re available.

    For security-relevant stuff I just get notifications of new GitHub releases.