
  • Oh, operators are absolutely the way to go for “released” things.

    But on bigger projects with lots of different pods, it’s a lot of work to write all the CRD definitions, hook all the events, and write the code that actually deploys the pods.
    As with Helm charts, I don’t see the point for personal projects. I’m not sharing it with anyone, so I don’t need the Helm/operator abstraction.
    And something like cdk8s will generate the yaml for you to inspect, so you can easily validate that you’re “doing the right thing” before slinging it into k8s.


  • Everyone talks about Helm charts.
    I tried them and hated writing them.
    Then I found garden.io, which gives you a really nice way to consume repos (of Helm charts, manifests etc) and apply them in a sensible way to a k8s cluster.
    The only catch is that it seems very tailored to a team of developers. I kinda muddled through with it, and it made everything so much easier.
    I massively appreciate that Helm charts are used by most projects, and they make sense for something you are going to share.
    But if it’s a solo project, or you’re just consuming other people’s projects, I don’t think they really solve a problem.

    Which is why I use garden.io. It’s designed for deploying Kubernetes manifests, and I found it has just enough tooling to make things easier.
    Though, if you are used to Ansible, it might make more sense to stick with Ansible.
    Pretty sure Ansible can do it all in a way you are already familiar with.

    As for writing the manifests themselves, I rarely find I need to (unless it’s something I’ve made myself). Most software has a k8s Helm chart, so I just reference that in a garden file, set any variables I need, and all good.
    If there aren’t Helm charts or kustomize files, then it’s adapting a docker compose file into manifests, which is manual work.
    Occasionally I have to write some custom resources, config maps or secrets (CMs and secrets are easily made in garden).

    I also prefer to install operators instead of the raw service. For example, I use CloudNativePG to set up Postgres databases.
    I create a custom resource that defines the database, and CNPG automatically provisions all the storage, pods, services, config maps and secrets.
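    As a rough sketch of what that custom resource looks like (a minimal CloudNativePG Cluster; the name and sizes are placeholders):

    ```yaml
    # Minimal CloudNativePG custom resource - a sketch, not a production config.
    # CNPG watches for Cluster objects and provisions the pods, PVCs, services,
    # config maps and secrets from this single definition.
    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: example-db        # placeholder name
    spec:
      instances: 1            # one pod; bump for HA replicas
      storage:
        size: 5Gi             # CNPG creates the PVC via your storage provisioner
    ```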

    The way I use Kubernetes for my projects is:
    Apply all the infrastructure stuff (gateways, metallb, storage provisioners etc) from Helm charts (or similar).
    Then apply all my pods, services, certificates etc from hand-written manifests.
    Using garden, I can make sure things are deployed in the correct order: operators installed before a custom resource is applied, secrets/CMs created before being referenced, etc.
    If I ever have to wipe and reinstall a cluster, it takes me 30 minutes or so from a clean Talos Linux install to the project up and running, with just 3 or 4 commands.

    Any on-the-fly changes I make, I ensure I backport to the project configs, so when I wipe, reset and reinstall I still get what I expect.

    However, I recently found https://cdk8s.io/ and I’m meaning to investigate it for creating the manifests themselves.
    Write code in a typed language, and have cdk8s generate the raw yaml manifests. Seems like a dream!
    I hate writing yaml. Autocomplete is useless (the editor has no idea what format the yaml doc should take), auto-formatting is useless (mostly because yaml is whitespace-sensitive, and the editor has no idea whether something is a child or a new parent). It just feels ugly and clunky.
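    For a flavour of it, here’s a minimal cdk8s sketch in TypeScript (assuming a stock `cdk8s init typescript-app` project, where ./imports/k8s is generated by `cdk8s import k8s`; names and image are placeholders):

    ```typescript
    import { App, Chart } from 'cdk8s';
    import { Construct } from 'constructs';
    import { KubeDeployment } from './imports/k8s';

    class MyChart extends Chart {
      constructor(scope: Construct, id: string) {
        super(scope, id);

        // Typed constructs: the editor can autocomplete every field,
        // and the compiler catches "child vs new parent" mistakes.
        new KubeDeployment(this, 'deployment', {
          spec: {
            replicas: 1,
            selector: { matchLabels: { app: 'my-app' } },  // placeholder labels
            template: {
              metadata: { labels: { app: 'my-app' } },
              spec: {
                containers: [{
                  name: 'app',
                  image: 'nginx:1.27',                     // placeholder image
                  ports: [{ containerPort: 80 }],
                }],
              },
            },
          },
        });
      }
    }

    const app = new App();
    new MyChart(app, 'my-chart');
    app.synth(); // emits dist/my-chart.k8s.yaml for inspection before applying
    ```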


  • So uplink is 500/500.
    LAN speed tests at 1000/1000.
    WAN is 100/400.
    VPN is 8/8.

    I’m guessing the VPN is part of your homelab? Or do you mean a generic commercial VPN (like PIA or Proton)?

    How does the domain resolve on the LAN? Is it split horizon (so local IP on the LAN, public IP in public DNS)?
    Is the homelab on a separate subnet/VLAN from the computer you ran the speed test from? Or the same subnet?



  • Servers: one. No need to make the log a distributed system; CT itself is a distributed system.
    Uptime: the target is 99% over three months, which allows for nearly 22 hours of downtime. That’s more than three motherboard failures per month.
    CPU and memory: whatever, as long as it’s ECC memory. Four cores and 2 GB will do.
    Bandwidth: 2–3 Gbps outbound.
    Storage: 3–5 TB of usable redundant filesystem space on SSD, or 3–5 TB of S3-compatible object storage plus 200 GB of cache on SSD.
    People: at least two. The Google policy requires two contacts, and generally, who wants to carry a pager alone?

    Seems beyond your typical homelab self-hoster, except in countries that have 5 Gbps symmetric home broadband.
    If anyone can sneak 2–3 Gbps outbound past their employer, I imagine the rest is trivial.
    Although… “at least two [people]” isn’t the typical self-hosting setup.

    Edit:
    Tried to fix the copy/paste.

    Also will add:

    https://crt.sh/ has a list of all certificates issued.
    If you are using LE for every subdomain of your homelab (including internal ones), maybe think about a wildcard cert instead?
    It’s one of those “obscurity isn’t security” things, but why advertise your endpoints? It also increases privacy (i.e. not advertising porn(dot)example(dot)com).
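    To see what’s already out there for a domain (a hedged example using crt.sh’s JSON output; example.com is a placeholder):

    ```sh
    # List every logged certificate name for a domain and its subdomains.
    # %25 is a URL-encoded % wildcard in the query.
    curl -s 'https://crt.sh/?q=%25.example.com&output=json' | jq -r '.[].name_value' | sort -u
    ```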


  • my router and my reverse proxy (traefik) is able to receive the necessary SSL/TLS certificates however

    From something like Let’s Encrypt?
    As an HTTP-01 challenge? Not a DNS-01 challenge?
    An HTTP-01 challenge means port 80 is accessible from the public internet (that’s how LE confirms it can reach your server via the public DNS records: proof of control of the server).
    DNS-01 is proof of DNS record ownership, and doesn’t prove public internet access.

    Also, what are you self hosting?
    Does it really need to be publicly accessible? Or just accessible by you and people you trust?


  • You need to control a domain so LE can verify you control it; LE will then issue you a certificate saying you are the controller of that domain.

    For a wildcard LE cert, you need to use the DNS-01 challenge method.
    Essentially, the ACME client (certbot or whatever) talks to LE and says “I want a DNS challenge for *.example.com”.
    LE replies “ok, your order is number 69, and your challenge code is DEADBEEF”.
    The ACME client then interacts with your public nameserver (or you do it manually) and adds the challenge code as a TXT record at _acme-challenge.example.com. (I’ve been caught out by the fact that LE uses Google DNS for resolution, and Google will only follow 1 level of NS records from the root authoritative nameserver.)
    All the while, LE is checking for that record. When it finds the record, it mints a wildcard certificate.
    The ACME client periodically checks in with LE asking about order 69. Once LE has minted the cert, it returns it to the client.
    And now you have a wildcard cert.
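    If you want to walk that flow by hand first, certbot’s manual mode shows you each step (a sketch; example.com is a placeholder):

    ```sh
    # Request a wildcard cert using a manual DNS-01 challenge.
    # certbot prints the TXT value to publish, then waits for you to continue.
    certbot certonly --manual --preferred-challenges dns -d '*.example.com'

    # Before telling certbot to continue, check the record has propagated:
    dig +short TXT _acme-challenge.example.com
    ```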

    So, how do you use it on a local domain?
    Use a split-horizon DNS setup.
    Ensure your DHCP is handing out a local DNS server for resolving.
    Configure that local DNS server to use 8.8.8.8 or whatever as its upstream.
    Then load static/override records into the local DNS.
    Pi-hole can do this. OPNsense/pfSense can do this. Unifi can do some of this.

    How does this work?
    Any device on your network that wants to know the IP of example.example.com will ask its configured DNS - the local DNS that you have configured.
    The local DNS checks its static assignments and goes “yeh, example.example.com is 10.10.3.3”.
    If you ask your local DNS for google.com, it won’t have a static assignment for it, so it asks its upstream DNS and returns that result.
    And it means you aren’t putting private IP space in public NS records.
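    In dnsmasq terms (which is what Pi-hole uses under the hood), the whole thing is a couple of lines - a sketch with placeholder names, IPs and file name:

    ```
    # /etc/dnsmasq.d/99-lab.conf (hypothetical file name)
    server=8.8.8.8                          # upstream for everything else
    address=/example.example.com/10.10.3.3  # static override for one host
    # or override the whole lab domain with one wildcard rule:
    # address=/example.com/10.10.3.3
    ```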

    Then you can load your wildcard cert onto the service at 10.10.3.3, and you will have a trusted HTTPS connection.

    Here is a list of LE clients that will automate LE certs.
    https://letsencrypt.org/docs/client-options/

    Have a read through and pick your desired flavour.
    Dig into the docs of that flavour, and start playing around.

    If it’s all HTTPS, consider using something like Nginx Proxy Manager (https://nginxproxymanager.com/) as a reverse proxy in front of your services and for managing the LE certs.
    It’s super easy to use, has a decent GUI, and then there’s only 1 IP to point all your DNS records at.


  • DNS and domains are just human-friendly names for IP addresses.

    You only have 1 public IP address.
    So, to access different services, you either need to use different ports…
    Or you run a single service on one port in front of the other services - one that can understand incoming connections and forward them to the actual services - known as a reverse proxy. In the case of http/https, there are plenty of reverse proxies that can direct requests based on all sorts of parameters, subdomains being one of them.

    If you are just starting out, I’d recommend a docker compose stack and Nginx Proxy Manager.
    Learning containers & docker makes everything easier.
    NPM is a very easy-to-use reverse proxy with a nice GUI, so you don’t have to configure certbot/ACME yourself or learn Nginx’s specific config language.
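    A minimal compose sketch for that (based on the quick-start on nginxproxymanager.com; adjust ports/volumes to taste):

    ```yaml
    # docker-compose.yml - NPM listening on 80/443, admin GUI on 81.
    services:
      npm:
        image: 'jc21/nginx-proxy-manager:latest'
        restart: unless-stopped
        ports:
          - '80:80'    # HTTP, also used for ACME HTTP-01 challenges
          - '443:443'  # HTTPS
          - '81:81'    # admin web UI
        volumes:
          - ./data:/data
          - ./letsencrypt:/etc/letsencrypt
    ```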

    If you are unsure about domains and all that, you can try it out for free.
    Your computer has a hosts file (/etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows). This lets you tell your computer “for the domain example.com use the IP 10.0.0.200” or whatever you want. You need a hosts file entry for each subdomain.
    What this means is that you can spin up a docker compose stack on your computer, point a bunch of subdomains to 127.0.0.1, use self-signed certs, and play around with Nginx Proxy Manager and docker.
    No money spent, no records published, no traffic leaving your computer.
    Zero risk.
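    For example (placeholder subdomains; one line per subdomain, since hosts files don’t do wildcards):

    ```
    # /etc/hosts
    127.0.0.1   npm.example.com
    127.0.0.1   app.example.com
    127.0.0.1   media.example.com
    ```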

    There are loads of tutorials out there on NPM and docker compose stacks. Probably some close to your specific requirements.




  • accessed from the internet

    Accessed only by you and close family/friends who you are also hosting services for?
    Or accessed by anyone?

    “Accessed by anyone” carries more risk.

    “Accessed by users you host for”, the risks can be eliminated (well, other than risks from those users) by using a VPN. As in, only the people authorised to be on the VPN can access the services.
    WireGuard is the go-to these days.
    Tailscale is much easier, and free for 3 users and 100 nodes.
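    A minimal sketch of the WireGuard side (placeholder keys, and a made-up 10.8.0.0/24 tunnel subnet):

    ```ini
    # /etc/wireguard/wg0.conf on the homelab box
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>   # generate with: wg genkey

    [Peer]
    # a laptop/phone that's allowed in
    PublicKey = <client-public-key>
    AllowedIPs = 10.8.0.2/32
    ```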

    If it absolutely has to be “accessed by anyone”, I would look into a “reverse proxy over VPN/tunnel” or just a straight tunnel-style approach like chisel (or crowbar, or corkscrew), rathole, frp, or Cloudflare Tunnels.

    Basically, don’t point a domain at your home public IP, and don’t forward ports on your home router/firewall.