• 0 Posts
  • 8 Comments
Joined 7 months ago
Cake day: August 23rd, 2024


  • The issue isn’t Docker vs Podman vs k8s vs others. They all use OCI images to create your containers/pods/etc. This new limit impacts all containerization solutions that pull from Docker Hub, not just Docker. EDIT: removed LXC as it does not support OCI

    Instead, the issue is Docker Hub vs Quay vs GHCR vs others. It’s about where the OCI images are stored and pulled from. If the project maintainer hosts the OCI images on Docker Hub, then you will be impacted by this regardless of how you use the OCI images.

    Some options include:

    • For projects that do not store images on Docker Hub, continue using the images as normal
    • Become a paid Docker member to avoid this limit
    • When a project uses multiple container registries, use one that is not Docker Hub
    • For projects that have community or 3rd party maintained images on registries other than Docker Hub, use the community or 3rd party maintained images
    • For projects that are open source and/or have instructions on building OCI images, build the images locally and bypass the need for a container registry
    • For projects you control, store your images on other image registries instead of (or in addition to) Docker Hub
    • Use an image tag that is updated less frequently
    • Rotate the order of pulled images from Docker Hub so that each image has an opportunity to update
    • Pull images from Docker Hub less frequently
    • For images that are used by multiple users/machines under your supervision, create an image cache or local registry mirror that your users/machines pull from, to reduce the number of pulls from Docker Hub
    • Encourage project maintainers to store images on image registries other than Docker Hub (or at least provide additional options beyond Docker Hub)
    • Do not use OCI images at all and instead use VM or bare-metal installations
    • Use alternative software solutions that store images on registries other than Docker Hub
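    If you want to see where you stand against the limit, Docker documents a way to query your remaining anonymous pulls without actually pulling an image, using the special `ratelimitpreview/test` repository. A rough sketch (assumes `curl`; the `grep`/`cut` token parsing is a quick hack, not robust JSON handling):

    ```shell
    # Fetch an anonymous token for the rate-limit preview repo (no image is pulled).
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" \
      | grep -o '"token":"[^"]*"' | cut -d '"' -f 4)

    # A HEAD request against the manifest returns the current rate-limit headers.
    curl -s --head -H "Authorization: Bearer $TOKEN" \
      "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" \
      | grep -i 'ratelimit'
    ```

    The `ratelimit-limit` / `ratelimit-remaining` headers show the window and how many pulls you have left; authenticated users can pass credentials to the token request to see their higher allowance.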


  • I feel silly for not realizing that the SSH config would be used by Git!

    I thought that if Forgejo’s SSH service listened on a non-standard port, you would have to include the port in the command, similar to the example below (following your example). I guess I assumed Git did not directly use the client’s SSH configuration.

    git pull ssh://git@git.mysite.com:1234/user/project.git
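    For reference, the way Git picks up the port from the SSH config looks something like this (host name and key path are hypothetical):

    ```
    # ~/.ssh/config -- Git's SSH transport reads this automatically
    Host git.mysite.com
        User git
        Port 1234
        IdentityFile ~/.ssh/id_ed25519
    ```

    With that entry in place, a plain `git pull git@git.mysite.com:user/project.git` connects on port 1234 without the port ever appearing in the command.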
    

  • There are plenty of valid reasons to want to use a reverse proxy for SSH:

    • Maybe there is a Forgejo instance and Gitea instance running on the same server.
    • Maybe there is a Prod Forgejo instance and Dev Forgejo instance running on the same server.
    • Maybe both Forgejo and an SFTP server are running on the same server.
    • Maybe Forgejo is running in a cluster like Docker Swarm or Kubernetes.
    • Maybe there is a desire to have Caddy act as a bastion host, either because running a true SSH bastion host is not possible or to avoid maintaining yet another service/server in addition to Caddy.

    Regardless of the reason, your last point is valid and is the real issue here. I do not think it is possible for Caddy to reverse proxy SSH traffic - at least not without additional software (on the client, the server, or both) or some overly complicated (and likely less secure) setup. This would be possible if raw TCP traffic carried hostname information the way TLS’s SNI extension does, but unfortunately it does not.
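    For completeness, the “additional software on the client” route usually means wrapping SSH in TLS so an SNI-aware proxy can route it by hostname. A hypothetical client-side sketch (host names are placeholders, and the proxy must terminate TLS and forward the decrypted stream to the right backend - Caddy’s non-standard layer4 module is one way to do that server side):

    ```
    # ~/.ssh/config -- tunnel SSH through TLS so SNI-based routing works
    Host git.mysite.com
        ProxyCommand openssl s_client -quiet -connect proxy.mysite.com:443 -servername git.mysite.com
    ```

    This trades simplicity for routability: the client now depends on `openssl`, and a misconfigured proxy silently breaks all SSH access.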


  • people often seem to have a misinformed idea that the first item on your dns server list would be preferred and that is very much not the case

    I did not know that. TIL that I am people!

    Do you know if it’s always this way? For example, you mentioned this is how it works for DNS on a laptop, but would it behave differently if DNS is configured at the network firewall/router? I tried searching for more information confirming this, but could not find anything indicating how accurate it is.
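    As a concrete example of why “first in the list” is not a guarantee: glibc’s resolver only falls through to the next server on timeout, and options like `rotate` deliberately spread queries across all listed servers. Other resolvers (systemd-resolved, dnsmasq) have their own server-selection behavior. Addresses below are placeholders:

    ```
    # /etc/resolv.conf
    nameserver 192.168.1.2   # Pihole (hypothetical)
    nameserver 1.1.1.1       # public fallback
    # "rotate" round-robins queries across the listed servers instead of
    # always trying the first one first.
    options rotate timeout:2 attempts:2
    ```

    So any server in the list may end up answering any given query, which is exactly why mixing a filtering resolver with an unfiltered one can produce inconsistent results.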


  • Depending on the network’s setup, having Pihole fail or become unavailable could leave the network completely broken until Pihole is back up. Configuring the network to have at least one backup DNS server is therefore extremely important.

    I also recommend having redundant and/or highly available Pihole instances running on different hardware if possible. It may also be a good idea to have an additional external DNS server (eg: 1.1.1.1, 8.8.8.8, 9.9.9.9, etc.) configured as a last resort backup in the event that all the Pihole instances are unavailable (or misconfigured).
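    On a router running dnsmasq, handing out multiple DNS servers via DHCP might look like the line below (addresses are placeholders). One caveat: listing a public resolver means clients bypass Pihole’s blocking whenever they fail over to it.

    ```
    # dnsmasq.conf -- advertise two Pihole instances plus a public last resort
    dhcp-option=option:dns-server,192.168.1.2,192.168.1.3,9.9.9.9
    ```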


  • The steps below are high level, but should provide an outline of how to accomplish what you’re asking for without having to associate your IP address to any domains nor publicly exposing your reverse proxy and the services behind the reverse proxy. I assume since you’re running Proxmox that you already have all necessary hardware and would be capable of completing each of the steps. There are more thorough guides available online for most of the steps if you get stuck on any of them.

    1. Purchase a domain name from a domain name registrar
    2. Configure the domain to use a DNS provider (eg: Cloudflare, Duck DNS, GoDaddy, Hetzner, DigitalOcean, etc.) that supports DNS challenges for wildcard certificates
    3. Use NginxProxyManager, Traefik, or some other reverse proxy that supports automatic certificate renewals and wildcard certificates
    4. Configure both the DNS provider and the reverse proxy to use the DNS challenge for the wildcard certificate
    5. Set up a local DNS server (eg: PiHole, AdGuardHome, Blocky, etc.) and configure your firewall/router to use it as your DNS resolver
    6. Configure your reverse proxy to serve your services via subdomains (eg: service1.domain.com, service2.domain.com, etc.) and turn on http (port 80) to https (port 443) redirects as necessary
    7. Configure your DNS server to point your services’ subdomains to the IP address of your reverse proxy
    8. Access your services from anywhere on your network using the domain name and https where applicable
    9. (Optional) Set up a VPN (eg: OpenVPN, WireGuard, Tailscale, Netbird, etc.) within your network and connect your devices to it whenever you are away so you can still securely access your services remotely without directly exposing any of them to the internet
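    Steps 2-4 are usually the fiddly part. As a rough sketch, a Traefik static configuration for a wildcard certificate via a DNS challenge might look like the following (provider, email, and storage path are assumptions - substitute your own DNS provider and supply its API credentials via environment variables):

    ```yaml
    # traefik.yml (static config) -- hypothetical values throughout
    entryPoints:
      web:
        address: ":80"
        http:
          redirections:          # step 6: http -> https redirect
            entryPoint:
              to: websecure
              scheme: https
      websecure:
        address: ":443"

    certificatesResolvers:
      letsencrypt:
        acme:
          email: you@example.com
          storage: /letsencrypt/acme.json
          dnsChallenge:            # steps 2-4: DNS challenge for the wildcard cert
            provider: cloudflare   # any provider Traefik supports works here
    ```

    Since the DNS challenge proves ownership through your DNS provider’s API rather than an inbound HTTP request, no port ever needs to be exposed to the internet for certificate issuance.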