  • Reading the takedown, it sounds like just having the decryption code itself counts as “circumvention”. I feel like emulators would be a lot safer if game decryption were a separate codebase. Then the emulator would be free to develop while Nintendo plays whack-a-mole with tiny, easily remade decryption repos that just take your key and the file and decrypt it.

    Hell, Ryujinx could even detect that you have the decryptor installed and automatically call it when needed, or it could be bundled during packaging but kept separate in development, like how Blu-ray decryption is distributed as a separate library that can be used by e.g. VLC.

  • One related thing to watch out for is the state table size. One of my old cheap routers back in the day showed how full the table was; it hit 100% a lot and seemed to grind the network to a halt whenever it did (I was in a house of 5 young people with lots of devices and multiple people torrenting behind a cheapo Netgear running DD-WRT). That’s what led me to switch to high-end or x86-based routers. Being able to see the state table stats really helps you judge how likely it is to be a problem; the table is so big running OPNsense on an x86 box that I don’t think it ever goes above 1% now.
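
    If you want to check yours: iirc OPNsense shows it under Firewall: Diagnostics, and on any pf-based box `pfctl -si` prints filter info including the current state count (`pfctl -sm` shows the configured limits).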

    Edit: now that I think about it, if your VPN is working I wouldn’t expect any states related to peer connections to show up, since your router won’t be NATing them. I guess I was just burned back in the day because it was a huge problem then.

  • Sounds likely. I haven’t used port forwarding with my VPN since Mullvad stopped supporting it, so when I recently shared my own torrent I paid for 1 month of a seedbox just to make sure it seeded well. The seedbox uploaded ~50GB while my local setup, on a VPN without port forwarding, only uploaded 1.8GB (and it hardly showed any peers, as if nobody was trying to download). So it seems peers had a much easier time connecting to the seedbox.

    I have since set up port forwarding in gluetun for my local torrent client. I just wish there was more support for it: gluetun only has built-in support for port forwarding (i.e. automatically requesting a forwarded port) with 2 providers, and even then you still have to write your own script to set the port in the torrent client whenever it’s assigned or changed. It’s possible that some providers do it more like Mullvad, where you get assigned a port via the website that’s tied to your VPN credentials, so you just plug the assigned port into the torrent client settings once and forget about it (that’s how it worked with Mullvad), but I haven’t checked other providers.
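
    For anyone setting this up, here’s a minimal sketch of my gluetun arrangement (the provider choice, env var names, and forwarded-port file path are from gluetun’s docs as I remember them, so double check against the current docs):

    services:
      gluetun:
        image: qmcgaw/gluetun
        cap_add:
          - NET_ADMIN
        environment:
          # credentials and server selection omitted
          - VPN_SERVICE_PROVIDER=protonvpn # one of the providers with port forwarding support
          - VPN_PORT_FORWARDING=on
        # gluetun writes the assigned port to /tmp/gluetun/forwarded_port inside
        # the container; my script reads that and updates the torrent client.
        # Publish the torrent client webui port here if you need it, since the
        # client shares this container's network stack.

      qbittorrent:
        image: linuxserver/qbittorrent
        network_mode: "service:gluetun" # torrent traffic only ever sees the VPN
        depends_on:
          - gluetun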

  • Partially yes. The tricky thing is that when using network_mode: "service:tailscale" (presumably on the caddy container, since that’s what needs to receive traffic from the tailscale network), you won’t be able to attach the caddy container to any networks itself, since it’s using the tailscale container’s network stack. This means that in order for caddy to reach your containers, you need to add the tailscale container itself to the relevant networks; any containers attached to its stack will be connected as well.

    (Not sure if I misread the first time or if you edited, but the way you say it is right: add the tailscale container to the proxy network so that caddy is also added and can reach the containers.)
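
    Roughly this shape for your case (a sketch only: I run wireguard+traefik rather than tailscale+caddy, so the tailscale container’s image and options are from memory, and the auth key / tun device / volume setup is omitted):

    services:
      tailscale:
        image: tailscale/tailscale
        cap_add:
          - NET_ADMIN
        networks:
          - proxy # the network your app containers are attached to

      caddy:
        image: caddy
        network_mode: "service:tailscale" # caddy shares tailscale's network stack
        depends_on:
          - tailscale

    networks:
      proxy:
        external: true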

    Here’s the super condensed version of what matters for connecting traefik/caddy to a VPN like wireguard/tailscale.

    • I left out all the WG config since presumably you know how to configure tailscale
    • Left out the acme / letsencrypt stuff since that would be different on caddy anyway
    • You may need to configure caddy to trust the tailscale tunnel IP of the machine on the other end that will be reverse proxying over the tunnel.
    • Traefik requires you to specify which docker network it should use to reach services; I just put anything that should be accessible into “ingress”, as you can see. I’m not sure if my setup supports using a different proxy network per app, but maybe caddy allows that.

    My traefik compose:

    services:
      wireguard:
        image: linuxserver/wireguard # WG peer/key config omitted, see notes above
        container_name: wireguard
        networks:
          - ingress

      traefik:
        image: traefik
        network_mode: "service:wireguard"
        depends_on:
          - wireguard
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro # needed for the docker provider
        command:
          - "--providers.docker=true"
          - "--providers.docker.exposedByDefault=false"
          - "--providers.docker.network=ingress"
          - "--entrypoints.web.address=:80"
          - "--entrypoints.websecure.address=:443"
          - "--entrypoints.web.proxyProtocol.trustedIPs=10.13.13.1" # Trust remote tunnel IP, the WG container is 10.13.13.2
          - "--entrypoints.websecure.proxyProtocol.trustedIPs=10.13.13.1"
          - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
          - "--entrypoints.web.http.redirections.entrypoint.scheme=https"
          - "--entrypoints.web.http.redirections.entrypoint.priority=100"

    networks:
      ingress:
        external: true
    
    And then in a service’s docker-compose:

    services:
      ui:
        image: myapp
        read_only: true
        restart: always
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.myapp.rule=Host(`xxxx.xxxx.xxxx`)"
          - "traefik.http.services.myapp.loadbalancer.server.port=80"
          - "traefik.http.routers.myapp.entrypoints=websecure"
          - "traefik.http.routers.myapp.tls.certresolver=mytlschallenge"
        networks:
          - ingress
    
    networks:
      ingress:
        external: true
    
    (edited to fix formatting on mobile)

  • I’ve done something similar, but I’m not sure how helpful my example will be because I use wireguard instead of tailscale and traefik instead of caddy.

    The principle is the same though. iirc I have my traefik container set to network_mode: "service:wireguard", so the traefik container uses the wireguard container’s network stack. That way traefik also sees the wireguard interface and can receive traffic arriving on the wireguard IP. Then, at the other end of the wireguard tunnel, I use haproxy to pass traffic to the wireguard IP through the tunnel and it automatically hits traefik.

  • Immich has a setting that does automatic photo backup over WiFi; I use the Android app as a Google Photos replacement. You can choose as many folders on your phone as you want (I just do the camera roll), enable backup only over WiFi, and it backs up all the photos in original quality. I self-host the server on my Synology behind a reverse proxy (I can’t forward ports at my current place due to CGNAT), so I can access it from anywhere.

    I believe the app is cross-platform, so the iPhone version should be identical to the Android one.
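
    In case it helps anyone, a heavily condensed sketch of the server side (Immich’s real docker-compose.yml has several more services, like the database, redis, and machine-learning containers, and the port and upload path here are from the official compose as I remember it, so check the current docs):

    services:
      immich-server:
        image: ghcr.io/immich-app/immich-server:release
        volumes:
          - /volume1/photos:/usr/src/app/upload # hypothetical Synology path for the originals
        env_file:
          - .env # DB credentials etc., per the official example
        ports:
          - "2283:2283" # point your reverse proxy at this
        restart: always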

  • Woah federation would be huge!

    Someday I would love to be able to share and receive shared photos / albums to and from users on different servers, especially if it lets me sync the original files so I can keep a copy in case their server goes down. It would also be neat if you could enable ActivityPub, so that your account shows up as a fediverse user that people can follow for public or approved-follower-only posts; Pixelfed compatibility would be super cool.

  • Keep in mind that if you set up RAID using zfs or btrfs (idk how it works with other systems, but that’s what I’ve used) then you also get scrubs, which detect and fix bit rot and unrecoverable read errors. Without that or a similar system, those errors go undetected and your backup system will back up the corrupted files as well.
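
    (A scrub is just a command or scheduled job, e.g. `zpool scrub <pool>` then `zpool status` on zfs, or `btrfs scrub start /mnt/pool` then `btrfs scrub status /mnt/pool` on btrfs; Synology and most NAS distros can schedule these for you.)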

    Personally, one of the main reasons I used zfs, and now btrfs with redundancy, is to protect irreplaceable files (family memories and stuff) from those kinds of errors. I used to just keep stuff on a hard drive, until I discovered loads of my irreplaceable vacation photos were corrupted, including the backups, which had faithfully backed up the corruption.

    If your files can be reacquired, then I don’t think it’s a big deal. But if they can’t, then I think having scrubs or integrity checks with redundancy (so that issues can be repaired), as well as backups with snapshots (so errors or mistakes don’t propagate into your backups), is a necessity. It just depends on how much you value your files.

  • My primary use case is safeguarding my important personal artifacts (family photos, digitized paperwork, encryption key / account recovery / 2FA backups) against drive failure (~2TB), followed by my decently sized Plex server (23TB), Immich, Nextcloud, and various other small things like self-hosted Bitwarden, Grocy, Ollama, and such.

    I run all of my stuff off of a 6-bay Synology (more drives help with capacity efficiency, since double redundancy across 6 drives only costs you about a third of the raw capacity, and I wanted to be protected against drive failures during rebuilds), with an Intel NUC on top that handles Plex/Jellyfin transcoding using QuickSync instead of loading the poor NAS with CPU transcoding. I also run Ollama on the NUC, since it has faster cores than the NAS.
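
    (The math, if it helps: with two-disk redundancy like SHR-2 / RAID6, parity costs 2/N of raw capacity, so 2/6 ≈ 33% on six drives versus 2/4 = 50% on four, which is why more bays are more space-efficient at the same protection level.)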