Interface configuration and DNS resolution are managed by different systems. Their file structures are different. It’s been like this for many decades, and changing it is just not worth breaking existing systems.


No numbers, no testimonials, not even anecdotes… “It works, trust me bro” is not exactly convincing.
If this is as significant an issue as you imply, please link some credible sources.
As far as I can tell, the “Chinese server” (or EU server) is just a public ID and Relay server, and necessary for the application to function unless a self-hosted server is used.
You can host the open-source ID and Relay servers for simple remote access at no cost. The pro subscription is mainly about account and device management.
services:
  hbbs:
    container_name: hbbs
    image: rustdesk/rustdesk-server:latest
    command: hbbs
    volumes:
      - ./data:/root
    network_mode: "host"
    depends_on:
      - hbbr
    restart: always
  hbbr:
    container_name: hbbr
    image: rustdesk/rustdesk-server:latest
    command: hbbr
    volumes:
      - ./data:/root
    network_mode: "host"
    restart: always
Mount the network share (fstab or mount.cifs), and pass the login using the username= and password= mount options. Then point the volume at the mount point’s path.
https://www.mattnieto.com/how-to-mount-an-smb-share-to-a-docker-container-step-by-step/
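A rough sketch of the steps, where the share path, mount point, and credentials are all placeholders you’d replace with your own:

```shell
# Placeholders throughout: //nas.local/share, /mnt/share, myuser, mypass.
# Mount the share on the host, either ad hoc:
sudo mount -t cifs //nas.local/share /mnt/share \
    -o username=myuser,password=mypass,uid=1000,gid=1000

# ...or persistently via /etc/fstab (credentials= keeps the password
# out of the world-readable fstab):
#   //nas.local/share  /mnt/share  cifs  credentials=/etc/smb-credentials,uid=1000,gid=1000  0  0

# Then bind the host mount point into the container, e.g. in docker-compose:
#   volumes:
#     - /mnt/share:/data
```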


It’s possible that, when the ISP revokes the public address and assigns a new one, the DNS record isn’t updated immediately and still points to the old address. Then every new request would be sent to the old, invalid address.
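The stale-record window can be illustrated with a toy TTL cache. This models no real resolver, it just shows why clients keep getting the old address until the cached answer expires:

```python
import time


class ToyDnsCache:
    """Minimal TTL cache: an answer stays valid until its TTL expires,
    even if the authoritative record has already changed upstream."""

    def __init__(self):
        self._store = {}  # name -> (address, expiry timestamp)

    def put(self, name, address, ttl):
        self._store[name] = (address, time.monotonic() + ttl)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None
        address, expiry = entry
        if time.monotonic() >= expiry:
            del self._store[name]  # expired: forces a fresh upstream lookup
            return None
        return address  # possibly stale if the upstream record changed


cache = ToyDnsCache()
cache.put("home.example.org", "203.0.113.7", ttl=300)
# If the ISP reassigns the address now, every client hitting this cache
# still gets 203.0.113.7 for up to 300 seconds.
print(cache.get("home.example.org"))
```

Lowering the TTL on the dynamic DNS record shrinks that window at the cost of more queries.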
And this is where I start shilling for Tailscale. It’s a Wireguard-based mesh VPN that is designed to work from behind firewalls, NAT, and CGNAT. It has its own internal split DNS provider, and probably some mechanism to handle public address changes that is transparent to the tunnelled traffic. You can use it to share the server with only the devices that have the client installed, or expose the server to the internet.
I’ve got it set up on my OPNsense firewall as a subnet router that advertises the subnet where my servers are, and I often stream from Jellyfin over it. There’s some overhead, but it’s never been disruptive.


What sounds like gatekeeping is often a strongly worded emphasis on having the prerequisite knowledge to not just host your services, but do it in a way that is secure, resilient, and responsible. If you don’t know how to set up a network, set up resilient storage, manage your backups, set up HTTPS and other encryption, manage user authentication and privileges, and expose your services securely, you should not be self-hosting. You should be learning how to self-host responsibly. That applies to everything from Debian to Synology.
Friends don’t let friends expose their networks like Nintendo advises.


At work, we use PiSignage for a large overhead screen. It’s based on Debian and uses a fullscreen Firefox running in the labwc compositor. The developer advertises a management server (cloud or self-hosted) to manage multiple connected devices, but it’s completely optional (superfluous in my opinion) and the standalone web UI is perfectly usable.


You can absolutely use it without a reverse proxy. A proxy is just another fancy HTTP client that contacts the server on the original client’s behalf and forwards the response back to it, usually wrapped in HTTPS. A man in the middle that you trust.
All you have to do is expose the desired port(s) to all addresses:
# ...
    ports:
      - 8080:8080
…and, obviously, set the URL environment variables to localhost or whatever address the server uses.


I don’t know which feature you mean, can you link the documentation?


I used it for a while, and it’s a decent solution. Similar to Tailscale’s subnet router, but it always uses a relay and doesn’t do all the UDP black magic. I think it uses TCP to create the tunnel, which might introduce some network latency compared to Tailscale or bare Wireguard.
Right… my mistake, I guess I had SSH config entries in Termux and never questioned whether SSH was using those or DNS.
Still, try to find some way to check which server is being queried. It might reveal connectivity problems with the local DNS server.
Install Termux, then use either the dig or nslookup command to query the DNS name, and check which DNS server is queried. If it’s the private server’s address, you might be having connectivity issues. If it’s 100.100.100.100, the resolver is still trying to query Tailscale’s MagicDNS.
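Something like the following, where example.internal stands in for your private DNS name. The line to look for in dig’s output is the one starting with “;; SERVER:”:

```shell
# example.internal is a placeholder for your private DNS name.
# dig prints the answering resolver near the bottom of its output,
# on a line like:  ;; SERVER: 192.168.1.1#53(192.168.1.1)
dig example.internal

# nslookup prints the server at the top of its output instead.
nslookup example.internal
```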
i3 has tabbed windows, and Sway, being a drop-in replacement for i3, has them too.
Hyprland has window groups: https://wiki.hypr.land/Configuring/Dispatchers/#grouped-tabbed-windows
Niri has a feature like that, but a little different since it’s a scrolling tiler. A column that contains two or more windows can be switched to tabbed mode, which displays one window at a time with full height, but you can’t have a tabbed group that is a member of a column, only full tabbed columns.
private dns setting of android
Probably. If that setting is enabled, Android (including Graphene) defaults to 8.8.8.8 if the higher-priority DNS servers (manual or received from DHCP) don’t support DNS-over-TLS or DNS-over-HTTPS.


Proxmox is my number one choice. It’s based on Debian, and has an excellent, extremely straightforward web UI for managing virtual machines and LXC containers.
It’s perfectly reasonable from the perspective of corporate scum: take away a standard feature, then sell it back as an extra. As far as I know, the modem still had UPnP for applications that rely on it.
No, I got it from the horse’s mouth: my WAN address was publicly routable all along, the ISP just disabled those NAT-related features remotely.
I finally got my ISP to enable bridge mode on my modem.
I also learned that I didn’t lose port forwarding and related services because I had been moved behind CGNAT or transitioned to IPv6 – they simply no longer offer port forwarding to residential customers. Ruminate on the implications of that statement so I’m not the only one with blood pressure in the high hundreds.
That sentence tells me that you either don’t understand or consciously ignore the purpose of Anubis. It’s not to punish the scrapers, or to block access to the website’s content. It is to reduce the load on the web server when it is flooded by scraper requests. Bots running headless Chrome can easily solve the challenge, but every second a client is working on the challenge is a second that the web server doesn’t have to waste CPU cycles on serving clankers.
PoW is an inconvenience to users. The flood of scrapers is an existential threat to independent websites. And there is a simple fact that you conveniently ignored: it fucking works.
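The asymmetry that makes this work can be sketched in a few lines. This is a generic hashcash-style proof of work, not Anubis’s actual implementation; the challenge string and difficulty are arbitrary:

```python
import hashlib
import itertools


def solve(challenge: str, difficulty: int) -> int:
    """Burn CPU until sha256(challenge + nonce) starts with `difficulty`
    zero hex digits. Expected cost grows ~16x per extra digit."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce


def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    """Checking a solution costs the server exactly one hash."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)


nonce = solve("per-visitor-token", difficulty=4)  # thousands of hashes
assert verify("per-visitor-token", nonce, 4)      # one hash
```

Solving takes the client thousands of hashes; verifying takes the server one. That asymmetry is the whole point: the cost lands on the flood of clients, not on the server.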