

Yea, but he wouldn’t need to handle that; I do all his setup. He just has to click the shortcut that opens the game, just like he does currently.
Just your normal everyday casual software dev. Nothing to see here.
People can share differing opinions without immediately being on the opposing side. Avoid looking at things as black and white: you can like both waffles and pancakes, just like you can hate both waffles and pancakes.


We’ve all been there. On my first technical build I struggled for 45 minutes trying to figure out why I was getting zero display whatsoever, only to find out that I’d plugged that damn HDMI cable into the wrong port, and the board had disabled everything, including the POST and splash screens, from outputting through the motherboard’s port.


You aren’t the only one. I had such a painful onboarding process with Nextcloud, from the Docker setup to the speed of it to the UI, that I just gave up and decided to use a combination of Immich and Syncthing instead.


My grandfather’s reason for it: “It will be too different from my current system.”
… the only things he does are use the web browser and play Bookworm Deluxe, which I have confirmed does work via Wine. I was recommending he install an OS called Q4OS, which I have on my laptop, and I showed him a side-by-side comparison of Q4OS vs Windows. For a point of reference, this is what Q4OS looks like
I think he is just too scared of change.


Fair. The first thing I teach anyone who gets a dual boot up and running is how to put Boot-Repair-Disk on a flash drive and how to run the system repair from it (easy enough, since it autoruns). It fixes most of the basic BS that Windows can do to a Linux install.
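For anyone wanting to set the same thing up: getting the Boot-Repair-Disk ISO onto a flash drive is just a raw image write. A sketch, where the ISO filename and /dev/sdX are placeholders for whatever you actually downloaded and whatever device your stick shows up as (double-check with lsblk first, dd will cheerfully overwrite the wrong disk):

```shell
# identify the flash drive first; /dev/sdX below is a placeholder
lsblk
# write the ISO raw to the stick (this destroys its current contents)
sudo dd if=boot-repair-disk-64bit.iso of=/dev/sdX bs=4M status=progress conv=fsync
```

After that, booting the stick drops you into the autorunning repair environment.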


I guess that really depends on the equipment, though. Some devices automatically enter pairing mode the first time you turn them on, so all that had to be done was click it in the Bluetooth menu, but they might not auto-enter pairing mode on later power-ons. So it’s unlikely the user ever knew they were pairing it; they just clicked through the prompts like many do.
I somewhat agree with their mentality on post-2022 Debian, since they changed the default and made it harder to disable non-free from the start. But from what I understood by reading the FAQ page, even prior to Bookworm it wasn’t endorsed, because it had the toggle in the first place, which I find super weird.
It didn’t until 2022 or so. It has had a toggle to turn the non-free repos on or off for as long as I can remember, but starting around 2022 they changed the default to allow non-free (and also apparently made it a pain in the butt to disable in the live installer, because it’s a boot parameter now instead of a toggle).
They actually explain why they don’t endorse Debian in the link the person above you added. Apparently, since you /can/ enable the non-free repos in the installer, it doesn’t qualify as 100% free. I don’t agree with that and find it weird, but that’s how they define it.
It’s not just by default, it seems. They excluded Debian because it had a toggle letting you choose to add non-free during install (pre-2022), so it seems their criterion is any kind of affiliation with non-free software.
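For context on what’s actually being toggled: the non-free side of Debian is just extra repo components in apt’s sources. A sketch of what a fully enabled post-Bookworm /etc/apt/sources.list looks like (the component names are the real ones; the mirror URLs are the defaults):

```text
deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware
deb http://security.debian.org/debian-security bookworm-security main contrib non-free non-free-firmware
```

A “100% free” install just drops everything after `main` (and `contrib`, since contrib depends on non-free stuff). `non-free-firmware` is the new component Debian split out around the Bookworm change being discussed.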


This entire thing has made me really rethink whether I want to swap to the new repo or not.
Why was there no communication about it? The gplay repo maintainer wasn’t informed of anything, and no public notice was given to anyone; there was just a transfer of the repo and a status issue here explaining it.
Obviously the act is genuine, since they were able to keep the original keys, but this entire process seemed really sketchy.
I’m also not happy that the first thing they seem to have done was remove checksums, but that might be a temporary thing.
I also just noticed that it looks like they removed the entire public key for it. If they have the original private keys, continuing to use the existing public keys shouldn’t be an issue, right?


One of my drives crippled itself a few days back; not sure what caused it. It couldn’t be resolved without a host restart, which was unfortunate. SMART isn’t failing and the drive has been working fine since, so I’m chalking it up to a weird Proxmox bug or something.
I fully expected I was going to need to do a rollback on the entire drive after that restart, though. I still may have to if it reoccurs.
I have Proxmox Backup Server backing up to an external drive nightly, and then every 2 or 3 weeks I also back up to cold storage which I keep offsite. (This is bad practice, I know, but I have enough redundancy in place for personal data that I’m OK with it.)
For critical info like my personal data, I have Syncthing syncing to 3 devices, so for personal info I have roughly 4 copies (across different devices) + the PBS + a potentially dated offsite copy.


Despite recommendations, I run PBS alongside the standard server, bare metal. I don’t store the backups on the same system; they go to an external drive (which gets an offline copy every once in a while). I just don’t like the idea of having PBS in a virtual environment; it’s one more layer that could go wrong in a restore process.
The implication of that is weird to me. I’m not saying that the horse is wrong, but that’s such a non-standard solution: it’s implementing a CGNAT-style restriction without the benefits of CGNAT. They would need to only allow internal-to-external connections unless the connection was already established. How does standard communication still function that way? I’d think that would break protocols like basic UDP, since that’s fire-and-forget without internal prompting.
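For what it’s worth, that “only established connections get back in” behavior is just standard stateful filtering, and it’s how most home routers already work. A minimal nftables sketch of the idea (interface names are placeholders):

```text
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept   # replies to traffic we initiated
        iifname "lo" accept
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established,related accept
        iifname "lan0" oifname "wan0" accept  # internal -> external only
    }
}
```

One note on the UDP point: conntrack tracks UDP flows as pseudo-connections too, so a reply to an outbound UDP packet (DNS being the classic case) still counts as “established” and gets back in. What breaks is unsolicited inbound traffic, which is exactly what this setup is trying to block.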
This might be my next project. I need uptime management for my services; my VPN likes to randomly kill itself.
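A minimal watchdog sketch for that kind of thing, in Python. The probe host/port and the systemd unit name are placeholders for whatever your VPN setup actually uses:

```python
import socket
import subprocess


def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def check_once(host: str = "10.0.0.1", port: int = 443,
               unit: str = "wg-quick@wg0") -> str:
    # host and unit are hypothetical placeholders; swap in your VPN's
    # far-side peer and whatever service actually runs the tunnel
    if is_reachable(host, port):
        return "ok"
    subprocess.run(["systemctl", "restart", unit], check=False)
    return "restarted"
```

Run `check_once()` from a cron job or a systemd timer every minute or so and the tunnel gets kicked back up shortly after it dies, instead of whenever you notice.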


I haven’t used a guide aside from the official Syncthing getting-started page.
It should be similar to these steps, though; I’ll use your desktop as the origin device.
Some things you may want to keep in mind: Syncthing only syncs when two or more devices are online. If you’re getting into self-hosting a server, I’d recommend having the server be the middle man. If you go that route, these steps stay more or less the same; it’s just that instead of sharing with the phone, you share with the server, then go to the server’s Syncthing page and share with the phone. That way both devices talk to the server instead of trying to connect to each other.
Additionally, if you do go that route, I recommend setting your remote devices on the server’s Syncthing instance to auto-accept. That way, when you share a folder to the server from one of your devices, it’s automatically approved and a share is created, named after the shared folder, in Syncthing’s data directory. (For example, if your folder was named “documents” and you shared it to the server, it would create a share named “documents” wherever you’ve configured data to be stored.) You’d still need to log in to the server instance if you want to share those files on to /another/ device, but if your intent was only to back up a folder to the server, it removes a step.
Another benefit of the server-middleman approach is that if you ever change a device later down the road, you only have to add one remote device to the server instance, instead of adding your new device to every Syncthing instance that needs access to it.
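If you do end up running the server instance in Docker, a minimal sketch of what that looks like. This assumes the linuxserver.io image and example paths/IDs; adjust for your setup:

```yaml
services:
  syncthing:
    image: lscr.io/linuxserver/syncthing:latest
    container_name: syncthing
    environment:
      - PUID=1000        # example user/group IDs
      - PGID=1000
    volumes:
      - ./config:/config # Syncthing's own config
      - ./data:/data     # example data dir where auto-accepted shares land
    ports:
      - "8384:8384"          # web UI
      - "22000:22000/tcp"    # sync protocol
      - "22000:22000/udp"
      - "21027:21027/udp"    # local discovery
    restart: unless-stopped
```

The auto-accept behavior mentioned above is then a per-device checkbox (“Auto Accept”) in the web UI on port 8384.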
Additionally, if you already have this structure built but it doesn’t seem to be working, here are some standard troubleshooting steps I’ve found helpful:


I fall into this category. I went Nvidia back in ’16 when I built my gaming rig, expecting I’d be on Windows for a while since gaming on Linux still wasn’t great at that point. A few years later I decided to try a 5700 XT (yea, piss-poor decision, I know) because I wanted to future-proof in case I swapped to Linux. The 5700 XT had the worst driver reliability I’ve ever seen in a graphics card, and I eventually got so sick of it that I went back to Nvidia with a 4070. Since then my life opened up more, so I had the time to swap my gaming rig to Linux, and here we are.
Technically I guess I could still put the 5700 XT back in, and it would probably work better there than in my media server, since Nvidia seems to have better isolation support in virtualized environments. But I haven’t bothered, mostly because getting the current card working on my rig was a pain and I don’t feel like taking apart two machines to play hardware musical chairs.


KeePass is a great way of managing passwords; I use it as well. I also use Syncthing to sync my password database across all my devices, with the server acting as the always-on device, so I have access to all my passwords at all times. It works amazingly well, because Syncthing can also be set up so that when a file is modified by another device, it makes a backup of the original file and moves it to a dedicated folder (with retention settings, so old versions get cleaned out every so often). Life is so much easier.
For photo access you can look into Immich. It’s a slightly more advanced setup, but I have Immich watching my photos folder in Syncthing on the server and using that location as its source. This lets me use one directory for both photo hosting and backup/sync.
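The file-versioning behavior mentioned above is Syncthing’s built-in “Staggered File Versioning”. It’s a per-folder setting in the web UI, and what it writes into config.xml looks roughly like this (the folder id/path are examples; the param keys are the real ones, in seconds):

```xml
<folder id="passwords" label="Passwords" path="/data/passwords" type="sendreceive">
    <versioning type="staggered">
        <param key="cleanInterval" val="3600"/>   <!-- scan for expired versions hourly -->
        <param key="maxAge" val="2592000"/>       <!-- keep old versions up to 30 days -->
    </versioning>
</folder>
```

Replaced files then land in a `.stversions` folder next to the share, which is the “dedicated folder” behavior described above.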
I’m just chiming in to say that while the documentation tells you how to set up external access, there are multiple open issues on the GitHub about unauthenticated endpoints: if you already know what’s on the server, you can confirm that it’s there.
So I wouldn’t use a standard naming convention, because with that knowledge, someone who cares could take common names that might be on the server, combine them with the common formats they’d be stored in, and confirm they’re there via those endpoints.