This might be my next project. I need uptime management for my services; my VPN likes to randomly kill itself.
Just your normal everyday casual software dev. Nothing to see here.
People can share differing opinions without immediately being on the opposing side. Avoid looking at things as black and white. You can like both waffles and pancakes, just like you can hate both waffles and pancakes.


I haven’t used a guide aside from the official Syncthing getting-started page.
It should be similar to these steps though, I’ll use your desktop as the origin device.
Some things you may want to keep in mind: Syncthing only operates when two or more devices are online. If you are getting into self-hosting a server, I would recommend having the server be the middleman. If you end up going that route these steps stay more or less the same; instead of sharing with the phone, you share with the server, then move to the server’s Syncthing page and share with the mobile. This makes it so both devices use the server instead of trying to connect to each other.

Additionally, if you do go that route, I recommend setting your remote devices on the server’s Syncthing instance to “auto accept”. This makes it so that when you share a folder to the server from one of your devices, it automatically approves it and creates a share, named after the shared folder, in Syncthing’s data directory. (For example, if your folder was named “documents” and you shared it to the server, it would create a share named “documents” wherever you have Syncthing configured to store data.) You would still need to log in to the server instance if you wanted to share those files to /another/ device, but if your intent was only to create a backup of a folder on the server, it removes a step.
Another benefit of the server-middleman approach is that if you ever have to change a device later on down the road, you only have to add one remote device to the server instance, instead of adding your new device to every Syncthing instance that needs access to it.
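If memory serves, the auto-accept setting lives on the device entry in the server’s config.xml (it’s also toggleable in the GUI under the device’s advanced settings). A rough sketch, with the device ID and name as placeholders:

```xml
<!-- Excerpt from the server's config.xml; device id/name are placeholders. -->
<device id="DESKTOP-DEVICE-ID" name="desktop" compression="metadata">
    <address>dynamic</address>
    <!-- Folders shared by this device are approved automatically. -->
    <autoAcceptFolders>true</autoAcceptFolders>
</device>
```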
Additionally, if you already have this structure built but it doesn’t seem to be working, some standard troubleshooting steps I’ve found helpful:


I fall into this category. Went Nvidia back in ’16 when I built my gaming rig, expecting I’d be using Windows for a while since gaming on Linux still wasn’t great at that point. A few years later I decided to try out a 5700 XT (yeah, piss-poor decision, I know) because I wanted to future-proof in case I decided to swap to Linux. The 5700 XT had the worst driver reliability I’ve ever seen in a graphics card, and I eventually got so sick of it that I went back to Nvidia with a 4070. Since then my life has opened up more, so I had the time to swap my gaming rig to Linux, and here we are.
Technically I guess I could still put the 5700 XT back in, and it would probably work better than it does in my media server, since Nvidia seems to have better isolation support in virtualized environments. But I haven’t bothered, mostly because getting the current card to work on my rig was a pain, and I don’t feel like taking apart two machines to play hardware musical chairs.


KeePass is a great way to manage passwords; I use KeePass as well. I also use Syncthing to sync my password database across all devices, with the server acting as the “always on” device so I have access to all passwords at all times. It works amazingly well, because Syncthing can also be set up so that when a file is modified by another device, it makes a backup of the original file and moves it to a dedicated folder (with retention settings so old copies get cleaned out every so often). Life is so much easier.
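That backup-on-change behavior is Syncthing’s file versioning. A sketch of what the staggered flavor looks like in config.xml (folder id and path are placeholders); old copies land in the folder’s .stversions directory by default and get cleaned out once they pass maxAge:

```xml
<!-- Folder entry with staggered versioning; id/label/path are placeholders. -->
<folder id="passwords" label="passwords" path="/home/user/Sync/passwords">
    <versioning type="staggered">
        <!-- How often to check for versions to clean up, in seconds. -->
        <param key="cleanInterval" val="3600"/>
        <!-- Drop saved versions older than 30 days (value in seconds). -->
        <param key="maxAge" val="2592000"/>
    </versioning>
</folder>
```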
For photo access you can look into Immich. It’s a bit more of an advanced setup, but I have Immich watching my photos folder in Syncthing on the server and using that location as its source. This lets me use one directory for both photo hosting and backup/sync.
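For a Docker deployment, the way I’d sketch that wiring is to mount the Syncthing folder read-only into the Immich container, then add it as an external library in the admin UI. The paths here are assumptions for illustration:

```yaml
# docker-compose fragment; host and container paths are placeholders.
services:
  immich-server:
    volumes:
      # Syncthing's photos folder, mounted read-only so Immich only reads it.
      - /srv/syncthing/photos:/mnt/external/photos:ro
```

Then create an external library in Immich’s admin settings pointing at the container-side path (/mnt/external/photos in this sketch).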


I hard agree with this. I would NEVER have wanted to start with containerized setups. I know how I am; I would have given up before I made it past the second LXC. Starting with a generalized one-server-does-everything setup and learning as you go is so much better for beginners. Worst case, they can run Docker as the containerized setup later on and migrate to it. Or they can do what I did: start with a single-server setup, move everything onto a few drives a few years later once I was comfortable with how it all worked, nuke the main server, install Proxmox, and hate life for two or three weeks while learning how it works.
Do I regret that change? No way in hell, but there’s also no way I would recommend a fully compartmentalized or containerized setup to someone just starting out. It adds so many layers of complexity.


15% off a Logitech device purchase for the complete removal of a $100 smart switch. That’s a slap in the face: “Thanks for being a customer, here’s a coupon you can only use if you continue being a customer.”


I don’t think the term “falls behind” is being used in the competitive, entity-vs-entity way you read it here.
I think it’s just being honest with the viewer about hardware and software compatibility. Many go into the quest to swap to Linux expecting that there will always be a replacement, and that’s simply not always true. The biggest thing to expect going in is that it is not a 1:1 transition; your lifestyle, and your expectations of what the OS will provide, will need to change. I think that was the general idea the author was trying to present.
Many move back to Windows because they have incorrect expectations of the transition: they either don’t like change, or don’t want to troubleshoot things that just worked on Windows. Restructuring your life involves sacrifices that usually have to be made during the transition, and those sacrifices can include things that cost money to replace, such as hardware peripherals. Some things are just misconfigurations and can be tweaked once you find out what to change. But for something with an overall lack of support, you either wait for support, replace the item completely (which may or may not have an equivalent), or, if you have the skill set, design your own interface for it.


As someone who went through the 12→13 upgrade last week: do not go directly. I had to roll back twice during the 12→13 upgrade (thank you, Timeshift), so I couldn’t imagine trying to run an 11→13 upgrade.
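For anyone following along: the supported path is one release at a time, so the apt sources should only move from bookworm (12) to trixie (13) after the 11→12 upgrade is fully finished. A sketch of what /etc/apt/sources.list ends up looking like for 13 (and take a Timeshift snapshot before touching it):

```
deb http://deb.debian.org/debian trixie main
deb http://deb.debian.org/debian trixie-updates main
deb http://deb.debian.org/debian-security trixie-security main
```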
I think my only real complaint about this deployment is from a security standpoint: the password for the GitLab Runner container is hardcoded as “changeme”, and when run from an automated script like this, the script itself doesn’t make the user aware of that. The script mentions that you should move credentials.txt, but it never makes you aware of the hardcoded password.
It would be nice if it prompted for a password, or used a randomly generated one instead of that hardcoded value.
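Something along these lines in the deploy script would cover the random-generation route, using `secrets` from the Python standard library (the variable name is just illustrative):

```python
import secrets

# Generate a random runner password at deploy time instead of
# shipping the hardcoded "changeme".
runner_password = secrets.token_urlsafe(18)  # 18 random bytes -> 24 chars

print(runner_password)
```

The generated value would then be written into credentials.txt alongside the other secrets the script already tells you to move.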


I think everyone’s basically hit my complaints with Ubuntu. It’s a very bloated OS with a hard dedication to snaps, which I dislike (but I also hate Flatpak, so yeah).
That being said, if this is your first Linux distribution, you can’t go wrong with Ubuntu. It’s a very beginner-friendly distro. The only other one I would recommend aside from it would probably be Mint, but Ubuntu is going to have quite a few more tutorials and guides written for it.


I synced Immich to Authentik post-deployment with no issue, but I believe my email matched. I don’t recall whether I had to configure my user account on top of the OAuth settings; I believe it was smart enough to link the account with the same email.
If you are using a VM-style deployment, you could snapshot the Immich server ahead of time and just roll back if it fails. That’s what I do for all services when changing things.


I’m in this same boat as well. As someone who ran an XMPP server in the past, then stopped and eventually moved on to Matrix, I have to hard agree: in my experience, XMPP was so much better on the administration side than Matrix, and it’s quite a bit more fleshed out (not to mention the sheer number of clients available). Being able to just log into a management panel and have it handle all the administration for me was super nice, instead of having to ask “is this only available via the API, or via a client, or is this config-only?” From what I’ve seen, tools like that don’t really exist for Matrix.


I defo agree. Keep the domain for a few years with the email server still up, but flag any emails coming into the server so you can go through and unsubscribe/change emails on anything still using the old address.


I think they need to step back a little and address their quality control before they try to jump on the AI bandwagon for captions/subtitles.


I agree. I set my grandparents’ doors up on a timer: if they’re still open at 11 PM, it auto-closes both doors. I’ve gotten the ping a few times now saying “emergency door schedule activated”, meaning they were open and hadn’t been closed beforehand.
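I won’t guess what platform they’re running, but in Home Assistant terms a schedule like that could look roughly like this (entity IDs and the notify service name are placeholders):

```yaml
automation:
  - alias: "Emergency door schedule"
    trigger:
      # Fire every night at 11 PM.
      - platform: time
        at: "23:00:00"
    condition:
      # Only act if at least one door is still open.
      - condition: or
        conditions:
          - condition: state
            entity_id: cover.door_left
            state: "open"
          - condition: state
            entity_id: cover.door_right
            state: "open"
    action:
      - service: cover.close_cover
        target:
          entity_id:
            - cover.door_left
            - cover.door_right
      - service: notify.mobile_app_phone
        data:
          message: "Emergency door schedule activated"
```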


I saw this the other day as well, when I was looking at filebrowser’s GitHub to see if it had SSO support. It’s a shame, really.


I remember that. They actually had quite a bit of trouble when they first tried to establish themselves as an officially licensed company, because their initial user base was sailing the seas.


I don’t doubt it, given some of the translations I’ve seen. I think it would be better for them to just release the main content and then release subtitles further on down the road, but I assume there’s probably some sort of accessibility law that forbids them from doing that.
It just gets super annoying watching a show and having either poor-quality subtitles or subtitles that blatantly spoil parts of the series.
For example, in One Piece:
When you first meet Blackbeard, from memory, he doesn’t say who he is. He just stands there as an old drunkard, and you’re meant to think he’s just some crazy drunk interacting with the main party. You don’t actually find out who he is for a good 5–10 episodes. However, if you had subtitles on, they clearly labeled him as Blackbeard during the first encounter, which ruins the entire revelation.
I use subtitles because I have ADHD, and as part of that I struggle to keep up with audio versus comprehending it; subtitles give me a short delay to catch up, since I can still read the text to understand what happened. When the subtitles are broken, I end up hyper-focusing on that, or get lost and have to rewind. Super annoying.


I should clarify that it depends on your definition of “fan”. When you’re making a derivative work, there are two versions: the everyday sense, a person who is enthusiastic about the content, and the intellectual-property sense, someone making the work for non-commercial reasons under fair use (or the country’s equivalent). However, once you start requiring money for the work, it removes those protections and generally changes which definition applies.
Additionally, I agree a donation jar would be much better, but even that has been shown not to resolve all liability: fan projects have been taken down for having a donation button even though the project itself was free. Heck, projects have been taken down for having advertisements on the project’s website, despite the ads having nothing to do with the project itself.


The implication of that is weird to me. I’m not saying the horse is wrong, but that’s such a non-standard solution: it’s implementing a CGNAT-style restriction without the benefits of CGNAT. They would need to only allow internal-to-external connections unless the connection was already established. How does standard communication still function that way? I know it would break protocols like basic UDP, which is fire-and-forget without internal prompting.
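To make the “established-only” idea concrete, here is a toy sketch of connection tracking (class name and addresses are made up for illustration): outbound traffic records a flow, and an inbound packet passes only if it matches a recorded flow, which is exactly why unsolicited inbound UDP dies.

```python
# Toy model of an "established-only" inbound filter, like a stateful
# firewall or CGNAT: state is created only by inside-out traffic.

class StatefulFilter:
    def __init__(self):
        self.established = set()  # recorded (local, remote) flows

    def outbound(self, local, remote):
        # Internal-to-external traffic always passes and records a flow.
        self.established.add((local, remote))
        return True

    def inbound(self, remote, local):
        # External-to-internal traffic passes only for an existing flow.
        return (local, remote) in self.established

fw = StatefulFilter()
fw.outbound("10.0.0.5:5000", "203.0.113.9:443")
print(fw.inbound("203.0.113.9:443", "10.0.0.5:5000"))    # reply to an established flow: allowed
print(fw.inbound("198.51.100.7:9999", "10.0.0.5:5000"))  # unsolicited inbound: dropped
```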