I have a .xyz domain as my main domain, and there are basically no issues with it. But in one or two places I noticed that the domain gets blocked: a couple of free/open WiFi networks and a couple of DNS servers. Nothing major imho.
- 0 Posts
- 223 Comments
ShortN0te@lemmy.ml to Selfhosted@lemmy.world • Colota 1.x - Open Source Android GPS Tracker with selfhosted backend support (English)
1 · 11 days ago
By default, this application allows unencrypted communication between the app and the server when adding a server. It should be configured by default to enforce TLS encryption; if someone wants to disable this behavior and allow unencrypted communication, that should take extra steps.
As I commented somewhere else, saying it is secure by default because it is turned off is like saying: "The SSH server is turned off by default, so the configuration it ships with does not need to be secure."
ShortN0te@lemmy.ml to Selfhosted@lemmy.world • Colota 1.x - Open Source Android GPS Tracker with selfhosted backend support (English)
12 · 11 days ago
That's like saying:
“The SSH Server configuration does not need to be secure because the SSH Server is turned off by default”
ShortN0te@lemmy.ml to Selfhosted@lemmy.world • Colota 1.x - Open Source Android GPS Tracker with selfhosted backend support (English)
12 · 12 days ago
> Yes, this is what we’re discussing… Are you a bot?
Obviously not. But you keep dodging the point here. And instead of coming up with an argument against my point, you seem to be trying to attack me personally.
ShortN0te@lemmy.ml to Selfhosted@lemmy.world • Colota 1.x - Open Source Android GPS Tracker with selfhosted backend support (English)
12 · 12 days ago
In security and development there is a principle called "secure by default". It means that the default settings are secure, which would include something like enforced transport encryption.
Does this mean the config cannot be changed to fit the threat model? No.
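As an illustration of "secure by default" on Android: cleartext HTTP can be refused globally with a network security config. This is the standard Android mechanism, shown here as a sketch; it is not taken from the Colota project.

```xml
<!-- res/xml/network_security_config.xml, referenced from the manifest
     via android:networkSecurityConfig. With cleartextTrafficPermitted
     set to "false", plain http:// requests fail by default, so TLS is
     enforced unless the user explicitly opts out. -->
<network-security-config>
    <base-config cleartextTrafficPermitted="false" />
</network-security-config>
```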
ShortN0te@lemmy.ml to Selfhosted@lemmy.world • Colota 1.x - Open Source Android GPS Tracker with selfhosted backend support (English)
2 · 12 days ago
> Not sure why you’ve chosen to be indignant about this particular implementation.
We are talking about a tracking app. Most selfhosted projects do not store such private data. You may be able to make that argument for Immich, but only for people who take a picture every 5 minutes.
ShortN0te@lemmy.ml to Selfhosted@lemmy.world • Colota 1.x - Open Source Android GPS Tracker with selfhosted backend support (English)
24 · 12 days ago
If the target server is compromised or seized by law enforcement, the data is gone.
Laying the responsibility in the hands of the user is not OK for such a data-aggregating service. Such highly critical, private and intimate data should be protected and secure by default.
Not even transport encryption is enforced in the project. At first glance, HTTP is allowed on local connections?!? Generate a self-signed TLS cert on start and pin it in the app. Easy.
That other services do not follow these state-of-the-art protection measures is no excuse.
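A minimal sketch of the self-signed-cert-plus-pinning idea above, assuming OpenSSL on the server; the file names and the CN are illustrative, not from the project:

```shell
# Generate a self-signed cert on first start (10-year validity, no passphrase).
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout server.key -out server.crt -subj "/CN=tracker.local"

# Print the SHA-256 fingerprint the app would pin. Pinning this value
# turns even a self-signed cert into authenticated, encrypted transport.
openssl x509 -in server.crt -noout -fingerprint -sha256
```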
ShortN0te@lemmy.ml to Selfhosted@lemmy.world • Colota 1.x - Open Source Android GPS Tracker with selfhosted backend support (English)
35 · 12 days ago
I absolutely agree with you. Such private data should be end-to-end encrypted.
German: netcup.eu
ShortN0te@lemmy.ml to Selfhosted@lemmy.world • Vigil - a self-hosted dashboard that watches your Docker images (English)
171 · 19 days ago
Sorry, but you have posted only one sentence about the project and not even a link to it.
Additionally, with the em dashes in the scripts, which are really popular in LLM-generated texts, I get a bad feeling about it.
ShortN0te@lemmy.ml to Selfhosted@lemmy.world • Jellyfin critical security update - This is not a joke (English)
27 · 23 days ago
> Is it standard practice to release the security updates on GitHub?
Yes.
And then the maintainers of the package in the package repository you use will release the patch there. Completely standard operation.
I recommend you read up on package repositories on Linux, package maintainers, etc.
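On a Debian/Ubuntu-style system, the flow described above looks roughly like this; the jellyfin package name is an assumption (it applies only if you installed it from a repository), so adjust for your distro:

```shell
# Refresh package lists from the configured repositories.
sudo apt update

# See which maintainer-built updates are pending, security fixes included.
apt list --upgradable

# Install the patched build once the package maintainer has published it.
sudo apt install --only-upgrade jellyfin
```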
ShortN0te@lemmy.ml to Selfhosted@lemmy.world • What do you use for your server administration? (English)
13 · 26 days ago
The CLI.
I have used management interfaces like Cockpit in the past, but I do not really like them that much. I have e-mail notifications set up for updates via aptitude, monitor using Prometheus and Grafana, and get additional notifications via Prometheus Alertmanager.
For an easy-to-use Docker interface I use Dockge, since in this use case I found it to be faster, with a good, working, independent interface.
But for the Linux underneath, for all 10-20 servers I manage: the CLI.
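A hedged sketch of the kind of update notification described above: count what a dry-run upgrade would install and mail a summary. The address is a placeholder, a working `mail` setup is assumed, and tools like apticron do this out of the box.

```shell
#!/bin/sh
# Simulate an upgrade (-s = dry run) and count the "Inst" lines,
# i.e. packages that would be installed/upgraded.
updates=$(apt-get -s upgrade 2>/dev/null | grep -c '^Inst')

# Mail a short summary only when something is actually pending.
if [ "$updates" -gt 0 ]; then
    echo "$updates package update(s) pending" \
        | mail -s "apt updates on $(hostname)" admin@example.com
fi
```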
ShortN0te@lemmy.ml to Selfhosted@lemmy.world • Caldav/carddav/webdav recommendations? (English)
3 · 1 month ago
> Looking for a simple card/cal/WebDAV server that runs in docker.
Or maybe just because there is nothing simple about hosting a mail server just to get CardDAV and CalDAV.
ShortN0te@lemmy.ml to Selfhosted@lemmy.world • Caldav/carddav/webdav recommendations? (English)
8 · 1 month ago
Maybe try to explain the struggles you have had, since to my knowledge the posted options are the best and simplest ones out there.
ShortN0te@lemmy.ml to Selfhosted@lemmy.world • noob questions seeking non-noob answers (English)
1 · 1 month ago
That's exactly my point: neither is. But you keep claiming Synology is, compared to others.
ShortN0te@lemmy.ml to Selfhosted@lemmy.world • noob questions seeking non-noob answers (English)
1 · 1 month ago
> I think you are missing the point of how easy it is to fuck things up in a console
No, I think you are. Why should a beginner ever even touch the CLI? You can also SSH into the Synology and fuck things up.
Using a 'friendly environment' like Synology is no guarantee against fucking things up.
> Installing TrueNAS when you have no idea about almost anything is cumbersome, dealing with the millions of options (some of them incompatible with each other) is frustrating, cryptic error codes are discouraging…
What millions of options? You select a drive and set a password, and you're done? That is one step fewer than on Synology.
You brought up TrueNAS. TrueNAS, for example, also gives you safe boundaries and suggestions on how to set things up. Same as Synology. There is literally also a setup wizard for backups.
AND AGAIN, just because you follow the Synology wizards does not mean your data is safe either. You can always fuck things up if you want to.
ShortN0te@lemmy.ml to Selfhosted@lemmy.world • noob questions seeking non-noob answers (English)
2 · 1 month ago
> I see your point, but in this world there are only 2 options: either you have the skills, the knowledge and the time to do it yourself, or you need to outsource it.
But you're not outsourcing it?! You just chose a proprietary provider for a docker compose file and some RAID configuration. Everything is still on you to fuck up.
> Assuming that the OP is a real noob, it is clear that the first 2 prerequisites are missing, making that option unacceptable; then you can only buy something easy enough for the general public.
Reading OP's post again, it is clear that OP is interested in learning those things.
> And on top of that, in a homelab the most sacred thing is the data. Not the service, the data. If you misconfigure a NAS or the automated backup system, it could lead to the worst scenario: the data is lost forever.
The exact same is true for your Synology NAS. Plus the limitations of how Synology thinks you should do backups vs. how it actually suits you.
ShortN0te@lemmy.ml to Selfhosted@lemmy.world • noob questions seeking non-noob answers (English)
7 · 1 month ago
I would absolutely discourage the use of Synology, and probably of any other brand in the NAS realm.
Synology has pulled off some really scummy things in the last few years: their "certified SSDs", where only a whitelist of SSDs could be used in an array; trying to push their own HDDs with warnings and messages meant to worry the user that something is wrong; and retroactively removing transcoding capabilities from their systems.
Those systems are all quite limited for how expensive they are. They are great for simple things, but with the list OP posted you would be heavily limited and have to jump through hoops to get a well-functioning home lab/server.
ShortN0te@lemmy.ml to Selfhosted@lemmy.world • noob questions seeking non-noob answers (English)
1 · 1 month ago
> I’ve heard AMD’s onboard graphics are pretty good these days, but I haven’t tried AMD CPUs on a server.
The main issue is, afaik, still the software support; NVIDIA and Intel are years ahead there.
The benefit of going with a dGPU is that in a few years, when for example AV1 takes off even more, you can just swap the GPU and be done, without swapping the whole system. That at least was my thinking for my setup. My CPU, a 3600X, is probably still good for another 10 years.
No one ever said that the new model would not be useful. But Anthropic hyped it up as a 0-day machine that finds 0-days in every project with ease, and in places where they could not have been found by humans.