

Would you be able to share more info? I remember reading about their issues with Docker, but I don’t recall whether they switched, or what they switched to. What is it now?
Regarding photos, and videos specifically:
I know you said you are starting out with self-hosting, so your question was focused on that, but I would also like to share my experience with Ente, which has been working beautifully for my family, partner and myself. It is truly end-to-end encrypted, with the source code available on GitHub.
They have reasonable prices, and if you feel adventurous you can even host it yourself. They have advanced search features and face recognition, which all run on-device (since they can’t access your data), and it works very well. They have great sharing and collaboration features and don’t lock features behind accounts, so you can gather memories from other people onto your quota just by sharing a link. You can also have a shared family plan.
To run the full 671B-parameter model (404GB on disk), you would need more than 404GB of combined GPU memory and system RAM (and that’s only to load it; you would most probably want it all in GPU memory to make it run fast).
With 24GB of GPU memory, the largest model from the R1 series that would fit is the 32b-qwen-distill-q4_K_M (20GB in size), available through Ollama (and possibly elsewhere).
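As a rough sanity check on those numbers, the download size of a quantized model tracks the memory needed just to hold its weights: parameter count times bits per weight. A minimal sketch of that arithmetic (the bits-per-weight figures are my approximations, not official specs):

```python
def approx_model_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of a model's quantized weights in GB."""
    # bytes = params * bits / 8; with params in billions this is GB directly
    return params_billion * bits_per_weight / 8

# Full R1: 671B parameters at ~4.8 bits/weight lands near the 404GB download
print(round(approx_model_gb(671, 4.8)))   # ~403
# Distill: 32B parameters at ~4.9 bits/weight (q4_K_M) lands near 20GB
print(round(approx_model_gb(32, 4.9)))    # ~20
```

On top of the weights you also need room for the KV cache and runtime overhead, which is why "more than" the download size is the right way to think about it.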
Ollama is very useful but also rather barebones. I recommend installing Open WebUI to manage models and conversations. It will also be useful if you want to tweak more advanced settings like the system prompt, seed, temperature and others.
You can install Open WebUI using Docker or just pip, which is enough if you only care about serving yourself.
Edit: Open WebUI also renders Markdown, which makes formatting and reading much more appealing and useful.
Edit2: you can also plug Ollama into continue.dev, a VS Code extension that brings LLM capabilities to your IDE.
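For reference, a minimal sketch of what tweaking those settings looks like against Ollama’s own HTTP API (the /api/generate endpoint, system field and options names come from Ollama’s API docs; the model tag and values here are just placeholders):

```python
import json

# Request body for POST http://localhost:11434/api/generate (Ollama's default port).
# Model tag and option values are illustrative, not recommendations.
request = {
    "model": "deepseek-r1:32b",
    "prompt": "Summarize what quantization does to a model.",
    "system": "You are a concise technical assistant.",  # system prompt
    "options": {
        "temperature": 0.7,  # sampling randomness
        "seed": 42,          # fixed seed for reproducible output
    },
    "stream": False,
}

print(json.dumps(request, indent=2))
```

Open WebUI exposes the same knobs per-model and per-chat in its UI, so you rarely need to hit the API directly.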
I have numerous files which I intentionally keep around to improve seeding availability, but I’ve always been bothered by how little they actually get seeded. Yet somehow, while those same files are being downloaded, seeding is great. Is port forwarding to blame here as well? I do not have it enabled.
Alternatively, you don’t even need Podman or any containers, as Open WebUI can be installed simply using python/conda/pip, if you only care about serving yourself:
https://docs.openwebui.com/getting-started/quick-start/
Much easier to run and maintain IMO. Works wonderfully.
Seems the chapter for Jellyfin has been “coming soon” for 3 years, too bad.
I’m not saying it’s not true, but nowhere on that page does the word donation appear. And if it is one, the fact that it is described as a license, tied to a server or a user, causes a lot of confusion for me, especially combined with the fact that there is no paywall but registration is required.
Why use the terms license, server and user? Why not simply call it a donation, with the option of displaying your support by getting exclusive access to a badge, like Signal does?
Again, I’m very happy Immich is free; it is great software and it deserves support, but this is just super confusing to me, and the buy.immich.app link does not clarify things, nor does that blog post.
Edit: typo
Hi and thank you so much for the fantastic work on Immich! I’m hoping to get a chance to try it out soon, with the first stable release!
One question on the financial support page: is it not a donation? There is a per-server and a per-user purchase, but I thought Immich was exclusively self-hosted, is it not? Or is this more a way to say thanks while giving some hints as to how Immich is being used privately? Or is there a way to actually pay to have Immich host a server for you?
Thanks for clarifying!
Tried Calibre for the first time this week. Geez, how simple it was to set up and put my first eBook on my reader. It was all done in a matter of minutes, very intuitively. I didn’t even need to open any documentation. Great tool!
What are the specs of your setup?
Oh sorry it was not obvious to me that this was a crosspost so I didn’t see the lengthy explanation provided! Indeed, my comment makes little sense, apologies.
This happens to me when using VGA and the connector isn’t well seated. Are you using an analog connector like VGA? Can you double check that the connector is well seated on both ends?
Thank you, I will dig into this to see if there’s something I’m missing. I did use the same resources the poster did, but the thread may provide more information.
Thanks for the reply! Can you tell me more about what you mean with “check the efi grub install”?
Edit: to be clear, I have a vanilla initramfs booting properly, which is the one automatically built. I’m just trying to replicate it myself.
This is the way.
I explored whether this was a permission issue, but the permission is the same on the default and my initramfs:
mytestalpine:~# ls -l /boot/initramfs-*
-rwxr-xr-x 1 root root 10734241 Nov 27 22:56 /boot/initramfs-6.6.58-0-lts.img
-rwxr-xr-x 1 root root 17941160 Nov 3 17:39 /boot/initramfs-lts
I hear you, but how much time was Synology given? If it was no time at all (which it seems is what happened here??), that does not even give Synology a chance and that’s what I’m concerned with. If they get a month (give or take), then sure, disclose it and too bad for them if they don’t have a fix, they should have taken it more seriously, but I’m wondering about how much time they were even given in this case.
It’s about online games and anti-cheat. Many companies will not allow their anti-cheat to work on Linux because they “require” kernel-level anti-cheat, a big security and privacy concern.
You can read more about anti-cheat games and their compatibility with Linux here: https://areweanticheatyet.com/
Remove unused conda packages and caches with conda clean --all (add --dry-run to preview first).
If you are a Python developer, this can easily reclaim several to tens of GB.
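If you want to see what you stand to reclaim before cleaning, you can measure the package cache first. A minimal sketch, assuming conda’s default cache location ~/miniconda3/pkgs (yours may differ; conda info lists the actual package cache paths):

```python
import os

def dir_size_gb(path: str) -> float:
    """Total size of all regular files under path, in GB."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total / 1e9

# Assumed default cache path; check `conda info` for yours.
pkgs = os.path.expanduser("~/miniconda3/pkgs")
if os.path.isdir(pkgs):
    print(f"{dir_size_gb(pkgs):.1f} GB in {pkgs}")
```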