I’ve worked with BookStack and found it easy and intuitive. It’s wiki software, not specific to family history.
The lie made into the rule of the world - Ezekiel 23:20
I think it depends on the rate of change, rather than the amount of containers.
At home I do things manually as things change maybe 3 or 4 times a year.
Professionally I usually do set up automated DevOps, because updates and deployments happen almost daily.
At one of my clients, who wants everything on-prem, I use GitLab CI with Ansible. It took three days to set up, and it requires tinkering. But all in all, I like the versatility, consistency and transparency of this approach.
If I were to start over, I’d use pyinfra instead of Ansible, but that’s a minor difference.
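For anyone curious what that looks like, here’s a rough sketch of a GitLab CI job that runs an Ansible playbook. The job name, inventory path and playbook name are placeholders for illustration, not the actual client config:

```yaml
# .gitlab-ci.yml (minimal sketch)
# The runner needs ansible installed and SSH access to the target hosts.
deploy:
  stage: deploy
  script:
    - ansible-playbook -i inventory/production site.yml
  only:
    - main
```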
Stock Raspberry Pi OS and Syncthing sound like the easiest way to do this.
I also dislike the Grafana/Kibana/Elastic behemoth.
You can use rsyslog to centralize the logs. Then there’s tools like this for anomaly detection on those logs.
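As a minimal sketch of what that centralization can look like (hostname and port are placeholders, and TCP is chosen over UDP for reliability):

```
# On each client: /etc/rsyslog.d/forward.conf
# Forward everything over TCP (@@) to a central host; a single @ would use UDP.
*.* @@loghost.example.lan:514

# On the central host: /etc/rsyslog.d/receive.conf
# Load the TCP input module and listen on port 514.
module(load="imtcp")
input(type="imtcp" port="514")
```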
A small application I wrote myself, hosted on the free tier of pythonanywhere.com
Uptime monitoring and notifications
I’ve done cron @reboot keep-one-running <mycommand> before.
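For reference, the crontab entry looks like this (the path is a placeholder; keep-one-running comes from the run-one package on Ubuntu, as far as I recall):

```
# crontab -e
# Start the command at boot; keep-one-running restarts it if it dies
# and ensures only a single instance is running at a time.
@reboot keep-one-running /home/me/bin/mycommand
```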
My guess is the tracker is outdated.
Great news, thanks!
Hopefully the Android releases will be able to follow; as I understand it, the original Syncthing authors will no longer be supporting Android.
Walnut size? I’d recommend the s2m package, squirrel to marmot. Ask for doctor Nutcase
It helps build a decentralized network, useful as most governments are turning increasingly autocratic.
Aah, the ISP’s NAT. Yes, in that context, it’s correct that you can’t port forward.
Perhaps you can STUN through, but you’re unlikely to get a good port.
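To illustrate what “STUN through” means in practice, here’s a rough standard-library-only sketch that builds a STUN Binding Request (per RFC 5389) and can send it to a public STUN server to discover your mapped address. The server shown is Google’s public one; parsing the XOR-MAPPED-ADDRESS out of the reply is left out for brevity:

```python
import os
import socket
import struct

STUN_MAGIC_COOKIE = 0x2112A442  # fixed value defined in RFC 5389

def build_binding_request() -> bytes:
    """Build a minimal STUN Binding Request (type 0x0001, no attributes)."""
    transaction_id = os.urandom(12)  # 96-bit random transaction ID
    # Header: message type, message length (0: no attributes), magic cookie, txid
    return struct.pack("!HHI", 0x0001, 0, STUN_MAGIC_COOKIE) + transaction_id

def query_stun(server: str = "stun.l.google.com", port: int = 19302) -> bytes:
    """Send the request and return the raw response (requires network access)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(3)
        sock.sendto(build_binding_request(), (server, port))
        data, _ = sock.recvfrom(2048)
        return data  # the XOR-MAPPED-ADDRESS attribute holds your public IP:port

request = build_binding_request()
print(len(request))  # header only, so 20 bytes
```

Even when this succeeds, the mapped port you learn may be short-lived or differ per destination, which is why it’s “unlikely to get a good port” behind some NAT types.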
Port forwarding was invented for exactly that
A static IP is helpful but not necessary. Even with NAT and a changing IP there are options, such as:
Quick, but sadly incorrect
It makes things easier, but you have options, such as:
One of these projects might be of interest to you:
https://github.com/Mintplex-Labs/anything-llm
https://github.com/mudler/LocalAI
Do note that CPU inference is quite a lot slower than GPU or the well-known SaaS providers. I currently like the quantized DeepSeek models as the best balance between reply quality and inference time when not using a GPU.
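To give a feel for how you’d talk to one of these, here’s a sketch against LocalAI’s OpenAI-compatible API using only the standard library. The host, port and model name are placeholders for whatever your local instance serves; the actual request is commented out since it needs a running server:

```python
import json
import urllib.request

# LocalAI exposes an OpenAI-compatible /v1/chat/completions endpoint.
# "deepseek-r1-qwen-7b-q4" is a hypothetical quantized model name.
payload = {
    "model": "deepseek-r1-qwen-7b-q4",
    "messages": [
        {"role": "user", "content": "Why is CPU inference slower than GPU?"}
    ],
}

body = json.dumps(payload).encode("utf-8")
request = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=body,
    headers={"Content-Type": "application/json"},
)
print(body.decode("utf-8"))
# To actually send it (requires a running LocalAI instance):
# with urllib.request.urlopen(request) as response:
#     print(json.load(response)["choices"][0]["message"]["content"])
```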