• 2 Posts
  • 88 Comments
Joined 1 year ago
Cake day: June 24th, 2024


  • philpo@feddit.org to Selfhosted@lemmy.world · Beyond Pi-Hole · 8 days ago

    I have expanded my setup over the years. And tbh, I reached so many stages where I read up on how Pi-hole or AdGuard achieved this and that, and every time it was like "damn, if you want more than the basics they are actually more complicated. With Technitium I just have to look this up and it does it by the book." That's so refreshing.





  • philpo@feddit.org to Selfhosted@lemmy.world · TIL about Wiki.js · 16 days ago

    Tbh: I haven't found a really good replacement yet (we are simultaneously coming off Confluence as well, and that is even harder).

    What we tried:

    • BookStack: I.can.not.understand.what.people.like.about.it. Period. From my point of view it's one of the worst systems on the market. Why? The fact that it only allows three levels of hierarchy, the fact that by default all your images are public and their recommended solution is security by obscurity instead of properly handling it (which it can do), and their absolutely abhorrent permission handling.

    • XWiki: It's… clumsy. Possibly the most capable one, but it's Java, munches resources like they are free, and it's bothersome to set up/get working. Once it works it's extremely capable, especially from a business point of view. It's one of the close contenders for my Confluence customers atm.

    • DokuWiki has become pretty capable, but it takes a good theme and a few modules to be "up to modern standards". The second close contender.

    • Another major contender is also BlueSpice. Will look into that next week.

    • Last but not least, Outline is also an idea. Currently looking into that.

    • For my personal reference, especially for everything self-hosted, I used to maintain a fairly extensive Wiki.js, but I have found it more and more bothersome as there was always a split between the configuration assets and the wiki. So nowadays it's often more integrated and consistent to use my Git repository (Forgejo) to keep my documentation as well.

    • The same approach is also a nice one for my work, and we are still discussing whether we might "make it work" with our project management (Redmine) and its wiki component.

    • Lastly, for a personal wiki TiddlyWiki might be enough, btw.



  • philpo@feddit.org to Selfhosted@lemmy.world · TIL about Wiki.js · 16 days ago

    Yeah, as many said: It's dead. I was heavily invested in Wiki.js but cannot recommend it to anyone anymore due to the antics of the developer. Even if the mysterious new major version that is supposed to fix every issue comes out at some point, as long as the development policies don't change it's not worth it.

    I am currently actively moving everything away from it.




  • In terms of software: Agent NVR is imho currently one of the easiest and most compatible camera software systems available for free. It runs on a Pi, even though I would absolutely not recommend one (use a proper x64 SBC like the ZimaBoard instead - it makes a lot of things easier).

    Camera-wise, Dahua, Hikvision and Foscam are far better than Reolink, imho, but they most definitely need a separate network or a block so they don't access the internet.



  • philpo@feddit.org to Selfhosted@lemmy.world · emergency remote access · 1 month ago

    I use an SXT, as I got it cheap, but the wAP LTE kits, the LTAP mini or the hAP ax lite should do as well - software-wise they are all the same anyway. (Just watch out for hardware without an LTE modem card, and be aware of the difference between LTE-M and LTE, as in the KNOT.)

    Sometimes you find decent older ones on eBay as well.


  • philpo@feddit.org to Selfhosted@lemmy.world · emergency remote access · 1 month ago

    I use a cheap MikroTik LTE router as a second route. It has the smallest data plan my provider offers - but it's enough for maintenance, and if I need more because the main line is faulty, it's the same provider's fault and they pay the bill anyway.

    It mainly goes into the OPNsense as a second gateway, but it also allows me to VPN in and reboot the OPN if needed.

    If the OPNsense were totally fucked, in theory I could run the network directly over it, but that would be nasty.

    A friend of mine actually has a pretty nifty solution, but he is an absolute pro at these things. He has a small device (don't ask me which SBC exactly) that pings and checks (I think DNS and an HTTP check are included as well) various stages of his network, including his core switch, firewall and DSL modem. If one of them freezes, the device sends a data packet via LoRaWAN. He can then send a downstream command to reboot the devices.
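
    No idea what his exact implementation looks like, but a minimal sketch of such a watchdog loop could look roughly like this - the addresses and the LoRaWAN helper are placeholders, not his setup:

    ```python
    import subprocess
    import time
    import urllib.request

    def ping(host: str) -> bool:
        # Single ICMP echo request via the system ping binary, 2 s timeout.
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "2", host],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    def http_ok(url: str) -> bool:
        # Plain HTTP reachability check with a short timeout.
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status < 500
        except OSError:
            return False

    def send_lorawan_alert(message: str) -> None:
        # Placeholder: hand the alert to whatever LoRaWAN module is attached.
        print("ALERT via LoRaWAN:", message)

    # Stages of the network to check, innermost first (addresses are made up).
    CHECKS = [
        ("core switch", lambda: ping("192.168.1.2")),
        ("firewall",    lambda: ping("192.168.1.1")),
        ("DSL modem",   lambda: http_ok("http://192.168.100.1")),
    ]

    while True:
        for name, check in CHECKS:
            if not check():
                send_lorawan_alert(f"{name} is not responding")
        time.sleep(60)
    ```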


  • I have central (water-circuit-based) heating with individual control per room. Additionally I have a weather station on my roof that tracks the sun, wind, temperature, etc., presence detectors in almost all rooms, and electric blinds. The components are all KNX-based; the logic part is Home Assistant-based.

    Basically what we do: I have a "normal mode" that is supported by two add-on modules. Normal mode means:

    • On schooldays the system tracks when school starts. If no one is present in the kids' rooms for more than 30 min, it assumes the kid is gone and goes into energy-saving mode for that room (18°C instead of 21°C). The system then looks at when the kid is likely to come back and brings the room temperature back up in time.

    • Our offices are always at the energy-saving temperature and only go to normal temperature once someone has been there for 15 min or one of our computers is switched on - both my wife and I work from home full time, but travel a fair bit.

    • The system tracks whether our mobile phones are "pingable" locally. If they aren't for 30 min, it assumes we are all gone and puts the whole house into "away" mode, including reducing the temperatures. Then it looks at our Outlook calendars (and the school schedule) and brings the temperature back up as required.

    • Additionally, a room that has a window open is always cut off from heating, and the system sends a message when a window has been open for a certain time while the outside temperature is either too hot or too cold.

    Additionally we have two prediction-based modules. The system looks at three different weather forecasts (my area is a bit of a problem for these) and calculates a mean expected minimum and maximum daytime temperature.

    If the expected max and min are below a certain point it switches on "winter mode" - the system then tries to keep the shutters up as much as possible and open them as early as possible (based on the sun position) so the house absorbs as much sun as possible. Doesn't help that much, but at least a bit. Additionally the time for "open window" notifications is reduced.

    If the expected max is above a certain temperature the system goes into summer mode. Then it's basically vice versa: the system tries to keep the blinds/shutters down as much as possible according to the position of the sun and opens them only after the sun has passed. That works fairly well and reduces the room temperature significantly - in the worst room around 3.8°C on average. It also reminds the inhabitants to open windows in the morning when it's still cold and to close them in time.
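
    Not the actual Home Assistant logic, but a rough Python sketch of that mode decision, with made-up thresholds:

    ```python
    def pick_mode(forecast_max_temps, forecast_min_temps,
                  winter_max_threshold=8.0, winter_min_threshold=0.0,
                  summer_max_threshold=26.0):
        # Average the forecasts from several providers and pick a mode for the day.
        expected_max = sum(forecast_max_temps) / len(forecast_max_temps)
        expected_min = sum(forecast_min_temps) / len(forecast_min_temps)

        if expected_max < winter_max_threshold and expected_min < winter_min_threshold:
            return "winter"  # shutters up early, soak up as much sun as possible
        if expected_max > summer_max_threshold:
            return "summer"  # blinds down while the sun hits, ventilate early
        return "normal"

    # Example: three providers disagree slightly, the mean decides.
    print(pick_mode([27.0, 29.5, 31.0], [16.0, 17.5, 18.0]))  # -> summer
    ```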


  • Syncthing and Nextcloud are not a good backup solution. Like, ever. Potentially they aren't a backup solution at all - they can even cause data loss.

    You sadly didn't tell us much about what you are actually trying to back up and what your infrastructure looks like.

    If I understand you correctly, you want to centralise the files that are currently spread across a diverse set of devices into central file storage on your server and back up from there. Right? That's a fair goal and something I absolutely do myself - and both Nextcloud and Syncthing will help you make the files accessible to your devices.

    Now, back to the backup part.

    You basically want three things from backups: they need to be reliable (it doesn't help if you can't access your files anymore because they are corrupted), you want them to be as unaffected by any potential risks as possible, and let's face it, you probably want them cheap. The second part basically dictates that for an online backup you want something that can do versioning, so corrupted data (e.g. from ransomware) doesn't simply overwrite the good copies.

    My current approach is: I have an internal backup server (see below), an external backup in the cloud, and a cold storage backup in a bank safe. Sounds like a lot? We will see.

    Let's look at cloud storage first. There are a multitude of solutions available for free, with Duplicati, UrBackup or goMFT being some fairly popular ones - I personally use Duplicati. These periodically scan the folders for changes, encrypt the files and send them to a cloud provider of your choice (e.g. an S3 bucket), and to some extent they can also do the versioning. (Although it's safer to regulate that via a bucket policy, as otherwise the application needs delete rights - which means it could in theory delete all the data when compromised.) The main benefit is the ease of access - you need to restore a single file? Done fast and easy. Not so much for a whole setup: restoring everything can get quite expensive.
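
    As a sketch of the "versioning enforced on the bucket side" idea: with an S3-compatible provider you can usually switch versioning on once via the API, so the backup tool itself never needs delete rights. The bucket name below is a placeholder and credentials come from your normal AWS/S3 config:

    ```python
    import boto3

    # One-time setup: turn on object versioning for the backup bucket, so that
    # overwritten or deleted backups keep their previous versions on the
    # provider side even if the backup tool's credentials get compromised.
    s3 = boto3.client("s3")  # endpoint/credentials from the usual AWS config
    s3.put_bucket_versioning(
        Bucket="my-backup-bucket",  # placeholder bucket name
        VersioningConfiguration={"Status": "Enabled"},
    )
    ```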

    If you use ZFS there is also the option to use ZFS send for backups, but as there is currently no reliable European Union ZFS send provider I am aware of (rsync.net does this, but is US-based), I legally cannot use them. So no experience with that.

    To back up clients completely, as well as VMs/LXCs, it might also make sense to use a designated backup server, e.g. the Proxmox Backup Server. These do require local (as in "where the PBS is running") storage, though, so a local PBS with cloud storage behind it doesn't work. (There is a "hosted PBS" service available from Tuxis, though. It works really well.) But it can make sense to let a ZimaBlade run a few old hard drives for a few hours a day for that.

    For offsite and offline backup - as a full restore from the cloud is always expensive and time-consuming - I also use two USB hard drives. One is always stored in a locker in a bank vault, and every few months I swap the drives - so in case of a full server loss I would only need to restore the state of an (at most) 4-month-old server from USB and then update stuff from the cloud for the 4 months after that.

    Now, to be extra sure I also burn the most important files (documents about the house, insurances, degrees, financial and tax data, healthcare records, photos of lifetime events, e.g. weddings, birthdays, births, graduations, as well as "emergency data restore how-tos" and password files - basically all the stuff I want to make sure my heirs/kids have access to if I die) onto archival Blu-ray M-Discs (important: not normal discs!). They are supposed to last far longer than normal Blu-rays and most consumer-accessible media. These are stored locally, in the safe, and at the court that holds our will. The reason for that? Powered-off hard drives lose data quite fast, and if my wife and I perish at the same time, e.g. because we have a car crash or the house burns down, the issue is time: the cloud backup might not be available anymore because our bank accounts are frozen and therefore the backup is no longer paid for. The bank safe is not accessible for a long time for the same reason. By the time someone accesses the USB drive it might be of no use, and the server might be powered off or damaged. And sadly the legal system here can take years (I plan for up to 7 years) before anyone can actually access the data.





  • philpo@feddit.org to Selfhosted@lemmy.world · DNS server · 2 months ago

    I absolutely second Technitium as well. That thing is rock solid, can be used for basically everything, has blocking with a multitude of options, and provides a nice GUI.

    I have it running in a dual DNS setup (main server + a ZimaBlade nowadays) and that shit just works - it's the container that has caused the fewest problems in the last 3 years.

    The API is fairly handy and quite easy - I have it integrated into Home Assistant so I have a "Disable DNS Blocking" button in my "Network control" tab in the app.

    The only downside is that initially it can be quite overwhelming, especially if you are not a DNS guru and just made the step up from AdGuard/Pi-hole - but soon you realise that you actually only need a few fields for basic operations.
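
    Roughly what my Home Assistant button triggers, sketched in Python - the endpoint paths are from memory, so check the Technitium HTTP API documentation before relying on them:

    ```python
    import requests

    TECHNITIUM = "http://dns.local:5380"  # placeholder address of the Technitium admin API

    # Log in to get an API token (paths written from memory - verify against
    # the Technitium docs, they may differ between versions).
    token = requests.get(
        f"{TECHNITIUM}/api/user/login",
        params={"user": "admin", "pass": "changeme"},
        timeout=5,
    ).json()["token"]

    # Pause ad blocking for a while - this is what the "Disable DNS Blocking"
    # button in Home Assistant would call.
    requests.get(
        f"{TECHNITIUM}/api/settings/temporaryDisableBlocking",
        params={"token": token, "minutes": 10},
        timeout=5,
    ).raise_for_status()
    ```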