Remember, RAID (or RAID-adjacent) is not a backup.
This. So much this. OP please listen to and understand this.
Even with full mirroring in RAID 1, it’s not a backup. Using the second drive as an independent backup would be so much better than RAID.


You SHOULD NOT do software RAID with hard drives in separate external USB enclosures.
There is no practical benefit to this setup. It just creates a risk of write errors and corruption between the mirrored drives whenever the USB connections hiccup, plus constant traffic overhead as the drives update their mirrors. You will wear out your USB controller and/or the I/O boards in the enclosures, and the result will be needlessly slow and not very fault-tolerant.
If this hardware setup is really your best option, what you should do is use 1 of the drives as the active primary for the server, and push backups to the other drive (with a properly configured backup application, not RAID mirroring). That way each drive is fully independent from the other, and the backup drive is not dependent on anything else. This will give you the best possible redundancy with this hardware.
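As a minimal sketch of what that backup job could look like, assuming Linux and two hypothetical mount points (/mnt/primary and /mnt/backup - substitute your own paths), using rsync for a one-way mirror; a real setup would add versioning with something like restic or borg:

```shell
#!/bin/sh
# One-way backup sketch. The mount points are placeholders - adjust to
# your drives. rsync -a preserves permissions/timestamps and copies only
# changed files; --delete makes the destination an exact mirror.
backup() {
    src=$1
    dst=$2
    mkdir -p "$dst"
    rsync -a --delete "$src"/ "$dst"/
}

# Example (hypothetical paths), e.g. run nightly from cron:
# backup /mnt/primary/data /mnt/backup/data
```

Run it on a schedule (cron or a systemd timer) rather than by hand, so the backup actually stays current.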


I want the Centrino Nano Duo
Whatever you do, and whoever you end up working with, document document document. Take.notes.
And I mean on paper, in a notebook, something that can’t crash or get accidentally deleted and doesn’t require electricity to operate.
You’re doing this for yourself, not for a boss, which means you can take the time to keep track of the details. This will be especially important for ongoing maintenance.
Write down a list of things you imagine having on your network, then classify them as essential vs. desired (needs and wants), then prioritize them.
As you buy hardware, write down the name, model, serial number, and price (so that you can list it on your renter’s/homeowner’s insurance). As you set up the devices, add the MAC address and assigned IP address(es) to each device’s entry, and list the specific services running on it. If you buy something new that comes with a support contract, write down that information too.
Draw a network diagram (it doesn’t have to be complicated or super professional, but visualizing the layout and connections between things is very helpful)
When you set up a service, write down what it’s for and what clients will have access to it. Write down the reference(s) you used. And then write down the login details. I don’t care what advice you’ve heard about writing down passwords, just do it in the notebook so that you can get back into the services you’ve set up. Six months from now when you need to log in to that background service to update the software you will have forgotten the password. If a person you don’t trust has physical access to your home network notebook, you have a much more serious problem than worrying about your router password.
Because they want step-by-step guidance and support, and design help, and long-term support, not just a few questions answered.
This is a job. The kind of work that IT consultants get paid for. A fair rate would be US$100/hr, minimum, for an independent contractor.


You can just use openssl to generate x509 certificates locally. If you only need to do this for a few local connections, the simplest thing to do is create them manually and then manually place them in the certificate stores for the services that need them. You might get warnings about self-signed certificates/unrecognized CA, but obviously you know why that’s the case.
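For reference, the manual generation step is a single openssl command. A sketch with placeholder names - the hostname myserver.local, key size, and lifetime are all assumptions to adjust for your setup:

```shell
# Generate a self-signed certificate and matching private key (no CA).
# CN/SAN and -days are placeholders - set them for your actual host.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout myserver.key -out myserver.crt \
    -days 365 -subj "/CN=myserver.local" \
    -addext "subjectAltName=DNS:myserver.local"

# Inspect what you just created:
openssl x509 -in myserver.crt -noout -subject -dates
```

The subjectAltName extension matters because most modern clients validate the SAN rather than the CN.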
This method becomes a problem when you have more than a handful of services or clients to manage, when certificates need regular renewal, or when you can’t control the trust store on every client that connects.
I’ve used Letsencrypt to get certs for the proxy, but the traffic between the proxy and the backend is still plain HTTP. Do I need to worry about securing that traffic, considering it’s behind a VPN?
In spite of things you may have read, and the marketing of VPN services, a VPN is NOT a security tool. It is a privacy tool, as long as the encryption key for it is private.
I’m not clear on what you mean by “between the proxy and the backend”. Is this referring to the VPS side, or your local network side, or both?
Ultimately the question is, do you trust the other devices/services that might have access to the data before it enters the VPN tunnel? Are you certain that nothing else on the server might be able to read your traffic before it goes into the VPN?
If you’re talking about a rented VPS from a public web host, the answer should be no. You have no idea what else might be running on that server, nor do you have control over the hypervisor or the host system.


Perfect explanation.
Thank you, I try. It’s always tricky to keep network infrastructure explanations concise and readable - the Internet is such a complicated mess.
People like paying for convenience.
Well, I would simplify that to people like convenience. Infrastructure of any type is basically someone else solving convenience problems for you. People don’t really like paying, but they will if it’s the most convenient option.
Syncthing is doing this for you for free, I assume mostly because the developers wanted the infrastructure to work that way and didn’t want it to be dependent on DNS, and decided to make it available to users at large. It’s very convenient, but it also obscures a lot of the technical side of network services which can make learning harder.
This kind of thing shows why tech giants are giants and why selfhosted is a niche.
There’s also always the “why reinvent the wheel?” question, and consider that the guy who is selling wheels works on making wheels as a full-time occupation and has been doing so long enough to build a business on it, whereas you are a hobbyist. There are things that guy knows about wheelmaking that would take you ten years to learn, and he also has a properly equipped workshop for it - you have some YouTube videos, your garage and a handful of tools from Harbor Freight.
Sometimes there is good reason to do so (e.g. privacy from cloud service data gathering), but this is a real balancing act between cost (time and money, both up-front and long-term), risk (privacy exposure, data loss, failure tolerance), and convenience. If you’re going to do something yourself, you should have a specific answer to that question, and probably do a little cost-benefit checking.


But if I’m reading the materials correctly, I’ll need to set up a domain and pay some upfront costs to make my library accessible outside my home.
Why is that?
Because when your mobile device is on the public internet, it can’t reach directly into your private home network. The IP addresses of the servers on your private network are not routable outside of it, so your mobile device can’t talk to them directly. From the perspective of the public internet, the only piece of your private network that is visible is your ISP gateway device.
When you try to reach your Syncthing service from the public internet, none of the routers know where your private Syncthing instance is or how to reach it. To solve this, the Syncthing developers provide discovery servers on the public internet which contain the directions for the Syncthing app on your device to find your Syncthing service on your private network (assuming you have registered your Syncthing server with the discovery service).
This is a whole level of network infrastructure that is just being done for you to make using Syncthing more convenient. It saves you from having to deal with the details of network routing across network boundaries.
Funkwhale does not provide an equivalent service. To reach your Funkwhale service on your private network from the public internet you have to solve the cross-boundary routing problem for yourself. The most reliable way to do this is to use the DNS infrastructure that already exists on the public internet, which means getting a domain name and linking it to your ISP gateway address.
If your ISP gateway had a static address you could skip this and configure whatever app accesses your Funkwhale service to always point to your ISP gateway address, but residential IP addresses are typically dynamic, so you can’t rely on it being the same long-term. Setting up DynamicDNS solves this problem by updating a DNS record any time your ISP gateway address changes.
There are several DynDNS providers listed at the bottom of that last article, some of which provide domain names. Some of them are free services (like afraid.org) but those typically have some strings attached (afraid.org requires you to log in regularly to confirm that your address is still active, otherwise it will be disabled).
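For illustration, the client side of most of these services is just a periodic HTTP request to a per-host update URL. A sketch as a crontab entry - the URL, hostname, and token below are placeholders, since every provider documents its own update endpoint:

```
# crontab fragment (placeholder URL - your provider gives you the real one):
# every 10 minutes, report our current public address to the DDNS provider
*/10 * * * * curl -fsS "https://dyndns.example.com/update?hostname=myhome.example.com&token=YOUR_TOKEN" >/dev/null
```

Many routers and NAS OSes also have a built-in DDNS client that does the same thing.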
They should be powered on periodically if you want to retain data on them long-term. The controller should automatically check cell integrity and remap bad sections as needed.
I’m not sure if just connecting them to power would be enough for the controller to run error correction, or if they need to be connected to a computer. That might be model specific.
What server OS are you using? Are you already using some SSDs for cache drives?
Any backup is better than no backup, but SSDs are really not a good choice for long-term cold storage. You’ll probably get tired of manually plugging them in to check integrity and update the backups pretty fast.


OK, so what is a VPN?
A Virtual Private Network is a virtual network that lives on top of a physical network. In the case of the Internet, basically what happens is that your network traffic goes into the VPN on one side and comes out of the VPN provider’s network somewhere else, rather than out of your ISP’s network. All this really does is move any privacy concerns from your ISP to your VPN, which may or may not protect you from any legal inquiries.
For a more thorough explanation look here: https://www.howtogeek.com/133680/htg-explains-what-is-a-vpn/
Is it possible to use torrent without a VPN?
Certainly, but your torrent traffic will be visible to and inspectable by your ISP. If a copyright holder chooses to, they can subpoena your ISP for the personal information of the person whose IP address matches the infringing traffic they found. Once they have your personal information, they can come after you directly. A VPN might shield against this by changing the apparent IP address associated with your torrent traffic, but then you are at the mercy of the VPN provider and the government of whichever country they operate in.
It should be noted that if you are not paying the bill for the Internet, and you use it for illegal activity, then the person you are putting at risk is the person who pays the bill. It’s their name attached to the ISP records.
If you are caught, or if they just don’t like torrent traffic on their network, the ISP may decide that you are simply too much trouble and it’s not worth keeping you as a customer, and just cut off your service (for your whole house).


Umm, but then your VPN leads to a server rented from a web host which you are paying with (presumably) a credit card, and if they’re reputable at all then you had to register with a government ID. The ones that don’t check ID are the ones that host ransomware gangs and CSAM distributors.
A VPN provides no privacy at all if it’s linked to an IP address or domain name and hardware that is registered to you.
Actual Budget is an open-source envelope-style budgeting tool similar to YNAB. It has a self-hostable syncing service so that you can manage your budget across multiple devices.
The reason you might want to do this is that it’s probably easier to do full account review sitting at your computer, but you might want to track expenses/receipts on your smartphone while you’re away from home.
Would this mini pc be a good homeserver
For what purpose?
Encrypting the connection is good, it means that no one should be able to capture the data and read it - but my concern is more about the holes in the network boundary you have to create to establish the connection.
My point of view is, that’s not something you want happening automatically, unless you manually configured it to do that yourself and you know exactly how it works, what it connects to and how it authenticates (and preferably have some kind of inbound/outbound traffic monitoring for that connection).
Ah, just one question - is your current Syncthing use internal to your home network, or does it sync remotely?
Because if you’re just having your mobile devices sync files when they get on your home wifi, it’s reasonably safe for that to be fire-and-forget, but if you’re syncing from public networks into private that really should require some more specific configuration and active control.
My main reasons are sailing the high seas
If this is the goal, then you need to concern yourself with your network first and the computer/server second.
- You need as much operational control over your home network as you can manage.
- You need to put this traffic in a separate tunnel from all of your normal network traffic and have it pop up on the public network from a different location.
- You need to own the modem that links you to your provider’s network, and the router that is the entry/exit point for your network. You can not use the combo modem/router gateway device provided by your ISP.
- You need to segregate the thing doing the sailing on its own network segment that doesn’t have direct access to any of your other devices.
- You need to plan your internal network intentionally and understand how, when, and why each device transmits on the network.
- You should understand your firewall configuration (on your network boundary, not on your PC).
- You should also get PiHole up and running and start dropping unwanted inbound and outbound traffic.
OpSec first.
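To make the segmentation point concrete, here is a rough nftables forwarding-policy sketch. Everything in it is a placeholder assumption - the VLAN (vlan50), LAN interface (lan0), and VPN tunnel (wg0) names will differ on your router - so treat it as an illustration, not a drop-in config:

```
# /etc/nftables.conf fragment (illustrative; interface names are placeholders)
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy drop;

        # let established/related traffic continue in both directions
        ct state established,related accept

        # the seedbox segment may reach the internet, but only via the VPN tunnel...
        iifname "vlan50" oifname "wg0" accept

        # ...and never the rest of the LAN (already covered by policy drop,
        # stated explicitly here for clarity)
        iifname "vlan50" oifname "lan0" drop
    }
}
```

With a default-drop forward policy, anything you haven’t explicitly allowed between segments is blocked, which is exactly the kind of control this use case needs.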


VPNs as a technology might not be illegal but circumventing the firewall certainly is.
Unless you are very vocal and high profile person no one will black bag you in a country of billion people, lol.
This is a bit of a misunderstanding about how things work in an authoritarian system. Sure, you might fly under the radar for a while, but if you call attention to yourself (say, by getting caught trying to bypass the government firewall) and you are not high-profile, then it is very low-effort to make you disappear. Few will notice, and those that do will stay silent out of fear.
If you are more high-profile you still get black-bagged, you just get released after, with your behavior suitably modified.

Naomi Wu no longer uploads to YouTube.


Depends - how many family members do you have that the PRC might use against you? or who would miss you if the PRC black bagged you?
First and most important:
In the context of long-term data storage
ALL DRIVES ARE CONSUMABLES
I can’t emphasize this enough. If you only skim the rest of my post, re-read the above line and accept it as fundamental truth. “Long-term” means 1+ years, by the way.
It does not matter what type of drive you buy, how much you spend on it, who manufactured it, etc. The drive will fail at some point, probably when you’re least prepared for it. You need to plan around that. You need to plan for the drive being completely useless and the data on it unrecoverable post-failure. Wasting time and money to acquire the fanciest, most bulletproof drives on the market is a pointless resource pit, and has more to do with dick-measuring contests between data-hoarders than with actually protecting your data.
Knife geeks buy $500+ patterned steel chef’s knives with ebony handles and finely ground edges and bla bla bla. Professional kitchens buy the basic Victorinox with the plastic handle. Why? Because they actually use it, not mount it on a wall to look pretty.
The knife is a consumable, not an heirloom. So are your storage drives. We call them “spinning rust” for a reason.
The solution to drive failure is redundancy. Period.
Unfortunately, this reality runs counter to the desire to maximize available storage. Do not follow the path of desire - that way lies data loss and outer darkness. Fault tolerance is your watchword. Component failure is unpredictable, no matter how much money you spend. A random manufacturing defect will ruin your day when you least expect it.
A minimum safe layout is to have 2 live copies of data (one active, one mirror), hot standby for 1 copy (immediate swap-in when the active or mirror fails), and cold standby on the shelf to replace the hot standby when it enters service.
Note that this does not describe a specific number of disks, but copies of data. The minimum to implement this is 4 disks of identical storage capacity (2 live, 1 hot standby, 1 on the shelf) and a server with slots for 3 disks. If your storage needs expand beyond the capacity of 1 disk, then you need to scale up by the same ratio. A disk is indivisible - having two copies of the same data on a disk does not give you any redundancy value. (I won’t get into striping and mucking about with weird RAID choices in this post because it’s too long already, but basically it’s not worth it - the KISS principle applies, especially in small configurations)
This means you only get to use 25% of the storage capacity that you buy. Them’s the breaks. Anything less and you’re not taking your data longevity seriously, you might as well just get a consumer-grade external drive and call it a day.
Buy 4 disks, it doesn’t matter what they are or how much they cost (though if you’re buying used make sure you get a SMART report from the seller and you understand what it means) but keep in mind that your storage capacity is just 1 of the disks. And buy a server that can keep 3 of them online and automatically swap in the standby when one of the disks fails. Spend more money on the server than the disks, it will last longer.
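As a sketch of what “understand what it means” looks like in practice: the helper below (check_smart_report is a hypothetical name) pulls the raw values of a few standard SMART attributes out of a saved `smartctl -A` report. Non-zero reallocated or pending sector counts on a used drive are a red flag:

```shell
# Scan a saved SMART attribute report (output of: smartctl -A /dev/sdX)
# for the attributes that most directly signal a dying disk.
check_smart_report() {
    report=$1
    for attr in Reallocated_Sector_Ct Current_Pending_Sector Offline_Uncorrectable; do
        # the raw value is the last column of the attribute line
        raw=$(awk -v a="$attr" '$2 == a { print $NF }' "$report")
        if [ -n "$raw" ] && [ "$raw" != 0 ]; then
            echo "WARNING: $attr = $raw"
        fi
    done
}

# Example: smartctl -A /dev/sda > report.txt && check_smart_report report.txt
```

Also glance at Power_On_Hours to sanity-check the seller’s claims about the drive’s age.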
Remember, long-term is a question of when, not if.