

If you want to get really into the weeds with ZFS, you can use `zfs send` to copy your snapshots into a dataset stored on your external drive.
You can enable encryption and compression on that external dataset as well.
This approach uses snapshots, gives you block-level incremental backups, and allows encryption and compression using only ZFS tooling.
You'd have to script it yourself, though (it's possible someone has already built this into a backup application).
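A rough sketch of what that looks like (the pool and dataset names `tank/data` and `backup/data` are made-up examples):

```shell
# Snapshot the live dataset, then send a full copy to the dataset
# on the external drive
zfs snapshot tank/data@monday
zfs send tank/data@monday | zfs receive backup/data

# Subsequent sends can be incremental (-i): only the blocks that
# changed between the two snapshots are transferred
zfs snapshot tank/data@tuesday
zfs send -i tank/data@monday tank/data@tuesday | zfs receive backup/data
```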





You can use the `zfs send` command to copy snapshots from one dataset to another. Your backup could be a ZFS dataset stored on an external drive (or drives) containing the snapshots of your online dataset. You could then encrypt and compress the backup dataset (by setting the appropriate ZFS dataset properties) for size efficiency and security.
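Setting those properties looks something like this (dataset names are examples; note that encryption can only be enabled when a dataset is created, while compression can be changed at any time):

```shell
# Encryption must be chosen at dataset creation time...
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase \
    -o compression=zstd external/backup

# ...but compression can be changed later on an existing dataset
zfs set compression=lz4 external/backup

# Verify the properties took effect
zfs get encryption,compression external/backup
```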
To restore the backup you would use `zfs send` to move your backed-up snapshots into a new dataset on your new, un-disastered hardware. Since this is all done via the CLI, you could write a bash script to create periodic snapshots, one to back up snapshots to the external dataset, and another to delete old snapshots from your dataset. Toss 'em in your cron service of choice (or use systemd timers) and you've got a whole ZFS-native backup system.
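A bare-bones sketch of such a script, combining the snapshot, send, and prune steps into one (the dataset names, snapshot naming scheme, and retention count are all placeholder choices; this assumes GNU coreutils on Linux):

```shell
#!/usr/bin/env bash
# Hypothetical ZFS backup script: snapshot, incremental send, prune.
set -euo pipefail

SRC="tank/data"     # live dataset (example name)
DST="backup/data"   # dataset on the external drive (example name)
KEEP=14             # number of snapshots to retain on the live dataset

NOW="$(date +%Y-%m-%d-%H%M)"

# Find the most recent existing snapshot to use as the incremental base
PREV="$(zfs list -t snapshot -o name -s creation -H "$SRC" | tail -n 1)"

# Take a new snapshot of the live dataset
zfs snapshot "${SRC}@${NOW}"

# Send incrementally if a base snapshot exists, otherwise send a full stream
if [ -n "$PREV" ]; then
    zfs send -i "$PREV" "${SRC}@${NOW}" | zfs receive "$DST"
else
    zfs send "${SRC}@${NOW}" | zfs receive "$DST"
fi

# Prune old snapshots on the live dataset beyond the retention count
zfs list -t snapshot -o name -s creation -H "$SRC" \
    | head -n -"$KEEP" \
    | xargs -r -n 1 zfs destroy
```

Pointed at by a crontab entry (e.g. `0 3 * * * /usr/local/bin/zfs-backup.sh`) or a systemd timer, this covers the whole loop described above.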
There may be backup software that'll do this for you. I've seen that Timeshift supports snapshot-based backups for btrfs, so you can probably find a GUI app to handle the automation.