

BiglyBT (aka Vuze, previously Azureus) has a decentralised chat feature akin to IRC. It’s not very active, but it’s technically very interesting and still working. Rooms can be per-torrent, per-language, per-region, general purpose, or even custom.
It’s a crying shame more people didn’t adopt AceStream. It functioned exceptionally well when the streaming sites put up links (basically hashes) instead of just HTML5 web streams. The fact it wasn’t open source probably didn’t help, but it did have a modified VLC client and was truly P2P.
TERF island or the 4th Reich
I get wanting to boycott Israelis due to what their government is doing, but you do realise blocking (the good-ish) half of America is gonna seriously hamper your ability to complete torrents? And if you think the UK is full of TERFs, you probably need to verify your news sources and statistical acumen. You shouldn’t tar everyone with the same brush…
Aside from this, on technical merit this is gonna have absolutely zero effect - because in a torrent, you’re part of a mesh - those bits will instantly funnel through peers one way or another, and you’ll never convince enough of the swarm to have an impact. Presumably, the ‘content’ you’re torrenting aligns with your personal ethics anyway, and you’re not spreading hate. So why bother?
Complete and utter waste of time.
WTF! How did it hit target already? Hope there wasn’t any bot farming involved…
Otherwise, pleasantly surprised!
announced
What announcement? There’s been a new Personal Plus plan around for several months already - introduced without much fanfare, and simply brings the user count from 3 to 6 for a fixed small fee. Presumably this is due to feedback from personal users wanting to contribute something other than nothing.
Where do you see the free Personal plan has changed at all?
custom domain
From what I gather, this refers to the email address you sign up with.
If you use something like a non-gmail email address when signing up, it starts you off on the business plan with a trial (which you can instantly change to free). (Note: they’re gonna change this auto-detection thing with shared domains soon due to a security hole.)
I believe you can still use a custom domain (instead of the randomised *.ts.net provided one) with DNS lookups in your tailnet, on the personal (free) plan.
Do ignore me then, I assumed you might know the reference and only meant it in good humour. :) (Without spoiling anything - in the unlikely event you might some day watch it - Mr Milchick is a character that uses ‘big words’. Your choice of words struck a chord.) I will say though, you’re seriously missing out. The cinematography alone is brilliant and the acting exceptional.
As you were, Mr Milchick.
Multiple backups may be kept.
Nice work, but if I may suggest - it lacks hardlink support, so it’s quite wasteful in terms of disk space - the number of ‘tags’ (snapshots) you can keep will be extremely limited.
At least two robust solutions that use rsync+hardlinks already exist: rsnapshot.org and dirvish.org (both written in perl). There’s definitely room for backup tools that produce plain copies, instead of packed chunk data like restic and Duplicacy, and a python or even bash-based tool might be nice, so keep at it.
However, I liken backup software to encryption - extreme care must be taken when rolling and using your own. Whatever tool you use, test test test the backups. :)
Still using Private Internet Access (PIA).
Honestly, dunno why they’ve fallen out of fashion, other than the FUD about being owned by an unsavoury parent company - the most important matter to me is whether they keep logs, which they don’t. One of the few VPN companies tested on this, in court, and in a recent audit. Plus still extremely cheap (if you go for 3yr+3mo).
Port forwarding works with this docker NAS stack. It doesn’t use gluetun, but there’s a specialised docker-wireguard-pia container as part of the stack, with a script that handles port changes. Been flawless.
There’s no point doing anything fancy like that - WireGuard over Tailscale is redundant, as Tailscale is literally WireGuard with NAT traversal and authentication bolted on. Unless you enable subnetting, it can’t get more secure than that.
And even if you do enable subnetting (which you might wanna do if you need access to absolutely everything), you can use Tailscale ACLs to keep tighter control - say, from specific (tagged) devices.
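For instance, an ACL that only lets tagged devices reach an advertised subnet might look something like this, in Tailscale’s HuJSON policy format (the tag name and subnet here are made-up examples):

```json
{
  "tagOwners": {
    // Hypothetical tag; only tailnet admins may apply it to devices.
    "tag:admin": ["autogroup:admin"]
  },
  "acls": [
    // Only admin-tagged devices may reach the advertised home subnet.
    {"action": "accept", "src": ["tag:admin"], "dst": ["192.168.1.0/24:*"]}
  ]
}
```

Everything not explicitly accepted is denied, so untagged devices on the tailnet simply can’t route into the subnet at all.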
Won’t take that long before the enshittification is complete.
If you can’t find the original .torrent, one way to find it again is to use the BiglyBT client’s Swarm Discoveries feature to search its DHT for the exact file size in bytes (of the main media file within). You may be able to find one or more torrents and simultaneously seed them with Swarm Merging too.
As well as the force recheck method others have mentioned, you can also tell BiglyBT to use existing files elsewhere when adding the torrent, which can copy the data onto there for you without risking overwriting the original files.
BiglyBT for manual dls on desktop, qBittorrent with the arrs
100% this. OP, whatever solution you come up with, strongly consider disentangling your backup ‘storage’ from the platform or software, so you’re not ‘locked in’.
IMO, you want to have something universal, that works with both local and ‘cloud’ (ideally off-site on your own/family/friend’s NAS; far less expensive in the long run). Trust me, as someone who came from CrashPlan and moved to Duplicacy 8 years ago, I no longer worry about how robust my backups are, as I can practice 3-2-1 on my own terms.
While you can do command line stuff with Clonezilla, I think what they’re referring to is the text-based guided user interface, which doesn’t seem to differ much at all from the Rescuezilla GUI, which only looks marginally prettier. However, there are a few other useful tools in there, and a desktop environment, so it’s still a bit nicer to use.
Yep, I guess it depends on how much data of interest is on the drive. You can hook it up to dmde with a ddrescue/OpenSuperClone-mounted drive, which can let you index the filesystem while it streams content to the backup image. It reads and remembers sectors already copied, and you can target specific files/folders so you don’t have to touch most of the drive.
Have 3x such 3TB WD Reds with ~100K power-on hours each on average, 34.77 years total.
They spent most of that time in an HP Microserver N54L running Windows 2012 R2 with DrivePool, Scanner and SnapRAID. Now they’re in a custom-built Proxmox box in RAIDZ1. Have no intention of retiring them. :)
Device Model: WDC WD30EFRX-68AX9N0
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   179   178   021    Pre-fail  Always       -       6033
  4 Start_Stop_Count        0x0032   098   098   000    Old_age   Always       -       2163
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   001   001   000    Old_age   Always       -       110229
 10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   100   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       123
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       35
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       2127
194 Temperature_Celsius     0x0022   115   088   000    Old_age   Always       -       35
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       0