thanks, I had to restart to see the change. Why is a “text color” being used as a background, though?
definitely doesn’t look as good as the GTK one did, but I hope they simplify the interface at some point. There’s no reason to keep these 3 menu areas; just bundle them together in tabs, or in one side menu that keeps the hierarchy of players and effects under output/input.

edit: it’s actually 4 areas counting the preferences at the top right
mine has this brown color in some places and idk where I could change it in the KDE theme/settings



I think you need something like restic with a retention policy
https://restic.readthedocs.io/en/stable/060_forget.html#removing-snapshots-according-to-a-policy
--keep-{hourly,daily,weekly,monthly,yearly}
other solutions that implement similar policies are kopia and rustic
the advantage of using an off-the-shelf solution is that it’s almost certainly more reliable than what anyone can come up with in a few hours, and it works with incremental backups, so your space requirements are drastically reduced depending on how often you run it.
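as a sketch, a nightly cron job with restic could look like this (repo path, password file, and backed-up directory are all hypothetical; the `backup`/`forget` commands and `--keep-*`/`--prune` flags are real):

```shell
#!/bin/sh
# Hypothetical repo location and credentials; adjust to your setup.
export RESTIC_REPOSITORY=/srv/backups/restic-repo
export RESTIC_PASSWORD_FILE=/etc/restic/password

# Take an incremental snapshot of /home.
restic backup /home

# Apply a retention policy and reclaim the space of expired snapshots.
restic forget --prune \
  --keep-daily 7 \
  --keep-weekly 4 \
  --keep-monthly 12 \
  --keep-yearly 2
```

the nice part is that `forget` only decides *which* snapshots to drop; `--prune` is what actually deletes unreferenced data, so you can dry-run the policy first.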


Intel has some GPUs that are more cost effective than NVIDIA’s when it comes to VRAM.
Arc A770 is selling for $370 in the US, and the new B50 for $399, both with 16GB.
B60 has 24GB, but I’m not sure where to find it.
Why are good features never made defaults? We can make it look almost like htop and it feels like the defaults couldn’t be worse. It’s such a waste to hide good features behind bad defaults.
Same, I have most of those, but selection with shift


every day to once a month, depending on how often I use the server
IME, waiting longer to apply larger updates usually causes more issues than smaller, more frequent ones
lol, the readme says “a not so terrible” but the repo description reads
A terrible web ui and RPC server for yt-dlp


yeah, I adopted it last year and I probably wouldn’t pick it today. I’m glad that, despite that, in the end it’s just S3-compatible storage, and thanks to that it’s not too difficult to replace.


We do it for an immediate benefit, not for some hypothetical apocalyptic scenario born of a half-baked conspiracy theory.
It’s a bit like calling people who camp in the woods, fish, or rock climb “preppers” because those would be useful skills after the collapse of modern civilization.


this is one of the most misused templates
fwiw, I used Kopia for around a year, but eventually the backup got corrupted with a “BLOB not found” error and there was no way to fix it.
similar to this issue, except that nothing would fix or even improve the situation: https://github.com/kopia/kopia/issues/1087
and because it seemed to be an issue with the repo (not just with a snapshot), the remote copy was also borked. I couldn’t even list the snapshots.
I’ve since migrated to Rustic (though Restic might be more reliable today).
This seems to be a similar issue too, but I was nowhere near the scale of this user. There are other similar reports that may or may not share the same root cause, so it’s hard to say how rare this problem is.




Isn’t that creating hard links between source and dest? Hard links only work on the same drive. And I’m not sure how that gives you “time travel”, as in browsing snapshots or file states at the different times you ran rsync.
Edit: ah, the hard link is between dest and the `--link-dest` argument; that makes more sense.
I wouldn’t put fs and backup compression in the same bucket, because they have vastly different requirements. Backup compression doesn’t need to be optimized for fast decompression.


yeah, more often than not the bottleneck I notice is the storage drive itself, not rsync.


yeah, it doesn’t; it’s just for file transfer. It’s only useful if transferring files somewhere else counts as a backup for you.
To me, the file transfer is just a small component of a backup tool.
def possible, Cloudflare DDoS’d their own dashboard a few months ago with some React code
https://blog.cloudflare.com/deep-dive-into-cloudflares-sept-12-dashboard-and-api-outage/