

Agreed. Oddly enough, my Meshtastic contacts are much farther away than my farthest MeshCore contacts but MeshCore seems to be much livelier.


I fully switched to Linux in 2024, my last desktop Linux experience before that being at least five years prior.
On the other hand, I’m happier than expected with Wayland and PipeWire. They just work with little fuss. Sure, I’m a KDE user and Wayland is reportedly less fun outside the big DEs, but for me it just works.


Steam tends to have massive issues with permissions for games on NTFS partitions. You might’ve run into that.
GUI disk space analyzers are absolutely amazing.
For those who prefer KDE and/or donut graphs, Filelight has you covered.


I run Garuda because it’s a more convenient Arch with most relevant things preinstalled. I wanted a rolling release distro because in my experience traditional distros are stable until you have to do a version upgrade, at which point everything breaks and you’re better off just nuking the root partition and reinstalling from scratch. Rolling release distros have minor breakage all the time but don’t have those situations where you have to fix everything at the same time with a barely working emergency shell.
The AUR is kinda nice as well. It certainly beats having to manually configure/make obscure software myself.
For the desktop I use KDE. I like the traditional desktop approach and I like being able to customize my environment. Also, I disagree with just about every decision the Gnome team has made since GTK3 so sticking to Qt programs where possible suits me fine. I prefer Wayland over X11; it works perfectly fine for me and has shiny new features X11 will never have.
I also have to admit I’m happy with systemd as an init system. I do have hangups over the massive scope creep of the project but the init component is pleasant to work with.
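To give a sense of what I mean by pleasant: a complete service definition can be this short (the unit name and command are made up for illustration):

```ini
# /etc/systemd/system/nightly-backup.service (hypothetical unit)
[Unit]
Description=Nightly backup job
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/run-backup
```

Drop it in, run systemctl daemon-reload, start it with systemctl start nightly-backup, and journalctl -u nightly-backup collects the logs. Compare that to a few hundred lines of classic init-script boilerplate.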
Given that, after a long spell of using almost exclusively Windows, I came back to desktop Linux only after Windows 11 was announced, I’m quite happy with how well everything works. Sure, it’s not without issues, but neither is Windows (or macOS, for that matter).
I also have Linux running on my home server but that’s just a fire-and-forget CoreNAS installation that I tell to self-update every couple months. It does what it has to with no hassle.


To quote that same document:
“Figure 5 looks at the average temperatures for different age groups. The distributions are in sync with Figure 4 showing a mostly flat failure rate at mid-range temperatures and a modest increase at the low end of the temperature distribution. What stands out are the 3 and 4-year old drives, where the trend for higher failures with higher temperature is much more constant and also more pronounced.”
That’s what I referred to. I don’t see a total age distribution for their HDDs, so I have no idea if they simply didn’t have many HDDs in the three-to-four-year range, which would explain why they didn’t see a correlation in the total population. However, they do show a correlation between high temperatures and AFR for drives after more than three years of usage.
My best guess is that HDDs wear out slightly faster at temperatures above 35-40 °C so if your HDD is going to die of an age-related problem it’s going to die a bit sooner if it’s hot. (Also notice that we’re talking average temperature so the peak temperatures might have been much higher).
In a home server where the HDDs spend most of their time idling (probably even below Google’s “low” usage bracket) you probably won’t see a difference within the expected lifespan of the HDD. Still, a correlation does exist and it might be prudent to have some HDD cooling if temps exceed 40 °C regularly.


Hard drives don’t really like high temperatures for extended periods of time. Google did some research on this way back when. Failure rates start going up at an average temperature of 35 °C and become significantly higher if the HDD is operated beyond 40 °C for much of its life. That’s HDD temperature, not ambient.
The same applies to low temperatures. The ideal temperature range seems to be between 20 °C and 35 °C.
Mind you, we’re talking “going from a 5% AFR to a 15% AFR for drives that saw constant heavy use in a datacenter for three years”. Your regular home server with a modest I/O load is probably going to see much less in terms of HDD wear. Still, heat amplifies that wear.
I’m not too concerned myself, despite the fact that my server’s HDD temps are all somewhere between 41 and 44 °C. At 30 °C ambient there’s not much better I can do, and the HDDs spend most of their time idling anyway.


Given the usual quality of BIOS/UEFI option descriptions it’s remarkably close to being sensible. I would’ve expected something like “enables limiting CPUID maximum value”.


Yeah, the 13 feels a lot more solid. The 16 pays a certain price for its enhanced configurability. Honestly, though, a full-size touchpad module would go a long way to fixing that. The two spacers next to the keyboard look fine (if the keyboard is centered) but the touchpad spacers look less great.


I have a Framework 16. Is it as well-built, efficient, or quiet as a MacBook Pro? Nope. But if something breaks I can easily replace it, and I can upgrade it without having to throw everything away. Also, hot-swappable ports. That’s nice too.
It’s all about trade-offs in the end.


That does make encryption way less appealing to me. On one of my machines, / and /home are on different drives, and parts of ~ are on yet another one.
I consider the ability to mount file systems in random folders or to replace directories with symlinks at will to be absolutely core features of unixoid systems. If the current encryption toolset can’t easily facilitate that then it’s not quite RTM for my use case.
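For illustration, a hypothetical /etc/fstab for the kind of split layout I described (all UUIDs, devices, and paths here are made up):

```
# / and /home on different drives, part of ~ on a third one
UUID=0000-root   /                 ext4  defaults,noatime  0 1
UUID=0000-home   /home             ext4  defaults,noatime  0 2
UUID=0000-bulk   /home/me/media    ext4  defaults,noatime  0 2
```

The symlink variant would be something like ln -s /mnt/bulk/media ~/media. An encryption toolset that wants to be a first-class citizen has to cope with layouts like these.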


If you use a .local domain, your device MUST send its queries to the mDNS multicast address (224.0.0.251 or FF02::FB) and MAY also ask a regular DNS server. Successful resolution without mDNS is not an intended feature but something that just happens to work sometimes. There’s a reason why the user interfaces of devices like Ubiquiti gateways warn against assigning a name ending in .local to any device.
I personally have all of my locally-assigned names end with .lan, although I’m considering switching to a sub-subdomain of a domain I own (so instead of mycomputer.lan I’d have mycomputer.home.mydomain.tld). That would make the names much longer but would protect me against some asshat buying .lan as a new gTLD.
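On the protocol side, mDNS is mostly plain DNS wire format sent to a fixed multicast group. A minimal Python sketch of building an mDNS question for a .local name (the hostname is made up, and actually sending/receiving over UDP is left out):

```python
import struct

MDNS_GROUP = ("224.0.0.251", 5353)  # IPv4 mDNS multicast address and port

def mdns_query(name: str) -> bytes:
    """Build a minimal mDNS question for an A record (standard DNS wire format)."""
    # Header: ID=0, flags=0, QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack("!6H", 0, 0, 1, 0, 0, 0)
    # QNAME: each label length-prefixed, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    # QTYPE=A (1), QCLASS=IN (1)
    question = qname + struct.pack("!2H", 1, 1)
    return header + question

packet = mdns_query("mycomputer.local")
# To actually resolve the name you would send `packet` via UDP to MDNS_GROUP
# and listen for the multicast response on port 5353.
```

The point is that nothing in the packet itself marks it as “special”; what makes .local work is that resolvers are supposed to route these questions to the multicast group instead of a unicast DNS server.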


It is a well-designed system font. Say what you will about Microsoft but they do know how to make a good font or two.


There is one metric where Intel is better and that’s Thunderbolt. You typically get more full-featured Thunderbolt ports with an Intel CPU. Of course whether that point is relevant is highly dependent on your use case.


Ah, so they actually got that implemented. Nice.


Garuda for me. The reasons are similar; just replace some optimization with some convenience. It’s a bit garish by default but pleasant to use.


Flatpak has its benefits, but there are tradeoffs as well. I think it makes a lot of sense for proprietary software.
For everything else I do prefer native packages since they have fewer issues with interop. The space efficiency isn’t even that important to me; even if space issues should arise, those are relatively easy to work around. But if your password manager can’t talk to your browser because the security model has no solution for safe arbitrary IPC, you’re SOL.
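The password-manager case is concrete: browser integration usually works via native messaging, where the browser launches a helper binary described by a small manifest along these lines (all names and paths here are hypothetical):

```json
{
  "name": "org.example.passwordmanager",
  "description": "Hypothetical native messaging host for a password manager",
  "path": "/usr/lib/example-pm/native-host",
  "type": "stdio",
  "allowed_extensions": ["browser-integration@example.org"]
}
```

A sandboxed Flatpak browser can’t simply execute that host binary on the host system, which is exactly the kind of arbitrary IPC the sandbox model has trouble accommodating.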


Hoo boy, you weren’t kidding. I find it amazing how quickly this went from “the kernel team is enforcing sanctions” to an unfriendly abstract debate about the definition of liberalism. I shouldn’t be amazed, really, but I still am.


Oh yeah, the equation completely changes for the cloud. I’m only familiar with local usage where you can’t easily scale out of your resource constraints (and into budgetary ones). It’s certainly easier to pivot to a different vendor/ecosystem locally.
By the way, AMD does have one additional edge locally: They tend to put more RAM into consumer GPUs at a comparable price point – for example, the 7900 XTX competes with the 4080 on price but has as much memory as a 4090. In systems with one or few GPUs (like a hobbyist mixed-use machine) those few extra gigabytes can make a real difference. Of course this leads to a trade-off between Nvidia’s superior speed and AMD’s superior capacity.


When you take away the garish KDE theme the gaming spin ships with, it’s pretty much just an opinionated ready-to-go gaming Arch with a bunch of convenience tools. If that’s what you want then Garuda is pretty neat.