DigitalDilemma

  • 0 Posts
  • 76 Comments
Joined 2 years ago
Cake day: July 22nd, 2023

  • I can understand that view, but I’ve personally seen cases where it absolutely can be this, so I respectfully disagree. I think what OP describes is more likely to be hardware than the OS.

    Firstly, Linux lives on a different drive. A dying drive can freeze and take down its host, regardless of OS.

    Secondly, Linux uses memory very differently from Windows, especially when it comes to caching the filesystem. Linux may be touching memory that Windows never reaches.

    We also don’t know what loads OP puts on his computer under each OS. Maybe he games on Windows, or maybe he runs Linux full tilt for LLM/compute work. Each does very different things and taxes different aspects of the hardware.

    It’s simply not safe to assume anything when diagnosing intermittent hardware problems. The only reliable method is methodical testing and isolation - something like the checks below.
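
    A sketch of the kind of first-pass isolation I mean, assuming smartmontools and memtester are installed; the device name is an example, so substitute your own:

        # Drive health: reallocated/pending sectors are the classic dying-drive tells
        sudo smartctl -a /dev/sda
        # Kick off an extended self-test; check the result later with smartctl -a
        sudo smartctl -t long /dev/sda
        # Userspace RAM test: 1 GiB, 3 passes (memtest86+ from boot media is more thorough)
        sudo memtester 1024M 3
        # Kernel errors from a previous boot that froze
        journalctl -k -b -1 | grep -iE 'error|fail|mce'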



  • We did experiment with local models. They were okay, if a little slow with the resources we allocated for testing. Ultimately, though, we paid for Copilot. I’m still a little sceptical that it won’t leak data, despite the assurances, so I clean anything sensitive before pasting.
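
    By “clean” I mean something like this hypothetical pre-paste scrub; the patterns, and the hostname scheme, are examples to adapt to your own environment:

        # Redact IPs, emails and internal hostnames before logs leave the building.
        # 'prod|db|web' is a made-up naming scheme - swap in whatever yours looks like.
        sed -E \
          -e 's/[0-9]{1,3}(\.[0-9]{1,3}){3}/REDACTED_IP/g' \
          -e 's/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+/REDACTED_EMAIL/g' \
          -e 's/(prod|db|web)[a-z0-9-]*/REDACTED_HOST/g' \
          suspect.log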

    As for best models - generally GPT-4 or 5 is my go-to, but the others have their uses. I tend to stick with one until it annoys me, then move on. Claude’s pretty good for code help, imo, but there’s not really a huge difference between them.

    What are your experiences?


  • Sysadmin here - this is my usual flow for various distros:

    1. As /u/FigMcLargeHuge mentions, check recent logfiles in /var/log - notably /var/log/messages (EL) and /var/log/syslog (Debian), but anything recently written is fair game.

    2. journalctl - more and more things are moving to binary logging. If you know the service, journalctl -u servicename restricts output to just that unit; add a -f to tail it for ongoing logs.

    3. dmesg -T - especially at system level, this captures hardware/low-level kernel logs (-T prints human-readable timestamps rather than seconds since boot). Worked examples of all three are sketched after this list.

    4. Once you have some logs that you think are related, but don’t know WTF they actually mean, you have two options. The first is to google likely strings. This is… ineffective much of the time - accidental misinformation and outdated advice are increasingly common. The answer might be there, but it takes time and it can be frustrating to weed out the cruft.

    The better way (IMO, and people downvote me for saying this) is to use AI. Get a few lines of logs containing the errors, check them for confidential information, and simply paste the suspect lines into ChatGPT, Gemini, Claude, Copilot, whatever. No need for context - it’ll figure that out. The LLM will, 4 times out of 5, identify the problem very quickly.

    Now, once it’s identified the problem, it will offer to fix it for you. This is where you’ve got to be on your toes, as LLMs are really quick to give bad advice at this level. But that first triage is nearly always worth doing and helps shape your own view of what’s going on. AI is still useful for the fix itself, but do understand what it’s telling you to do.
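
    The worked examples promised above - unit and file names are placeholders for whatever you’re chasing:

        # 1. Recent plain-text logs (use /var/log/messages on EL systems)
        sudo tail -n 200 /var/log/syslog
        # 2. The binary journal, restricted to one unit; -f tails it live
        journalctl -u nginx.service --since "1 hour ago"
        journalctl -u nginx.service -f
        # 3. Kernel/hardware messages with human-readable timestamps
        sudo dmesg -T | tail -n 50
        # Before pasting anywhere external, eyeball for hostnames, IPs and usernames.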


  • It’s technology like this that I think will become more and more important as governments seek to restrict access to large parts of the internet. The UK and Australia are forging ahead with censorship, and the EU is well on its way. The US already does some censorship, as do large parts of Asia and Russia.

    No matter the reason given, it’s always about control. So less easily censored technologies will be very useful for anyone who wants the ability to research the truth, or at least alternate points of view.


  • I’ve recently done almost exactly this, although I used an ESP8266 running esphome. It powers two 120mm fans with various speed settings (including 0 rpm via PWM), driven by both the power state of various devices in the cupboard where it’s housed and the temperature. All speeds and controls are exposed to Linux via the Home Assistant API, which has its own alerts and dashboards. I wanted this to run fully independently of the machines it’s cooling.
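
    For a flavour of the Linux side, this is a hypothetical read of the fan’s state over Home Assistant’s REST API; the entity name and host are placeholders for your install:

        # Query one entity's state; HA_TOKEN is a long-lived access token
        # created in your Home Assistant profile.
        curl -s \
          -H "Authorization: Bearer $HA_TOKEN" \
          http://homeassistant.local:8123/api/states/fan.cupboard_fan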

    Not worth pursuing if you don’t already have an HA install, but if you do, it’s perhaps worth a thought as a different approach.


  • I moved my wife’s laptop to Debian with Cinnamon as the desktop. She loves it, and she’s as technophobic a person as I know…

    Auto-login, automated updates, remote backups. She just has to open the lid and Firefox is there, which is 95% of what she wants. LibreOffice is around for the remaining 5%.
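
    The automated updates are just Debian’s unattended-upgrades - one way to set it up:

        # Install and enable the periodic upgrade job; the low-priority
        # dpkg-reconfigure prompt simply asks whether to turn it on.
        sudo apt install unattended-upgrades
        sudo dpkg-reconfigure -plow unattended-upgrades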

    This is someone who used to get angry at Windows forced updates and reboots, so not having any of that improved her quality of life.


  • “No one said ‘different bad’”

    Plenty of people did. “What’s the point of change?”, “I’m happy with Sys-V”, “I don’t like Poettering”, “Lennart is too powerful”, and a lot more irrelevant and personal attacks.

    Please don’t accuse me of gaslighting whilst gaslighting me in return. I was there - I lived through the worst of the Debian wars, saw some great people leave the project, and saw a side of some friends that I really didn’t like. But that war is done and I have zero interest in continuing it, so I’ll leave this here.


  • I work four days a week on a remote Windows VM. It has everything I need, and I remote from /that/ onto whatever other VM I might need, connecting over a VPN using, well, anything. As you’ve pointed out, the local machine doesn’t need much in the way of specs, although in my case I have three monitors, all given over to the remote. It’s a clean way to separate work’s environment and network from my own, and it’s a very common work pattern. The hypervisor there is VMware, but that doesn’t matter.

    But… gaming is a different story. There is latency over the connection, and audio/graphics lag would make FPS and GPU-heavy games particularly poor. I don’t know of a way to overcome that entirely, although game-streaming services exist, so presumably it is possible.
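
    If you want a rough feel for whether a given link could cope, measure the round trip first; the hostname is a placeholder, and sustained RTTs much above ~20-30 ms are noticeable in an FPS:

        # Round-trip time to the remote end
        ping -c 20 work-vm.example.com
        # Per-hop latency and packet loss, if mtr is installed
        mtr -rwc 50 work-vm.example.com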