• 9 Posts
  • 89 Comments
Joined 1 year ago
Cake day: September 13th, 2024


  • My dream Linux gaming setup would be a fully configured, isolated container that can be run on any host OS. Games are the prime candidates for containerization because they’re all proprietary, and there’s absolutely no reason a game needs user-level permissions or to interact with any other program on the system.

    Imagine if you could just pull the OGC container from a public registry on your distro of choice, run your game, and then just shut it down when you’re done.

    I suspect the biggest barrier would be sufficiently low overhead GPU access though.


  • News flash: the things the Linux (and open-source in general) community fights about are also fought over between developers of proprietary software. You only see some of those fights, though, because the rest are either “trade secrets” you have to sign in blood not to reveal, or take the form of corporate competition, sabotage, and lock-in instead of heated but usually still civil discussion, where bridges and compatibility layers can be built between even completely opposing camps.



  • I think having a TPM enables a number of worthwhile security features.

    But most of those security features place the TPM at the root of trust, something that is SEVERELY undermined by the fact that it is not open source, meaning it is inherently untrustworthy.

    Is it not the one chip for which we should demand, and accept nothing less than, complete openness in its implementation and complete control by the person who owns the device? The types of protection it grants in theory are very good, but the fact that it’s proprietary makes it terrible at actually granting you those protections.


  • Hmm, basically make a container with the VPN client and proxy server, and expose the proxy port through it? Not sure how to route the host server’s traffic through that but I suppose I can just point all the important stuff to the local container’s proxy port. I’ll see if that’s more reliable than modifying the host network configurations. Thanks!

    I’ve also been thinking of switching to Nix so I can configure everything once and rebuild the entire system with all the configurations at any time, without manually setting everything back up through individual commands and file edits. Though I’m not sure that would be more reliable, given networking has broken randomly on Fedora for me when I didn’t even change any network configurations.
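    The container-plus-proxy idea from the first paragraph could look roughly like the sketch below. This is an unverified configuration sketch, not a tested setup: gluetun is a real image that bundles a VPN client with an HTTP proxy, but the provider settings and credentials here are placeholders you would replace with your own.

    ```shell
    # One container runs the VPN client and an HTTP proxy; only the
    # proxy port is published, and only on localhost. NET_ADMIN is
    # needed so the container can create its tunnel interface.
    docker run -d --name vpn-proxy \
      --cap-add=NET_ADMIN \
      -e VPN_SERVICE_PROVIDER=custom \
      -e HTTPPROXY=on \
      -p 127.0.0.1:8888:8888 \
      qmcgaw/gluetun

    # Then point individual services at the proxy instead of rerouting
    # the whole host's traffic:
    curl -x http://127.0.0.1:8888 https://example.com
    ```

    If the VPN inside the container drops, only the containerized tunnel breaks; the host’s own network configuration is never touched.
    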





  • My biggest issue with Windows is the lack of control I have over the actual hardware I own. But I don’t own my work computer to begin with, nor am I entitled to full control over it, so there it doesn’t matter.

    I do use WSL, but mainly because I’m more familiar with Bash than PowerShell, so I don’t have to constantly figure out how PowerShell does things I already know how to do.

    It’s the same reason I have no problem using my company’s OneDrive for work files when I go out of my way to avoid putting any of my personal data on the cloud. It’s their data and they don’t care so I don’t care either.

    It’s also nice because I can set up a Linux-only file server at home with something like SSHFS, and the Windows computer can’t even see it: it has no SSH access and doesn’t support that network share protocol at all. If I had an SMB share instead, it would show up on my work computer, because Windows autodetects those.


  • parallel, for easy multithreading right on the command line. This is what I wish were included in every programming language’s standard library: a dead-simple parallelization function that takes a collection, an operation to perform on the members of that collection, and optionally the max number of threads (defaulting to the number of hardware threads available on the system), and just does it without needing to manually set up threads and handlers.
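    As a sketch of that “dead-simple parallel map” (the filenames are hypothetical; xargs -P is shown alongside because it ships with findutils nearly everywhere, even where GNU parallel isn’t installed):

    ```shell
    # GNU parallel: run gzip on every .log file, one job per CPU core:
    #   parallel gzip ::: *.log
    # The same fan-out with xargs -P, capped at 4 concurrent jobs:
    printf '%s\n' alpha beta gamma | xargs -P 4 -I{} echo "processed {}"
    ```

    With -P the jobs run concurrently, so output order is not guaranteed; pipe through sort if you need it deterministic.
    
    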

    inotifywait, for seeing what files are being accessed/modified.

    tail -F, for a live feed of a log file.

    script, for recording a terminal session complete with control and formatting characters and your inputs. You can then cat the generated file to get the exact output back in your terminal.
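    For example (the -q flag suppresses script’s start/done banners; the output filename is arbitrary):

    ```shell
    # Record a single command's output, control characters and all:
    script -q -c 'echo recorded output' demo.typescript
    # Play it back exactly as it appeared in the terminal:
    cat demo.typescript
    ```

    Run script with no -c to record an interactive session instead; exit the shell to stop recording.
    
    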

    screen, for starting a terminal session that keeps running after you close the window or SSH connection and can be re-attached with screen -x.

    Finally, a more complex command I often find myself repeatedly hitting the up arrow to get:

    find . -type f -name '*' -print0 | parallel --null 'echo {}'

    Recursively lists every file in the current directory and uses parallel to perform some operation on them. The {} in the parallel string will be replaced with the path to a given file. The '*' part can be replaced with a more specific filter for the file name, like '*.txt'.


  • Is it possible to use LUKS with a password with a Windows NTFS partition and just have GRUB decrypt it to let Windows boot? Don’t intend to dual boot Windows ever but just curious.

    Frankly I trust a password stored in my brain way more than whatever keys the TPM is storing. No way something being pushed this hard by Westoid tech corporations doesn’t have a backdoor that just unlocks everything for “approved” parties.







  • I tried using smartctl, but it doesn’t seem to like the fact that the drives are in a USB enclosure and says “unknown USB bridge”. Trying smartctl -d sat does give some SMART information and reports the “overall-health self-assessment test result” as passed for both drives based on “Attribute checks”, but I’m not sure whether it actually passed or smartctl just can’t see the failing information. It also prints “SMART status not supported: Incomplete response, ATA output registers are missing” above the passed result, which seems to indicate it’s missing the data it needs for a full assessment.

    I run Pi-Hole and Ollama in containers, but neither have mount points or volumes on the hard drives, only the system SSD.

    One drive is a fairly new Seagate IronWolf Pro, but the other is a refurbished server hard drive so if one is dying it’s probably that one, though the stuff I actually care about is copied on both drives and a third one that’s offline and unplugged.

    The weird thing is that this only started happening when I reinstalled the OS, but like I said, I reinstalled with a newer version, so that might be the cause? Maybe some disk/filesystem implementation changed and now does things automatically when the drives are idle that version 42 didn’t do? But I feel like that would still trigger the indicators.

    My next step is probably to use inotify to look at file accesses, experiment with mounting only one drive at a time to see which one clicks (or whether they all do), and maybe even connect the drives to another computer over SATA to do a full SMART check.

    Thank you!