• 0 Posts
  • 31 Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • I’ve been using cryptpad.fr (the “flagship instance” of CryptPad) for years. It’s…fine. Really, it’s fine. I’m not thrilled with the experience, but it is functional and I’m not aware of any viable alternatives that are end-to-end encrypted.

    It’s based on OnlyOffice, which is basically a heavyweight web-first Microsoft Office clone. Set your expectations accordingly.

    No mobile apps, and the web UI is not optimized for mobile. I mean, it works, but does using the desktop MS Office UI on a smartphone sound like fun to you?

    Performance is tolerable, but if you’re used to Google Sheets, it’s a big downgrade. Some of this is just the necessary overhead of an end-to-end encrypted cloud service. Some of it is because, again, this is a heavyweight desktop UI running in a web browser. It’s functional, but it’s not fast and it’s not pretty.


  • DNS over HTTPS. It performs encrypted DNS lookups over HTTPS using a URL, which allows URL-based customizations that aren’t possible with traditional DNS (e.g. the server could offer /ads or /trackers endpoints so you can choose what to block).

    DNS over TLS (DoT) is similar, but it doesn’t use URLs; you point it at an IP address, like plain DNS. Both are encrypted.
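
    For the curious, here’s a minimal sketch of what a DoH lookup looks like under the hood. It uses Cloudflare’s public JSON endpoint as the example; the URL and response shape are specific to their resolver, which is exactly the kind of URL-level flexibility I mean.

    ```python
    # Minimal DoH lookup sketch against Cloudflare's public JSON API.
    # Other resolvers expose different URLs/paths; that's the flexibility DoH adds.
    import json
    import urllib.request

    def doh_lookup(name: str, record_type: str = "A") -> list[str]:
        url = f"https://cloudflare-dns.com/dns-query?name={name}&type={record_type}"
        req = urllib.request.Request(url, headers={"Accept": "application/dns-json"})
        with urllib.request.urlopen(req) as resp:
            answer = json.loads(resp.read()).get("Answer", [])
        return [record["data"] for record in answer]

    print(doh_lookup("example.com"))  # prints a list of IP address strings
    ```

    A DoT client, by contrast, just gets an IP address and a port; there’s no URL path to hang extra behavior like /ads or /trackers off of.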



  • Honestly, that sounds great.

    My biggest problem with Flatpak is that Flathub has all sorts of weird crap, and depending on your UI it’s not always easy to tell what’s official and what’s just from some rando. I don’t want a repo full of “unverified” packages to be a first-class citizen in my distro.

    Distros can and should curate packages. That’s half the point of a distro.

    And yes, the idea of packaging each app’s dependencies in its own isolated container comes with real downsides: I can’t simply patch a library once at the system level.

    I’m running a Fedora derivative and I wasn’t even aware of this option. I’m going to look into it now because it sounds better than Flathub.



  • But any 50 watt chip will get absolutely destroyed by a 500 watt GPU

    If you are memory-bound (and since OP’s talking about 192GB, it’s pretty safe to assume they are), then it’s hard to make a direct comparison here.

    You’d need 8 high-end consumer GPUs to get 192GB. Not only is that insanely expensive to buy and run, but you won’t even be able to support it on a standard residential electrical circuit, or any consumer-level motherboard. Even 4 GPUs (which would be great for 70B models) would cost more than a Mac.

    The speed advantage you get from discrete GPUs rapidly disappears as your memory requirements exceed VRAM capacity. Partial offloading to the GPU is better than nothing, but if we’re talking about standard PC hardware, it’s not going to be as fast as Apple Silicon for anything that requires a lot of memory (rough math below).

    This might change in the near future as AMD and Intel catch up to Apple Silicon in memory bandwidth and integrated NPU performance. Then you could sidestep the Apple tax, and perhaps pair the chip with a discrete GPU for a meaningful performance boost even with larger models.
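
    The rough math, for anyone curious: batch-1 token generation is mostly memory-bandwidth-bound, because essentially the whole model has to be read once per generated token. The bandwidth and model-size numbers below are approximate spec figures, not measurements.

    ```python
    # Back-of-the-envelope ceiling: tokens/sec ≈ memory bandwidth / bytes read per token,
    # where bytes per token ≈ the model's size in memory for batch-1 generation.
    model_size_gb = 40  # ~70B parameters at ~4-bit quantization (approximate)

    bandwidth_gb_per_s = {
        "Apple M2 Ultra (unified memory)": 800,           # approximate spec figure
        "RTX 4090 (VRAM only)": 1008,                     # approximate; the model must fit in 24GB
        "Dual-channel DDR5 (CPU / partial offload)": 90,  # approximate typical desktop figure
    }

    for hw, bw in bandwidth_gb_per_s.items():
        print(f"{hw}: ~{bw / model_size_gb:.0f} tokens/sec, theoretical ceiling")
    ```

    Which is the point: the moment part of the model spills out of VRAM into system RAM, you’re bound by that ~90 GB/s, not by the GPU’s ~1 TB/s.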



  • vd (VisiData) is a wonderful TUI spreadsheet program. It can read lots of formats, like CSV, SQLite, and even nested formats like JSON. It supports Python expressions and replayable commands.

    I find it most useful for large CSV files from various sources. Logs and reports from a lot of the tools I use can easily be tens of thousands of rows, and it can take many minutes just to open them in GUI apps like Excel or LibreOffice.

    I frequently need to re-export fresh data, so I find myself re-processing and re-arranging it every time, which VisiData makes easy (well, easier) with its replayable command files. For example, I can write a script that opens a raw CSV, adds a formula column, resizes all columns to fit their content, sets the column types as appropriate, and sorts it the way I need it. That way I can go straight from exporting the data to reading it, with no manual preprocessing in between.
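
    To be clear, the replay files are VisiData’s own command-log format, not Python. But as a rough idea of the kind of preprocessing I mean, here’s a plain-Python equivalent of “add a formula column and sort” (the file and column names are made up for illustration):

    ```python
    # Plain-Python sketch of the preprocessing I'd otherwise replay in VisiData:
    # read a raw CSV, add a computed column, sort, and write the result back out.
    # "raw_export.csv", "latency_ms", and "size_bytes" are hypothetical names.
    import csv

    with open("raw_export.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    for row in rows:
        row["throughput"] = float(row["size_bytes"]) / max(float(row["latency_ms"]), 1e-9)

    rows.sort(key=lambda r: float(r["latency_ms"]), reverse=True)

    with open("processed.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
    ```

    (In VisiData itself the same thing is a saved command log that you replay with vd -p, IIRC.)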


  • My experience might be a bit outdated, but I remember finding the default Mac OS X Terminal extremely slow. A few years back I ran an output-heavy command, and the speed difference between displaying the output in the terminal and writing it to a file was orders of magnitude. The same thing on my Linux system was much, much faster. I’m not sure how much of that was due specifically to rendering vs. memory management or something else, though.

    I might see if I can still reproduce this in Sequoia and if Ghostty is faster on Mac.
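
    If anyone wants to try the same comparison, something like this is roughly what I remember running; the line count is arbitrary, so crank it up until the difference is obvious.

    ```python
    # spew.py: dump a lot of lines to stdout and report elapsed wall time on stderr.
    # Run it twice: once printing to the terminal, once redirected to a file,
    # e.g. `python3 spew.py` vs `python3 spew.py > out.txt`.
    import sys
    import time

    N = 1_000_000  # arbitrary; adjust to taste

    start = time.monotonic()
    for i in range(N):
        print(f"line {i}: some moderately long output for the terminal to render")
    elapsed = time.monotonic() - start

    print(f"{N} lines in {elapsed:.2f}s", file=sys.stderr)
    ```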


  • GenderNeutralBro@lemmy.sdf.org to Linux@lemmy.ml · *Permanently Deleted* · 3 months ago

    Laptops are a crapshoot, so I’d recommend sticking with distros that are known to support your specific model.

    Desktops should, in general, just work.

    That said, I’ve never personally had a seamless experience. There’s always something I need to struggle to configure. Usually it’s because I’m very picky and I like things to work MY way. The alternative on Windows would not be that it works my way; it would be that there’s no way to do that, so I’d just have to deal with it. If you’re willing to just roll with the defaults, then yeah, most basic things should just work.

    The biggest gotcha is GPU drivers. Not all distros ship recent kernels with up-to-date drivers. You should be pretty safe with Fedora and derivatives.


  • And you can’t tell when something is active/focused or not because every goddamn app and web site wants to use its own “design language”. Wish I had a dollar for every time I saw two options, one light-gray and one dark-gray, with no way to know whether dark or light was supposed to mean “active”.

    I miss old-school Mac OS when consistency was king. But even Mac OS abandoned consistency about 25 years ago. I’d say the introduction of “brushed metal” was the beginning of the end, and IIRC that was late 90s. I am old and grumpy.


  • Yeah, AMD is lagging behind Nvidia in machine learning performance by like a full generation, maybe more. Similar with raytracing.

    If you want absolute top-tier performance, then the RTX 4090 is the best consumer card out there, period. Considering the price and power consumption, this is not surprising. It’s hardly fair to compare AMD’s top-end to Nvidia’s top-end when Nvidia’s is over twice the price in the real world.

    If your budget for a GPU is <$1600, the 7900 XTX is probably your best bet if you don’t absolutely need CUDA. Any performance advantage Nvidia has goes right out the window if you can’t fit your whole model in VRAM. I’d take a 24GB AMD card over a 16GB Nvidia card any day.

    You could also look at an RTX 3090 (which also has 24GB), but then you’d take a big hit to gaming/raster performance and it’d still probably cost you more than a 7900 XTX. I’m not really sure how a 3090 compares to a 7900 XTX in Blender. Anyway, that’s probably a fairer comparison if you care about VRAM and price.
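
    As a rough rule of thumb for whether a model fits at all: the weights take about (parameter count × bits per weight ÷ 8) bytes, plus headroom for the KV cache and activations. A quick sketch; the 20% overhead factor is a loose guess, not a measured number.

    ```python
    # Rough estimate of model memory footprint vs. available VRAM.
    # The 1.2x overhead factor (KV cache, activations, fragmentation) is a loose guess.
    def weights_gb(params_billion: float, bits_per_weight: float) -> float:
        return params_billion * bits_per_weight / 8  # 1B params at 8 bits ≈ 1 GB

    for params, bits in [(13, 16), (13, 4), (34, 4), (70, 4)]:
        need = weights_gb(params, bits) * 1.2
        print(f"{params}B @ {bits}-bit: ~{need:.0f} GB needed; "
              f"24GB card: {'fits' if need <= 24 else 'no'}, "
              f"16GB card: {'fits' if need <= 16 else 'no'}")
    ```

    That 34B-at-4-bit case fitting in 24GB but not 16GB is basically why I’d take the extra VRAM over a faster chip for this kind of workload.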