• 0 Posts
  • 109 Comments
Joined 5 years ago
Cake day: February 15th, 2021


  • On the upside, the end user needs less data for the same content. This is particularly interesting on 4G/5G and restrictive data plans, or when accessing servers over a weak connection. It also helps avoid waiting for the video to “buffer” mid-playback.

    But yes, I agree each iteration has diminishing returns, with a higher bump in requirements. I feel that’s a pattern we see often.


  • It’s not like improper use of “steal” is unheard of; people say “I’m gonna steal that” and similar all the time, even about things openly given away for free. And considering it’s quite clear that the MIT license allows others to take without sharing back (that’s the main difference from the GPL), I’m quite sure the commenter was aware that it wasn’t really theft, yet chose that word with the intention to disparage the practice rather than as a fair descriptor.

    So yes, you’re right, it isn’t theft… but I don’t think that was the point of the comment.



  • Compression efficiency and speed are often a trade-off. H.266 is also much slower than AV1 under the same conditions. Hopefully more AV1 hardware encoders will arrive to speed things up… but at least AV1 decoders are already relatively common.

    Also, the gap between H.265 and AV1 is larger than the gap between AV1 and H.266, so I’d argue it’s the other way around. AV1 is reported to achieve roughly 30–50% bitrate savings over H.265 at the cost of speed. H.266’s differences from AV1 are minor: it’s reported to reach a similar range, just skewed more toward the 50% end, at the cost of even lower speed. Once AV1 encoding hardware is more common and the slower, higher-quality AV1 presets become viable, I’d say it will be a good balance for most cases (there’s a rough encode sketch after this comment).

    The thing is that H.26x has a consortium of corporations behind it, with connections and an interest in cashing in on their investment, so it gets a lot of traction for getting hardware out.
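
    For a rough feel of that trade-off, here’s a hedged sketch of two encodes with ffmpeg, assuming a reasonably recent build with libx265 and SVT-AV1; the filenames, CRF and preset values are placeholders, not tuned recommendations:

    ```
    # H.265/HEVC: comparatively fast, and widely supported by hardware decoders
    ffmpeg -i input.mkv -c:v libx265 -preset medium -crf 24 -c:a copy out_h265.mkv

    # AV1 via SVT-AV1: better compression for similar visual quality, but slower;
    # lower -preset numbers mean slower (and usually more efficient) encoding
    ffmpeg -i input.mkv -c:v libsvtav1 -preset 6 -crf 30 -c:a copy out_av1.mkv
    ```

    Comparing the resulting file sizes and encode times at a quality level you find acceptable is the quickest way to see where each codec lands on your own hardware.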



  • It’s actually the lazy way. I only do the work once, then copy that work everywhere. The copying/syncing is surprisingly easy. If that’s what you call “package management”, then I guess doing “package management” saves a lot of work.

    If I had to re-configure my devices to my liking every time, I would waste time on repetition, not on an actual improvement. I already configured it the way I like it once, so I want to be able to simply copy it over instead of re-writing it every time for different systems. It’s the same reason I’ve been reusing my entire /home partition on my desktop for ages: I keep all my setup even after testing out multiple distros.

    If someone does not customize their defaults much, or does not mind re-configuring things all the time, I’m sure it’s fine for them to have a different setup on each device… but I prefer doing the work once and copying it (roughly like the sketch below).

    And I didn’t say that bash is the only config I have. Coincidentally, my config does include a config.fish I wrote ages ago (14 years ago, apparently). I just don’t use it because most devices don’t have fish, so it cannot replace POSIX/bash… as a result it has naturally been left very barebones (and probably outdated), and it’s not as well crafted or featureful as the POSIX/bash one, which gets used much more.
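
    For what it’s worth, a minimal sketch of that “configure once, copy everywhere” workflow, assuming a plain git repo of dotfiles (the repo URL and file names are hypothetical):

    ```
    # Clone the shared configuration onto a new machine
    git clone https://example.com/me/dotfiles.git ~/.dotfiles

    # Link the shared files into place
    ln -sf ~/.dotfiles/bashrc  ~/.bashrc
    ln -sf ~/.dotfiles/inputrc ~/.inputrc

    # Later, a single pull propagates the same setup to every device
    cd ~/.dotfiles && git pull
    ```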


  • Manually downloading the same shell scripts on every machine is just doing what the package manager is supposed to do for you

    If you have a package manager available, and what you need is available there, sure. My Synology NAS, my Knulli, my Cygwin installs on Windows, my Android device… it’s not so easy to get custom shells on those (does fish even have a Windows port?).

    I rarely have to copy manually; in many of those environments you can at least git clone, or use existing syncing mechanisms. In the ones that don’t even have that… well, at least copying the config works: I just scp it. Not a big deal, it’s not like I have to do it that often… I could even script it to make it automatic if it ever became a problem.

    Also, note that I do not just use things like z straight away… my custom configuration automatically calls z as a fallback when I mistype a directory with cd (or when I intentionally use cd from a far-away/wrong location just to get there faster)… I have a lot of things customized; installing the package would only be the first step (something like the sketch below).
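
    A hedged sketch of that kind of fallback, assuming z (rupa/z, or zoxide’s z alias) is already installed; the real config may well behave differently:

    ```
    # Wrap cd so that a non-existent argument is treated as a pattern for z,
    # the "frecent" directory jumper, instead of failing outright.
    cd() {
        case "$1" in
            ""|-*) builtin cd "$@" ;;      # plain cd, cd -, cd -P, etc.
            *)
                if [ -d "$1" ]; then
                    builtin cd "$@"        # real directory: normal behavior
                else
                    z "$@"                 # otherwise, jump by frecency
                fi
                ;;
        esac
    }
    ```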


  • It’s not only clusters… I have my shell configuration even on my Android phone, which I often connect to over ssh. And also on my Kobo, and on my small portable console running Knulli.

    In my case, my shell configuration is structured into folders, so I can add config specific to each location while still sharing the same base (see the layout sketch after this comment).

    Maybe not everything is general, but the things that are general and useful become so ingrained that it’s annoying when you don’t have them. Things like shortcuts for backwards history search, some readline movement shortcuts that apparently aren’t standard everywhere… or jumping to the most ‘frecent’ directory matching a pattern, like z does.

    If you don’t mind those scripts not always working, and you have the time to maintain two separate sets of configuration, initialization scripts, aliases, etc., then it’s fine.
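
    As an illustration only, a shared-base-plus-per-host layout could look roughly like this (the directory names are made up for the example, not the actual structure):

    ```
    # Sourced from ~/.bashrc: load the shared base first, then host-specific extras.
    #   ~/.dotfiles/shell/common/*.sh        -> aliases, functions, prompt for all machines
    #   ~/.dotfiles/shell/hosts/<name>/*.sh  -> tweaks only for that machine
    for f in ~/.dotfiles/shell/common/*.sh; do
        [ -r "$f" ] && . "$f"
    done

    # HOSTNAME is set by bash; strip any domain part to get the short name
    host_dir=~/.dotfiles/shell/hosts/${HOSTNAME%%.*}
    if [ -d "$host_dir" ]; then
        for f in "$host_dir"/*.sh; do
            [ -r "$f" ] && . "$f"
        done
    fi
    ```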





  • PowerShell, in concept, is pretty powerful since it’s integrated with C#/.NET and lets you handle complex data structures as objects too, similar to nushell (though it doesn’t “pretty-print” everything the way nushell does, at least by default).

    But in practice, since I don’t use it as much, I never really get used to it and I’m constantly looking up how to do things… I’m too used to POSIX tools, and I often end up bringing over a portable subset of MSYS2, Cygwin or similar whenever possible, just so I can use grep, sed, sort, uniq, curl, etc. on Windows ^^U …However, for scripts that have to deal with structured data it’s superior, since it has built-in methods for that.


  • I prefer getting comfortable with bash, because it’s everywhere and I need it for work anyway (no fancy shells in remote VMs). But you can customize bash a lot to give more colored feedback, or even remap its shortcuts through readline (see the example after this comment). Another one is pwsh (PowerShell), because it comes by default on the Windows machines that (sadly) I sometimes have to use as VMs too. But you can also install it on Linux, since it’s now open source.

    But if I wanted to experiment personally, I’d go for xonsh, a Python-based shell, so you get all the tools and power of Python with the convenience of a terminal.
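
    As one example of that kind of readline customization (just a common choice of bindings, not a required setup), from ~/.bashrc using bash’s bind builtin:

    ```
    # Up/Down arrows search history for commands starting with what's already typed
    bind '"\e[A": history-search-backward'
    bind '"\e[B": history-search-forward'

    # Case-insensitive tab completion, with colored completion listings
    bind 'set completion-ignore-case on'
    bind 'set colored-stats on'
    ```

    The same lines, minus the bind wrapper and outer quotes, can go in ~/.inputrc so that other readline-based tools pick them up too.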



  • This.

    I haven’t tried Cosmic yet, but for me it’s the opposite: I feel GNOME (and KDE) UI is needlessly complex & bloated. Give me a simple tiling window manager that’s efficient, quick and always reliable. No real need for menus or fancy animated toolbar widgets, just snappy instant response to my keypresses.

    UX preferences are as varied as people’s tastes, and they might also evolve with the times.



  • The desktop has been losing market share for a while. I feel Windows is already under serious threat (if not already in the minority) when you consider all the devices mainstream audiences orbit around (phones, tablets, portable consoles, etc.), many of them running the Linux kernel. Only about a third of most websites’ traffic comes from desktops.

    Many of the people who frequently use Windows desktop do so because of their job, and often avoid using it outside of work as much as possible, since it feels like… well, work.

    Microsoft has been desperately trying to appeal to those other, bigger slices of the pie and has failed every time.

    PC gaming was one sector where they had an advantage, yet that has already started to crumble thanks to Valve. I feel MS will just push to integrate their Xbox with the Windows OS more and more…

    I feel it’s a battle on many fronts, since PCs have many uses… so MS is likely to run its typical spiel: copy what the competition is doing and try to centralize/integrate it into their OS in a way that gives them an advantage, as they are famous for doing.

    Another area where they can do this is WSL (Windows Subsystem for Linux)… they could turn Windows into a frontend for running Linux apps… so if Linux apps became popular, they could advertise Windows as the “best” way to run Linux software without losing full first-party support for legacy Windows software.


  • That can be true… but it depends on the change… emptying your bank account is a change that would make you poorer, and having all those who love you die would be a change that is likely to make you bitter (or at least, sad).

    Also, a lot of ancient software introduces change relatively frequently… the Linux kernel itself is constantly changing and introducing new features, despite having very strict rules about backwards compatibility.

    The disagreement wasn’t about whether the new thing is good/bad simply because it’s “New! Different!”… but about whether it was actually a good change or not.

    In the same way, just because nitro is the new init system in town (a change from the current status quo) doesn’t mean it’s necessarily better or worse, right?

    Also, I remember that before systemd there was a lot of innovation in init systems… most distros had their own spin, and there was more diversity in the components that are now part of systemd. I’d argue that ever since systemd became the de facto standard, innovation in those areas has become niche. One could argue there’s less change now: distros are becoming more homogeneous and more change-averse in that sense.