  • I’m also not familiar. But my understanding is that the package maintainers should prevent this situation. Otherwise, even if there are package version dependencies (I don’t actually know if pacman enforces them), they would just block the update, resulting in a partial update, which isn’t supported. For example, if your theoretical unmaintained Firefox blocks the update of libssl but Python requires new functionality, you would be stuck in dependency hell. Leaving this problem to the users just makes it worse, so the package maintainers need to sort something out.

    It is a huge pain when it happens, but it tends to be pretty rare in practice. Typically they can just wait for the software to update, or ship a small patch to fix it. But in the worst case you need to maintain two versions of the common dependency. In lots of distros, very common dependencies get separate packages for different major versions for exactly this reason, for example libfoo1 and libfoo2. Then there can be a period where both are supported while packages slowly move from one to the other.
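    To make the naming scheme concrete, a hypothetical sketch (libfoo is made up, as above):

      # Both major versions installed side by side during the transition:
      pacman -S libfoo1   # provides libfoo.so.1 for packages on the old ABI
      pacman -S libfoo2   # provides libfoo.so.2 for packages that have migrated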


  • If no dependency tries to update too. Of course in that case I would stop. And I never run pacman -Sy anyway, only -Syu.

    That’s all you need to know. As long as you always use pacman -Syu you will be fine; pacman -Sy is the real problem. The wiki page is pretty clear about which sequences of commands are problematic: https://wiki.archlinux.org/title/System_maintenance#Partial_upgrades_are_unsupported.

    Right? What I don’t understand is: when I uninstall with pacman -Rs firefox and delete the cached firefox package (only that file), the system is in the same state as before I installed it. Then -S firefox should be okay, right? And it even looks up the new version.

    This isn’t correct. It won’t look up the new version. Assuming the system was in a consistent state, it will download the exact same package that you deleted. The system only ever “updates” when you run pacman -Sy. Until you use -y, all packages are effectively pinned at a specific version. If the version that gets installed is different from the one you removed, it probably means that you were breaking the partial-update rule previously.
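    A quick sketch of what I mean (the version number is made up):

      pacman -Rs firefox    # remove firefox plus now-unneeded dependencies
      pacman -S firefox     # no -y, so this reinstalls the exact version
                            # pinned in your local sync database
      pacman -Q firefox     # e.g. “firefox 121.0-1”, same as before removal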


  • But that is my point. Just running pacman -S firefox is fine as long as you didn’t run pacman -Sy at some point earlier. It won’t update anything, not even dependencies. It will just install the version that matches your current package list and system, including the right version of any dependencies if they aren’t already installed.

    But that means if you already have Firefox installed it will do nothing.


  • I think you are a little confused about the problem here. The issue is that partial updates are not supported. The reason for this is very simple: Arch ensures that any given package list works on its own, but not that packages from different versions of the package list work together. So if Firefox depends on libssl, the new Firefox package may depend on a new libssl function. If you install that version of Firefox without updating libssl, it will cause problems.

    There is no way around this limitation. If you install that new Firefox without the new libssl you will have problems, no matter how you try to rules-lawyer it. Now, 99% of the time this works; typically packages don’t depend on new library functions right away. But sometimes they do, and that is why, as a rule, this is unsupported. You are welcome to try it, but if it breaks, don’t complain to the devs; they never promised it would work. This isn’t some policy where you can find a loophole, it is a technical limitation. If you manage to find a “loophole”, people aren’t going to say “oh, that should work, let’s fix it”. It will break and you will be on your own to fix it.

    Focusing on your commands: pacman -S firefox is always fine on its own. If Firefox is already installed it will do nothing; if it isn’t, it will install the version from the current package list. Both of those operations are supported. Also, pacman -Rs firefox && pacman -S firefox is really no different from just pacman -S firefox (other than potentially causing problems if the package can’t be removed because other packages depend on it). So your command isn’t accomplishing anything, even if it did somehow magically work around the rules.

    The real problem is pacman -Sy. This command updates the package list without actually updating any packages. It puts your system into a precarious state where any package newly installed or updated (for example, the pacman -S firefox command from earlier) will be a version that is mismatched with the rest of your system. This is unsupported and will occasionally cause problems. Generally speaking you shouldn’t run pacman -Sy; any time you use -Sy you should also pass -u. That ensures the package list and your installed packages are updated together.
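    Put in terms of command sequences (a sketch, not something to run on a system you care about):

      # Unsupported: refreshes the package list, then installs a single
      # package, so firefox can end up newer than the libraries it links to.
      pacman -Sy
      pacman -S firefox

      # Supported: refresh the list and upgrade everything together.
      # You can name a package in the same transaction.
      pacman -Syu firefox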


  • Reverse DNS is different from a static IP.

    But yes, for outbound email, if you can’t control reverse DNS you will have pain. (Inbound is totally fine.) You can in theory just use whatever hostname the ISP’s reverse DNS resolves to; however, you will get some spam score (or be rejected outright) because it doesn’t match your “from” domain.

    Outbound email is a huge pain no matter what, really. Unless you have a long-term lease on the IP and it isn’t in a bad-reputation network, you basically have to pay someone else if you want reliable delivery.
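    You can check what your outbound IP currently resolves to like this (the IP is a placeholder from the documentation range):

      dig -x 203.0.113.25 +short
      # e.g. “customer-203-0-113-25.example-isp.net.”
      # For decent deliverability that name should match your SMTP HELO
      # hostname and resolve back to the same IP (forward-confirmed rDNS).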


  • It’s a problem, but it isn’t a major one. I am using rspamd without any sort of exotic configuration (basically just enabling things that are provided, not writing my own rules) and I only get a few spam messages leaking through per week. Maybe slightly worse than GMail, but not considerably so.

    IMHO the only real missing thing out of the box is contacts checking, which is a big deal because it is great to have reliable delivery from contacts. But my false-positive rate is so low anyway that it isn’t a big issue, and things like the known_senders module mostly mitigate it.
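    If you want to try that module, a minimal sketch (going from the rspamd docs; it is off by default and also needs Redis configured):

      # Enable rspamd’s known_senders module via a local.d override:
      printf 'enabled = true;\n' > /etc/rspamd/local.d/known_senders.conf
      systemctl restart rspamd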


  • Yes, blocking port 25 outbound is incredibly common by default, even on some server connections. It is probably better overall, for exactly the reasons that you mentioned.

    Or just don’t self-host email

    IMHO this is a bit overblown. Hosting inbound is fairly easy. Mail senders are very forgiving (probably for the worse); even if your TLS cert is expired you will probably still get mail. Plus, senders are supposed to retry for days if you have downtime.

    However, it is unfortunately true that, thanks to spam, sending is a huge pain, because IPv4 reputation is a huge component. Sure, you can get GMail to trust your domain after a month or so of sending if you have decent volume. But other providers, whom you may mail once a year, are just going to go off IP reputation. Email was basically designed for forwarding, though, and you can use a service like AWS SES to relay your email from a trusted IP pretty easily. If you are low volume (like personal mail) there are tons of services that will do this for free.
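    For example, with Postfix you can relay everything through SES with a few settings (a sketch; the endpoint region and credentials path are placeholders, and you create the SMTP credentials in AWS first):

      postconf -e 'relayhost = [email-smtp.us-east-1.amazonaws.com]:587'
      postconf -e 'smtp_sasl_auth_enable = yes'
      postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
      postconf -e 'smtp_sasl_security_options = noanonymous'
      postconf -e 'smtp_tls_security_level = encrypt'
      postmap /etc/postfix/sasl_passwd   # hash the credentials file
      systemctl reload postfix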


  • I’m pretty surprised that all of the audio formats work. I’m not so surprised that the TV has h265, although maybe a bit surprised that it is exposed to the browser. The container support is also pretty surprising, unless your MKVs are so simple that they are effectively WEBM.

    Or maybe it pops the link out of the browser into a dedicated media player, which would have decent codec support.

    iDevices do expose h265 in the browser, but the container support is still a bit surprising. But then again, WEBM is basically MKV, so maybe that is why it tends to work.
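    If you want to check what is actually inside those MKVs, ffprobe (ships with ffmpeg) will tell you:

      ffprobe -v error -show_entries stream=codec_name,codec_type \
        -of default=noprint_wrappers=1 video.mkv
      # If it is just VP9/AV1 video with Opus/Vorbis audio, the file is
      # effectively WEBM and browsers are much more likely to play it.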


  • There are a handful of common reasons.

    1. The client doesn’t support the formats. Browser clients are notoriously picky, not supporting some common video and audio formats (for example, few browsers support h265, and it isn’t generally considered web-safe). But embedded devices may also cause trouble if they don’t have enough CPU for software playback and don’t have hardware support for the codec used.
    2. Playing at a lower bitrate. In that case you can transcode on the fly.
    3. Remuxing. This covers things like the moov atom, where the actual codecs are supported but not the container or the exact packaging of the file (see the sketch below).

    But yeah, especially if you are using a player with wide format support, you may not need it.
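    For the remuxing case, the fix is cheap since nothing gets re-encoded (filenames are placeholders; some subtitle formats won’t fit in MP4 and would need converting or dropping):

      # Copy the streams into a new container and move the moov atom to the
      # front of the file so playback can start before the download finishes:
      ffmpeg -i input.mkv -c copy -movflags +faststart output.mp4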


  • IMHO this isn’t really worth it.

    1. x264 is very fast at the faster presets. Especially if you aren’t often streaming across the internet, the size hit from the faster presets is fine, and even if you are, it is probably fine. Getting a slightly faster CPU will also get you super far, and it is more useful to have lying around than a GPU, as it will benefit most things that you do on the server. And in the worst case, a bit of CPU usage isn’t going to hurt most of the things that he is running (except maybe a game server, if people are playing at the same time and you are really maxing out all of your cores). See the sketch after this list.
    2. Integrated GPUs are fine for a handful of concurrent streams, especially the Intel ones, which have amazing media engines.
    3. Even if you are going for a dedicated GPU, I would go with an Intel Arc. They are way better at media encoding and cost less.
    4. You can always add a GPU later. Wait until you have a need and are seeing problems without one.
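    Roughly, the two options look like this (filenames and the render device path are placeholders):

      # CPU: x264 at a fast preset, cheap and fine for streaming.
      ffmpeg -i movie.mkv -c:v libx264 -preset veryfast -crf 23 -c:a aac out.mp4

      # Intel iGPU: VAAPI offloads nearly the whole encode to the media engine.
      ffmpeg -vaapi_device /dev/dri/renderD128 -i movie.mkv \
        -vf 'format=nv12,hwupload' -c:v h264_vaapi -qp 23 -c:a aac out.mp4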

  • I still recommend it. I’m not fully happy with the situation, but for now I consider it my best option.

    1. I consider Chromium-based browsers out of the question, as they give too much power to Google. This is already proving to be a problem with new APIs and “features” that Google is pushing into the web platform, and the bigger their market share gets, the more control they have.
    2. Web browsers are the biggest attack surface that most people have. Displaying untrusted webpages and running untrusted code safely is incredibly difficult, and vulnerabilities are regularly discovered. I don’t yet know of a Firefox fork that I trust to reliably respond to security vulnerabilities quickly and correctly.

    So for now I am staying with stock Firefox. Not to mention that, running a distro-built Firefox, I have some insulation from Mozilla’s ToS. But I am very much considering some of the forks, especially the ones that are very light on patches and are mostly configuration tweaks.