• 0 Posts
  • 40 Comments
Joined 3 years ago
Cake day: August 6th, 2023

  • CMR performs better under all workload types.

    Shingled Magnetic Recording overlays tracks on top of each other like roof shingles. This means more tracks (and therefore more data) fit on the same platter. Unfortunately it also means that writing data requires rewriting the adjacent overlapped tracks, which causes a lot of data reshuffling and very slow writes while it’s happening (say you edit a file or replace it, delete some and copy over others).

    SMR offers more storage for less money but takes a serious performance hit (right now the largest CMR disks you can get are about 28TB; by contrast you can get 40TB SMR disks, so it significantly amplifies capacity). It isn’t suitable for many scenarios. For archival backup it’s fine. For disks that have data changed on them anywhere near regularly it’s not great.

    I want to underline that USB-powered portable 5400RPM disks are already slower when CMR, so as SMR they get a lot slower in write performance (one I had would drop to sustained write speeds in the low 20MB/s range when over 60% full). By contrast, a 7200RPM SMR disk with proper 12V power from a PSU rail or an AC adapter would likely be at least double that at worst.

    So SMR has its uses, it has its place. It’s just that a lot of people who don’t know better might use it in places where CMR is more appropriate and would give them a better experience. So by all means, if you’re using SMR disks as back-up targets for your primary ones, creating back-up snapshots that are updated infrequently, continue to do so; they’re fine for that, especially if the tasks are done on machines that can be left running for days while the data is slowly written.
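    A toy sketch of why those rewrites get slow: in this model (the zone size and geometry are made up for illustration, real drives use far more complex zones, persistent caches, and firmware heuristics), modifying any track in a shingled zone forces the whole zone to be rewritten, while CMR rewrites only the track itself.

```python
# Toy model of SMR write amplification -- illustrative only.
TRACKS_PER_ZONE = 8  # hypothetical: overlapped tracks that must be
                     # rewritten together once shingled

def rewrite_cost(tracks_modified: int, shingled: bool) -> int:
    """Tracks physically written in order to modify `tracks_modified` tracks."""
    if not shingled:  # CMR: each track can be rewritten in place
        return tracks_modified
    # SMR: touching any track in a zone forces a read-modify-write of
    # the whole zone, since the overlapped tracks after it are clobbered.
    zones_touched = -(-tracks_modified // TRACKS_PER_ZONE)  # ceil division
    return zones_touched * TRACKS_PER_ZONE

print(rewrite_cost(1, shingled=False))  # CMR: 1 track written
print(rewrite_cost(1, shingled=True))   # SMR: 8 (the whole zone)
```

    Editing one track in place costs one track on CMR but a whole zone on SMR, which is where the write amplification (and the stalls) come from.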





  • Absolute worst case scenario, if MS breaks this, your users would get a complaint about Windows not being activated: a small “Activate Windows” watermark stuck in the lower right of their screen, plus losing the ability to change wallpapers, customize Windows colors, etc.

    To be clear it wouldn’t break the install and it would leave it in a state in which you could use an updated version of MAS (reminder MAS supports multiple activation options) to fix it remotely.



  • If you’re going Intel you can check the ark.intel.com pages for the processors in the devices you’re looking at. Intel does pretty good documentation, so it’ll show you what integrated graphics they have and all that.

    Ideally you want a chip that can do hardware decoding (and if possible encoding if you’re serving media to others and intend for it to transcode and not direct-play) of common codecs so you’re not eating a massive power bill or generating tons of heat or getting bogged down in resource utilization.

    AV1 is the only tricky part when it comes to hardware decode support. Maybe you don’t use it yourself, but typically only the newer chips support hardware decode of AV1 files. That’s something to consider if you have, or plan to have, lots of AV1-encoded files. (Though there is software decode, of course.)
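    As a rough illustration of what checking ark boils down to, here’s a hand-compiled sketch of decode support for a few example chips; treat these entries as assumptions and verify the exact SKU on ark.intel.com before buying.

```python
# Rough, hand-compiled sketch of Intel Quick Sync *decode* support.
# Entries are approximate -- always confirm the exact SKU on ark.intel.com.
DECODE_SUPPORT = {
    # chip/family:         (H.264, HEVC,  AV1)
    "Gemini Lake (J4125)": (True,  True,  False),
    "Alder Lake-N (N100)": (True,  True,  True),
    "Twin Lake (N150)":    (True,  True,  True),
}

def can_hw_decode(chip: str, codec: str) -> bool:
    idx = {"H.264": 0, "HEVC": 1, "AV1": 2}[codec]
    return DECODE_SUPPORT[chip][idx]

print(can_hw_decode("Twin Lake (N150)", "AV1"))     # True
print(can_hw_decode("Gemini Lake (J4125)", "AV1"))  # False -- software only
```

    The pattern the table shows: older budget chips handle H.264/HEVC in hardware, but AV1 decode only arrived with the more recent generations.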

    The Intel N150 can drive a 4K desktop. You won’t be doing 4K gaming on it at all, but it can handle the desktop and video playback, and it’s a low-power-consumption chip. It should be able to support at least 2-3 simultaneous 4K transcodes as well. A lot of enthusiasts use it for just this purpose, in fact, and it’s fairly snappy for uses like these.

    Anything more powerful than an N150 will also be fine for 4K video viewing, transcoding, a 4K desktop, etc. So if you want to spend more and get a more powerful Intel chip, you can. Just avoid the 13th/14th generation i-series (i5/i7/i9), especially used: a design flaw caused permanent hardware damage in many of those, and there are a lot of messed-up ones floating around from people trying to offload them.

    144Hz may be the really tricky part. Lots of these mini boxes are capped at 60Hz, so definitely double-check that. There’s always the option of a DisplayPort-to-HDMI cable too, if it has a DP output that supports the necessary 4K framerate. The N150 might struggle driving that, to be honest.

    Oh and be aware of thermal throttling. Lots of manufacturers stuff Ultra 9 series in things like laptops and minis with inadequate cooling and they thermal throttle like crazy so you pay $800 and get something with the same performance as a properly cooled Ultra 7 or 5 series.

    To loop back around to whether you need a dedicated GPU. You have to ask yourself are you transcoding streams for others or is it mostly direct-play without transcode? Integrated GPU on the CPU die should be good enough unless you have an awful lot of streams going at once or some other pressing need.

    You can run whatever distro you want. There are extremely specialized distros like OSMC (https://osmc.tv/), which is essentially Kodi running on Debian without a desktop environment (extremely media-center focused).


  • If the drive previously wasn’t making this noise (as in it had been filled with data, been in use for days-weeks and wasn’t ever making this noise) and it doesn’t happen in response to data writes (even hours after the fact) then it might be a cause for concern that the drive could be dying.

    In general it’s a good idea to have back-ups of any important data, but I’d really ensure that’s the case here and assume the drive could imminently fail. A change in a hard drive’s sound (whether idle noises or active reading/writing noises) is generally a cause for concern about potential failure. It could be other things, though: as drives age, mechanical components can change their sound signature without necessarily failing (the drive could go on working fine for years).

    That said there are normal processes in drives that can make noise:

    • Some sort of operation driven by your OS itself, I won’t begin to get into all of them but there could be something accessing things in the background, doing file table or journaling operations, writes, checks, etc on the file system itself, just low level maintenance stuff.

    • SMR drives may continue to write and shuffle data for quite some time after being written to, especially if it was a large amount of data. Though even in the case of multiple terabytes, this should probably resolve within 12 hours.

    • Many drives, especially high-capacity enterprise drives, make a -soft- clicking sound when idle but not powered off, as the arms sweep the surface to (if I recall correctly) spread lubricant around or perform some sort of basic mechanical maintenance. It’s part of normal drive operation. It’s possible it occurs more frequently in response to a massive amount of prior writes, like filling a drive, or may not activate until a certain amount of data is written; I’m not really sure how that works, as it would probably be proprietary information of the manufacturer.
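    If you’d rather check the drive’s own health counters than guess from the noise, `smartctl -A` (from smartmontools) prints an attribute table. A minimal sketch of picking out the failure-predicting attributes might look like this; the format below matches smartctl’s real column layout, but the sample values are invented for illustration.

```python
# Minimal parser for `smartctl -A` output, flagging the attributes most
# associated with imminent mechanical failure.
WATCH = {5: "Reallocated_Sector_Ct", 187: "Reported_Uncorrect",
         197: "Current_Pending_Sector", 198: "Offline_Uncorrectable"}

def failing_attrs(smartctl_output: str) -> dict:
    """Return {attribute_name: raw_value} for watched attributes with raw > 0."""
    bad = {}
    for line in smartctl_output.splitlines():
        fields = line.split()
        # Attribute rows: ID, name, flag, value, worst, thresh, type,
        # updated, when_failed, raw_value  (10 columns)
        if len(fields) >= 10 and fields[0].isdigit():
            attr_id, raw = int(fields[0]), fields[9]
            if attr_id in WATCH and raw.isdigit() and int(raw) > 0:
                bad[WATCH[attr_id]] = int(raw)
    return bad

sample = """
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8
"""
print(failing_attrs(sample))  # {'Current_Pending_Sector': 8}
```

    Nonzero pending or reallocated sectors alongside new noises would be a much stronger failure signal than the noise alone.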

    “Should I be worried about this? To my paranoid mind it feels like something is slowly reading my files with some exploit to bypass the indicator light to fly under the radar.”

    How would it do this? Is it installing hacked firmware to your enclosure too? I doubt you’re that valuable of a target.

    If you’re worried about malware then back up your stuff, nuke the install and reinstall from scratch. I wouldn’t worry about it if this is the only thing you’re seeing and find it unlikely.


  • Majestic@lemmy.ml to Linux@lemmy.ml: Antiviruses?
    I would say there are not any worth recommending, and that best practices are avoiding running random scripts you don’t understand, keeping software up to date with package managers, and using virtualization tools. Also, perhaps look into Portmaster, which is an interactive firewall.

    Meta rant on this subject

    What frustrates me about the answers these questions get is that no one ever offers tools comparable to Windows tools. Increasingly, I think that’s because they simply don’t exist outside of very expensive enterprise subscription offerings that require plunking down no less than a thousand dollars a year. (Certainly none of the major AV vendors offers consumer Linux versions of their software, though most offer enterprise Linux endpoint protection that comes with the caveat of minimum spends of several hundred, if not several thousand, dollars a year.)

    ClamAV is primarily a definition-based AV, the very weakest and most useless kind. Sure, it’s kind of useful to make sure your file server isn’t passing around year-old malware, but it’s basically useless for real-time prevention of emerging and unknown threats. For that you need HIPS, behavior control, conditional/mandatory access control, heuristics, etc. ClamAV has one of the worst detection rates in the industry; it’s just laughably bad (often under 60%), so it’s really not a front-line contender at all.

    Compare ClamAV to consumer offerings like ESET, Kaspersky, etc., whose “suite” software featured the aforementioned HIPS, behavioral control, and complex heuristics to detect and block malware-like behavior in real time (for example, accessing and then seeking to upload your KeePass database files, or starting to surreptitiously encrypt all your user files using RSA-4096). It just isn’t in the same ballpark as anything competently done in the last 20 years.

    I haven’t used or relied on a traditional AV for definition-based detection in years. Definitions are worthless; it’s impossible to keep up. The AVs I’ve deployed are there for their heuristics, behavior control, HIPS, etc., which actually stop new, emerging, and unknown threats, or at least put real obstacles in their way. So what Linux needs, what users need, is software like that: forget the traditional virus definitions, and build something with behavior control, HIPS, and some basic heuristics for “gee, this sure looks like malware behavior, better ask the user whether they want and intend this”.
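    As a toy illustration of the kind of behavioral heuristic I mean (not any real product’s implementation), here’s a sketch that flags a process rewriting an unusually large number of files in a short window, the classic cryptolocker pattern. The threshold and window are made-up numbers, and a real HIPS would hook the kernel rather than score a pre-collected event list.

```python
# Toy behavioral heuristic: alert on processes that rewrite many files
# in a short sliding window (cryptolocker-like behavior).
from collections import defaultdict

WINDOW_SECS = 10
THRESHOLD = 50  # hypothetical: writes per window before we alert

def suspicious_processes(events):
    """events: iterable of (timestamp, pid, path) file-write events."""
    writes = defaultdict(list)
    for ts, pid, _path in events:
        writes[pid].append(ts)
    alerts = []
    for pid, stamps in writes.items():
        stamps.sort()
        for i in range(len(stamps)):
            # count writes inside the window starting at stamps[i]
            in_window = sum(1 for t in stamps[i:] if t - stamps[i] <= WINDOW_SECS)
            if in_window >= THRESHOLD:
                alerts.append(pid)
                break
    return alerts

# 60 rapid writes from pid 1337, a handful of slow ones from pid 1:
evts = [(t * 0.1, 1337, f"/home/u/doc{t}.odt") for t in range(60)]
evts += [(t * 5.0, 1, "/var/log/syslog") for t in range(5)]
print(suspicious_processes(evts))  # [1337]
```

    A real implementation would then pause the offending process and ask the user, which is exactly the interactive behavior-control experience missing from the consumer Linux desktop.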

    “Just be smart about what you run” isn’t a realistic solution when people say Linux is for everyone, including their tech-illiterate relatives. Yes, Linux is a lot safer if you only install things from package managers, but that isn’t bulletproof either, as we’ve seen a number of high-impact upstream malware insertions into the build repos of huge software projects in recent years.

    “Just maintain back-ups” isn’t helpful against smart cryptolocker software, which may hide itself for weeks or months and encrypt your files as you back them up. Nor does it protect against account compromise from all your passwords being stolen or a keylogger. Nor does it defend you against persecution after being hit by mercenary/government police-ware and spyware from overreaching governments; it lowers the technical bar for them to gather evidence that you’re an illegal gay person or whatever.

    Back-ups are disaster recovery. Everyone should have them but part of a layered defense is preventing the disaster and inconvenience and invasion of privacy and so on before it happens. Having your identity stolen or accounts taken over isn’t as simple as reverting to a back-up, it can result in hours, days of phone calls, emails, stress, hassle, etc that can drag on for weeks or months.

    Portmaster is a start for this type of system control and protection as it’s a very effective interactive firewall but as far as I know there aren’t any consumer available comprehensive behavior control + HIPS type Linux desktop security solutions. There are several vendors of default deny mandatory access control with interactive mode for Windows but none offer solutions for Linux that aren’t part of enterprise sized contracts beyond affordability and reason. If anyone knows otherwise I would love to know of these solutions as I want to implement them on my Linux machines as I am not comfortable with just my network IPS and firewall solutions by themselves without comprehensive end-point security.


  • “I think the home media collector use case is actually a complete outlier in terms of what these formats are actually being developed for.”

    Well yeah, given who makes it, but it’s what I care about. I couldn’t care less about obscure and academic efforts (or the profits of some evil tech companies) except as vague curiosities. HEVC wasn’t designed with people like me in mind either, yet it means I can have oh, 30% more stuff for the same space usage, and the encoders are mature enough that the difference in encode time between it and AVC is negligible on a decently powered server.
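    As a quick sanity check on that figure: if HEVC hits similar quality at roughly 30% lower bitrate than AVC (the exact savings vary a lot by content and encoder settings), the same disk actually holds a bit more than 30% extra content, since capacity gain is the reciprocal of the size reduction.

```python
# Space gained from re-encoding at the same quality with a smaller codec.
savings = 0.30  # assumed HEVC-vs-AVC bitrate savings; varies by content
extra_content = 1 / (1 - savings) - 1
print(f"{extra_content:.0%} more hours of video in the same space")
# -> 43% more hours of video in the same space
```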

    Transparency (or great visual fidelity period) also isn’t likely the top concern here because development is driven by companies that want to save money on bandwidth and perhaps on CDN storage.

    Which I think is a shame. Lower bitrates at transparency -should- be the goal: getting streaming content to consumers at very high quality, ideally close to or equivalent to UHD BluRay for 4K. Instead we get companies that bit-starve and hop onto these new encoders because they can use fewer bits, relying on plenty of tricks to maintain a baseline of perceptual image quality that passes the sniff test for the average viewer. So instead of quality bumps, we just get them using fewer bits and passing the savings on to themselves, with little meaningful upgrade in visual fidelity for the viewer. Which is why it’s hard to care much about a lot of this stuff when it doesn’t really benefit the user.


  • And which will be so resource intensive to encode with compared to existing standards that it’ll probably take 14 years before home media collectors (or yar har types) are able and willing to use it over HEVC and AV1. :\

    As an example AV1 encodes to this day are extremely rare in the p2p scene. Most groups still work with h264 or h265 even those focusing specifically on reducing sizes while maintaining quality. By contrast HEVC had significant uptake within 3-4 years of its release in the p2p scene (we’re on year 7 for AV1).

    These greedy, race-to-the-bottom device-makers are still fighting AV1. With people keeping devices longer and not upgrading as much, plus tons of people relying on under-powered smart TVs for watching (forcing streaming services to maintain older codecs like h264/h265 to keep those customers), it’s going to take a depressingly long time for it to be anything but a web-streaming phenomenon, I fear.



  • Probably the best choice if OP is dreading 11. Put it off, hope that in 3 years Linux support has matured even more for their use cases.

    MS support has used this software themselves in an edge case where they couldn’t get Windows to activate properly.

    You have two options here:

    1. Enable the extended support (no payment needed with this software, but if OP absolutely refuses to run it they can pay Microsoft directly, though it takes work to find where to do that) and run on that for 3 years, until 2028.

    2. Upgrade to LTSC IoT using the method they outline at the link there. Again there are two options: one is free; the other is following that guide but paying for a gray-market key (on G2A, for instance) for LTSC IoT, which avoids running this software on their PC but means paying someone for a corporate volume key they’re not technically allowed to sell. That means support until 2032.




  • The only thing I would note is that -IF- your volumes are not partition- or disk-based BUT -file- based, there is the possibility that corruption of the host file system on the disk holding the volume files could result in pieces of those files being marked unreadable, and it’s POSSIBLE one way to solve this would be a file system check utility.

    HOWEVER such activities carry a -large- risk of data loss so I would advise a bit for bit copy of the disk and doing the repair on that so if it goes wrong you’re not worse off. -IF- you cannot make a copy then I would advise at least trying to mount using backup headers before doing that and copying off anything you can salvage as file system checks can really mess up data recovery and should only be used in certain circumstances.

    In fact, you’re much better off trying the recovery software I linked than doing a file system check, as it tends to give better results.

    You can also use the option to mount as read only in VC to prevent writes to a suspected failing disk.

    Let me know if you need further advice.


  • Veracrypt has back-up headers located elsewhere in the volume that are unlikely to have been overwritten.

    First things first, I would strongly recommend copying the drive as it currently exists, bit for bit, to another drive of equal or larger size. Don’t work on the original if you can help it.
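    The “copy first, skip errors, keep going” workflow can be sketched in a few lines. This is only a minimal stand-in for GNU ddrescue (which you should prefer on a genuinely failing disk, since it retries intelligently and keeps a map file), and the device paths you’d pass, like /dev/sdX, are placeholders.

```python
# Minimal ddrescue-like sketch: copy a source device or image to a
# destination, zero-padding unreadable chunks instead of aborting, so
# one bad region doesn't stop the whole copy.
import os

CHUNK = 64 * 1024  # 64 KiB per read; smaller chunks lose less per error

def rescue_copy(src_path: str, dst_path: str) -> int:
    """Copy src to dst chunk by chunk; return the number of unreadable chunks."""
    errors = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        size = src.seek(0, os.SEEK_END)  # works for files and block devices
        offset = 0
        while offset < size:
            want = min(CHUNK, size - offset)
            try:
                src.seek(offset)
                data = src.read(want)
            except OSError:
                data = b"\x00" * want  # pad the bad chunk to keep offsets aligned
                errors += 1
            dst.write(data)
            offset += CHUNK
    return errors
```

    You’d call something like `rescue_copy("/dev/sdX", "/mnt/big/rescue.img")` (hypothetical paths) as root, then do all recovery attempts against the image.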

    Now, with this copy, check the option to use the backup header when mounting and try again. If the partition is gone and VeraCrypt doesn’t see it, you’ll need something that recovers partitions and doesn’t mind encrypted partitions or file system types it doesn’t understand, and use it ON THE COPY to recover and recreate the partition (this writes data and can cause further loss or worsen your ability to recover, which is why it’s important to perform it on a copy). TestDisk may work for this, but there are other options that are probably better.

    See this list: https://old.reddit.com/r/datarecovery/wiki/software and choose something from there if this data is truly important. Again, only work on a copy on another drive. Some of these tools actually only read from the original drive and write recovered data elsewhere, and should be safe to use on the original so long as they have you select a separate target drive to push the recovered data to, but read the documentation. TestDisk absolutely must be used on a copy.

    You will likely incur some data loss, and once the volume successfully mounts in VeraCrypt you should run one of the file recovery tools mentioned on it to attempt to recover as much as possible.


  • Use the Secure Erase function, which is built into the SATA and other specs; it applies a voltage spike to clear the cells of all held charge, wiping them. This happens near-instantly: the process will signal that it’s finished within a minute and takes much less time than that.
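    For reference, on Linux the usual way to trigger an ATA Secure Erase is hdparm, via its two-step set-password-then-erase sequence from the ATA security feature set. The sketch below only builds the command strings as a dry run, since actually running them wipes the target; /dev/sdX and the password are placeholders.

```python
# Dry-run builder for the hdparm ATA Secure Erase sequence.
def secure_erase_commands(device: str, password: str = "p") -> list:
    return [
        # First check the drive is 'not frozen' in the Security section;
        # a suspend/resume cycle often unfreezes it.
        f"hdparm -I {device}",
        f"hdparm --user-master u --security-set-pass {password} {device}",
        f"hdparm --user-master u --security-erase {password} {device}",
    ]

for cmd in secure_erase_commands("/dev/sdX"):
    print(cmd)
```

    Only run the real commands against a drive you truly intend to wipe, and double-check the device node first.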

    If you want to be extra paranoid, I suppose you could follow that up by encrypting the entire (empty) drive and then secure-erasing it again. I’m not sure this has any benefit, but it’s the closest you can get to forcing the cells to be used and then cleared again. Even then, there’s no guarantee that exhausted, worn-out areas of flash that have been retired from use get wiped. It’s unlikely that large amounts of data could be recovered after a secure erase unless your drive is failing or completely worn out, but this is also why, if you ever store sensitive data on an SSD, it’s preferable to do so in encrypted form (such as encrypting the whole disk or partition).



  • Yes, absolutely. And they can drag Canonical into it as well if they wish though it’s harder. Being UK based doesn’t protect them from the long arm of US law including arresting any US personnel, freezing and seizing their funds, putting out arrest warrants for and harassing those in the UK with the fear of arrest and rendition to the US if they go to a third country (for a conference, vacation, etc, most would buckle rather than live under that). Additionally the US could sanction them for non-cooperation by making it illegal for US companies to sell them products and services, for US citizens to work for or aid them, etc.

    They can go after community led projects too, just send the feds over to the houses of some senior US developers and threaten and intimidate them, intimate their imminent arrest and prison sentence unless they stop contact and work with parties from whatever countries the US wishes to choose to name. Raid their houses, seize their electronics, detain them for hours in poor conditions. Lots of ways to apply pressure that doesn’t even have to stand up to extensive legal scrutiny (they can keep devices and things and the people would have to sue to get them back).

    The code itself is likely to exist in multiple places so if someone wanted to fork from say next week’s builds for an EU build they could and there would be little the US could do to stop that but they could stop cooperation and force these developers to apply technical measures to attempt to prevent downloads from IP addresses known to belong to sanctioned countries of their choosing.

    It’s not like the US can slam the door and take its Linux home and China and the EU and Russia are left with nothing, they’d still have old builds and code and could develop off of those though with broken international cooperation it would be a fragmented process prone to various teething issues.