
  • While I don’t know the specific post you are referring to, malware exists for Linux. Here’s a great overview from last year. If someone wants to argue, “oh, it’s from a security company trying to sell a product,” then let me point you at the Malware Bazaar and specifically the malware tagged elf. Those are real samples of real malware in the Linux-specific ELF executable binary format (warning: yes, it’s real malware, don’t run anything from this site). On the upside, most seem to be Linux variants of the Mirai botnet. Not something you want running, but not quite as bad as ransomware. But, dig a bit and there are other threats. Linux malware exists, it has for a long time, and it’s getting more prevalent as more stuff (especially servers) runs on Linux.

    While Linux is far more secure than Windows by design, it’s not malware proof. It is harder for malware to move from user space into root (usually), but that’s often not needed for the activities malware gets up to today. Ransomware, crypto miners and info stealers will all happily execute in user-land. And for most people, this is where their important stuff lives. Linux’s days of living in “security through obscurity” are over. Attackers are looking at Linux now and starting to go after it.

    All that said, is it worth having a bloated A/V engine doing full on-access scanning? That depends on how you view the risk. Many of the drive-by type attacks (e.g. ClickFix, fake tech-support scams) heavily target Windows and would fail on a Linux system. The malware and backdoors that come bundled with pirated software are likely to fail on a Linux system, though I’ll admit to not having tested that sort of thing with Wine/Proton installed. For those use cases, I’d suggest not downloading pirated software. Or, if you absolutely are going to, run those files through ClamAV at minimum.
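
    For a quick, manual check, something like this is usually enough (a rough sketch; the ~/Downloads path is just an example):

    sudo freshclam                      # update the signature database (skip if the freshclam service already does this)
    clamscan -r --infected ~/Downloads  # recursive scan, only report infected files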

    Personally, I don’t feel the need to run anything as heavy as on-access file scanning or anything to keep trawling memory for signatures on my home systems. Keeping software up to date and limiting what I download, install and run is enough to manage my risk. I do have ClamAV installed to let me do a quick, manual scan of anything I do download. But, I wouldn’t go so far as to buy an A/V product. Most of the engines out there for Linux are crap anyway.

    Professionally, I am one of the voices who pushed for A/V (really EDR) on the Linux systems in my work environment. My organization has a notable Linux footprint and we’ve seen attackers move to Linux based systems specifically because they are less likely to be well monitored. In a work environment, we have less control over how the systems get (ab)used and have a higher need for telemetry and investigation.


  • You could try using Autopsy to look for files on the drive. Autopsy is a forensic analysis toolkit, which is normally used to extract evidence from disk images or the like. But, you can add local drives as data sources and that should let you browse the slack space of the filesystem for lost files. This video (not mine, just a good enough reference) should help you get started. It’s certainly not as simple as the photorec method, but it tends to be more comprehensive.
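
    If you’d rather stay on the command line, the same capability is exposed by The Sleuth Kit, the toolkit Autopsy is built on. A rough sketch (the device name and output directory are just examples):

    # Recover deleted/unallocated files from a filesystem into ./recovered
    sudo tsk_recover /dev/sdb1 ./recovered
    # Add -e to also export allocated (still-present) files
    sudo tsk_recover -e /dev/sdb1 ./recovered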


  • I can think of a couple of reasons off the top of my head.

    You don’t say, but I assume you are working on-site with your work system. So, the first consideration would be a firewall at your work’s network perimeter. A common security practice is to block outbound connections on unusual ports. This usually means anything not 80/tcp or 443/tcp. Other ports will be allowed on an exception basis. For example, developers may be allowed to access 22/tcp outbound, though that may also be limited to only specific remote IP addresses.
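
    If you want to check whether that is what’s happening, you can probe an outbound port from the work system with something like netcat (a rough sketch, with yourdomain.tld standing in for your server):

    nc -zv -w3 yourdomain.tld 443    # usually allowed, should connect
    nc -zv -w3 yourdomain.tld 22000  # SyncThing's default sync port; a timeout here suggests a perimeter block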

    You may also have some sort of proxy and/or Cloud Access Security Broker (CASB) software running on your work system. This setup would be used to inspect the network connections your work system is making and allow/block based on various policy settings. For example, a CASB might be configured to look at a domain reputation service and block connections to any domain whose reputation is considered suspect or malicious. Domains may also be blocked based on things like age or category. For this type of block, the port used won’t matter. It will just be “domain something.tld looks sketchy, so block all the things”. With “sketchy” being defined by the company in its various access policies.

    A last reason could be application control. If the services you are trying to connect to rely on a local program running on your work system, it’s possible that the system is set to prevent unknown applications from running. This setup is less common, but it is growing in popularity (it just sucks big old donkey balls to get set up and maintain). The idea being that only known and trusted applications are allowed to run on the system, and everything else is blocked by default. This looks like an application just crashing to the end user (you), but it provides a pretty nice layer of protection for the network defenders.

    “Messing with the local pc is of course forbidden.”

    Ya, that’s pretty normal. If you have something you really need to use, talk with your network security team. Most of us network defenders are pretty reasonable people who just want to keep the network safe without impacting the business. That said, I suspect you’re going to run into issues with what you are trying to run. Something like SyncThing or some cloud-based storage is really useful for businesses. But, businesses aren’t going to be so keen on you backing their data up to your home server. Sure, that might not be your intention, but this is now another possible path for data to leave the network which they need to keep an eye on. All because you want to store your personal data on your work system. That’s not going to go over well. Even worse, you’re probably going to be somewhat resistant when they ask you to start feeding your server’s logs into the business’s log repository, since that is what they would need to prove that you aren’t sending business data to it. It’s just a bad idea all around.

    I’d suspect Paperless is going to run into similar issues. It’s a pretty obvious way for you to steal company data. Sure, this is probably not your intention, but the network defenders have to consider that possibility. Again, they are likely to outright deny it. Though if you and enough folks at your company want to use something like this, talk with your IT teams; it might be possible to get an instance hosted by the business for business use. There is no guarantee, but if it’s a useful productivity package, maybe you will have a really positive project under your belt to talk about.

    FreshRSS you might be able to get going. Instead of segregating services by port, stand up something like NGinx on port 443 and configure it as a reverse proxy. Use host headers to separate services such that you have sync.yourdomain.tld mapped to your SyncThing instance, office.yourdomain.tld mapped to your paperless instance and rss.yourdomain.tld mapped to FreshRSS. This gets you around issues with port blocking and makes managing TLS certificates easier. You can have a single cert sitting in front of all your services, rather than needing to configure TLS for each service individually.
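
    As a rough sketch of what one of those server blocks looks like (host names, ports and certificate paths are placeholders, not from any real setup; repeat the block per sub-domain):

    server {
        listen 443 ssl;
        server_name rss.yourdomain.tld;

        # a single wildcard or multi-SAN certificate can front all of the sub-domains
        ssl_certificate     /etc/ssl/certs/yourdomain.tld.pem;
        ssl_certificate_key /etc/ssl/private/yourdomain.tld.key;

        location / {
            proxy_pass http://127.0.0.1:8080;   # FreshRSS assumed to be listening locally on 8080
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }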




  • If the goal is stability, I would have likely started with an immutable OS. This provides certain assurances that the base OS is in a known good state.
    With that base, I’d tend towards:
    Flatpak > Container > AppImage

    My reasoning for this being:

    1. Installing software should not affect the base OS (nor can it with an immutable OS). Changes to the base OS and system libraries are a major source of instability and dependency hell. So, everything should be self-contained.
    2. Installing one software package should not affect another software package. This is basically pushing software towards being immutable as well. The install of Software Package 1 should have no way to bork Software Package 2. Hence the need for isolating those packages as flatpaks, AppImages or containers.
    3. Software should be updated (even on Linux, install your fucking updates). This is why I have Flatpak at the top of the list: it has a built-in mechanism for updating. Container images can be made to update reasonably automatically, but have risks. By using something like docker-compose and having services tied to the “:latest” tag, images would auto-update. However, it’s possible to have stacks where a breaking change is made in one service before another service is able to deal with it. So, I tend to tag things to specific versions and update those manually (see the sketch after this list). Finally, while I really like AppImages, updating them is 100% manual.
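
    As a rough sketch of what that update routine looks like in practice (paths and stack layout are whatever your setup uses):

    flatpak update -y   # Flatpaks update through their built-in mechanism
    # For containers pinned to specific versions: bump the tag in docker-compose.yaml, then
    docker compose pull && docker compose up -d   # (docker-compose on older installs)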

    This leaves the question of apt packages or doing installs via make. And the answer is: don’t do that. If there is not a flatpak, AppImage, or pre-made container, make your own container. Dockerfiles are really simple. Sure, they can get super complex and do some amazing stuff. You don’t need that for a single software package. Make simple, reasonable choices and keep all the craziness of that software package walled off from everything else.
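
    As a sketch of how small that can be (“sometool” is a stand-in for whatever package you’re wrapping, not a real package):

    # Write a minimal Dockerfile for a single package, then build it
    cat > Dockerfile <<'EOF'
    FROM debian:stable-slim
    RUN apt-get update && apt-get install -y --no-install-recommends sometool \
        && rm -rf /var/lib/apt/lists/*
    USER nobody
    ENTRYPOINT ["sometool"]
    EOF
    docker build -t sometool .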



  • Ultimately, it’s going to be down to your risk profile. What do you have on your machine which you wouldn’t want to lose or have released publicly? For many folks, we have things like pictures and personal documents which we would be rather upset about if they ended up ransomed. And sadly, ransomware exists for Linux. Lockbit, for example, is known to have a Linux variant. And this is something which does not require root access to do damage. Most of the stuff you care about as a user exists in user space and is therefore susceptible to malware running in a user context.

    The upshot is that due care can prevent a lot of malware. Don’t download pirated software, don’t run random scripts/binaries you find on the internet, watch for scam sites trying to convince you to paste random bash commands into the console (Clickfix is after Linux now). But, people make mistakes and it’s entirely possible you’ll make one and get nailed. If you feel the need to pull stuff down from the internet regularly, you might want to have something running as a last line of defense.

    That said, ClamAV is probably sufficient. It has a real-time scanning daemon and you can run regular, scheduled scans. For most home users, that’s enough. It won’t catch anything truly novel, but most people don’t get hit by the truly novel stuff. It’s more likely you’ll be browsing for porn/pirated movies and either get served a Clickfix/Fake AV page or you’ll get tricked into running a binary you thought was a movie. Most of these will be known attacks and should be caught by A/V. Of course, nothing is perfect. So, have good backups as well.
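
    A scheduled scan can be as simple as a cron entry (a sketch; adjust the path and schedule to taste, and run it from root’s crontab so it can write the log):

    # Weekly scan of /home every Sunday at 03:00, logging only the hits
    0 3 * * 0 clamscan -r --infected --log=/var/log/clamscan.log /home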


  • I started self hosting in the days well before containers (early 2000s). Having been through that hell, I’m very happy to have containers.
    I like to tinker with new things and with bare metal installs this has a way of adding cruft to servers and slowly causing the system to get into an unstable state. That’s my own fault, but I’m a simple person who likes simple solutions. There are also the classic issues with dependency hell and just flat out incompatible software. While these issues have gotten much better over the years, isolating applications avoids this problem completely. It also makes OS and hardware upgrades less likely to break stuff.

    These days, I run everything in containers. My wife and I play games like Valheim together and I have a Dockerfile template I use to build self-hosted game servers in a container. The Dockerfile usually just requires a few tweaks for AppId, exposed ports and mount points for save data. That paired with a docker-compose.yaml (also built off a template) means I usually have a container up and running in fairly short order. The update process could probably be better, I currently just rebuild the image, but it gets the job done.
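
    The rebuild itself is nothing fancy, roughly (assuming a docker-compose.yaml in the game server’s directory):

    docker compose build --pull   # rebuild the image, pulling a fresh base image first
    docker compose up -d          # recreate the container from the new image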



  • It’s been a few years since I did my initial setup (8, apparently; just checked); so, my info is definitely out of date. Looking at the Ubuntu site they still list Ubuntu 16.04, but I think the info on setting it up is still valid. Though, it looks like they only list setting up a mirror or a stripe set without parity. A mirror is fine, but you trade half your storage space for complete data redundancy. That can make sense, but usually not for a self hosting situation. A stripe set without parity is only useful for losing data; never use this. The option you’ll want is a raidz, which is a stripe set with parity. The command will look like:

    zpool create zpool raidz /dev/sdb /dev/sdc /dev/sdd
    

    This would create a zpool named “zpool” from the drives at /dev/sdb, /dev/sdc and /dev/sdd.

    I would suggest spending some time reading up on the setup. It was actually pretty simple to do, but it’s good to have a foundation to work with. I also have this link bookmarked, as it was really helpful for getting rolling snapshots set up. As with the data redundancy given by RAID, it does not replace backups; but, it can be used as part of a backup strategy. They also help when you make a mistake and delete/overwrite a file.
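
    For reference, the snapshot commands themselves are short (assuming a dataset named zpool/data; the name is just an example):

    sudo zfs snapshot zpool/data@$(date +%Y-%m-%d)   # take a dated snapshot
    sudo zfs list -t snapshot                        # list existing snapshots
    # individual files can be copied back out of the hidden .zfs/snapshot directory on the dataset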

    Finally, to answer your question about hardware, my recollection and experience have been that ZFS is not terribly demanding of CPU. I ran an Intel Core i3 for most of the server’s life and only upgraded when I realized that I wanted to run game servers on it. Memory is more of an issue. The minimum requirement most often cited is 8GB, but I also saw a rule of thumb that you want 1GB of memory for each TB of storage. In the end, I went with 8GB of RAM, as I only had 4TB of storage (three 2TB disks in a RAIDZ1). But, also think about what other workloads you have on the system. When built, I was only running NextCloud, NGinx, Splunk, PiHole and WordPress (all in docker containers). And the initial 8GB of RAM was doing just fine. When I started running game servers, I started to run into issues. I now have 16GB and am mostly fine. Some game servers can be a bit heavy (e.g. Minecraft, because fucking Java), but I don’t normally see problems. Also, since the link I provided mentioned it, skip ECC memory. It’s almost never worth the cost, and for home use that “almost never” gets much closer to “actually never”.

    When choosing disks, keep in mind that you will need a minimum of 2 disks and you effectively lose the storage space of one of the disks in the pool to parity storage (assuming all disks are the same size). Also, it is best for all of the disks to be the same size. You can technically use different size disks in the same pool; but, the larger disks get treated as the same size as the smaller disks. So long as the pool is healthy, read speeds are better than a single disk as the read can be spread out among the pool. But, write speeds can be slower, as the parity needs to be calculated at write time. Otherwise, you’re pretty free to choose any disks which will be recognized by the OS. You mention that 1TB is filling up; so, you’ll want to pick something bigger. I mentioned using spinning disks, as they can provide a lot more space for the money. Something like a 14TB WD Red drive can be had for $280 ($20/TB). With three of those in a RAIDZ1 pool, you get ~28TB of storage and can tolerate one disk failure without losing data. With solid state disks, you can expect costs closer to $80/TB. Though, there is a tradeoff in speed. So, you need to consider what type of workloads you expect the storage pool to handle. Video editing on spinning rust is not going to be fun. Streaming video at 4k is probably OK, though 8k is going to struggle.

    A couple other things to think about are space in the chassis, drive connections and power. Chassis space is pretty obvious: you gotta put the disks in the box. Technically, you don’t have to mount the disks, they can just be sitting at the bottom of the case, but this can cause problems with heat shortening the lifespan of the drives. It’s best to have them properly mounted and fans pushing air over them. Drive connections are one of those things where you either have the headers or you don’t. Make sure your motherboard can support 3 more drives with the chosen interface (SATA, NVMe, etc.) before you get the drives. Nothing sucks more than having a fancy new drive only to be unable to plug it into the motherboard. Lastly, drives (and especially spinning drives) can be power hungry. Make sure your power supply can support the extra power requirements.

    Good luck whatever route you pick.


  • Probably the easiest solution would be to just chuck a larger disk in the system and retain the original drive for the operating system. If you do not need the high speed of an SSD, you may be able to get more storage space for the money by going with a spinning disk. 7200RPM drives are fast enough for most applications, though you may run into issues streaming 4K (or higher) resolution video.

    Another option would be to start building out a storage pool using some type of RAID technology. On my own server, I use ZFS for the data partition. It is basically a software RAID. I use a RAID-Z1 configuration, which stripes the data over multiple disks (three in my case) and uses a parity calculation to provide data redundancy. It also has the advantage that it can be expanded to new disks dynamically and does not require that all disks are the same size. Initial setup does require more work and you are now monitoring multiple physical disks, but having a unified storage pool and redundancy is a nice way to go.
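
    Monitoring those disks mostly comes down to checking the pool state every so often (or alerting on it):

    zpool status -x   # prints "all pools are healthy" unless something needs attention
    zpool status -v   # full per-disk detail, including read/write/checksum errors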

    Any way you go, just make sure you have good backups. Drives fail, and sometimes even early in their life. Backblaze reports can be an interesting read when looking at drive options, as they really do put the drives through the wringer.


  • Yes, though depending on the media you are running the OS and game from, the performance could be worse than you would expect from an install on the main system media. For example, when I was testing moving over, I had Arch installed on a USB device and had some issues with I/O bandwidth. But, I also had a folder on my main storage drive to run Steam games from and this performed OK. It was formatted NTFS; so, there were some other oddities. But, it worked just fine and managed to convince me that I’d do OK under Linux. Took the plunge and I’ve been happy with the decision ever since.


  • “do any of you hate how self-hosting services like photo- or document-management systems, or even a simple rss tool, forces you to sort your stuff out, and put your decades old files in order?!”

    What is this “sort” thing you speak of? I don’t sort anything; I have NextCloud syncing my entire photos, videos and documents folders and they are just as messy as ever. Granted, I do go through my photos and videos once a year and dump them in a folder named for the year they were taken. Occasionally, I’ll go hog wild and try to sort some of a year’s photos/videos into folders named after events. Though, that hasn’t happened in a number of years. I set up NextCloud so I could have everything synced to my own server and just forget about it, not have to deal with labeling my data.

    As for bookmarks, I already keep those in folders; but, I don’t sync them. I use my desktop far more than I use my phone for web browsing. And the types of things I use my phone for (mostly recipes), I just keep bookmarked there.


  • No, if you open a terminal and run:
    sudo dmesg

    You should get a long output which is the kernel log. Assuming the crash happened recently, there may be something in the last few lines (bottom of the output) which could indicate why the process died (or was killed). Keep in mind that this is a running log; so, if it’s been a while since the crash, the entries for it may be higher up in the log. It’s often best (if you can) to trigger the problem then immediately go run the sudo dmesg command and look at the output. With luck, there will be useful logs. If not, you may need to look elsewhere.
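
    If you can trigger the crash on demand, it can also help to watch the kernel log live in a second terminal (assuming util-linux’s dmesg):

    sudo dmesg -wH   # -w follows new messages as they arrive, -H adds human-readable timestamps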




  • It depends on the environment. I’ve been in a couple of places which use Linux for various professional purposes. At one site, all systems with a network connection were required to have A/V, on-access scanning and regular system scans. So, even the Linux systems had a full A/V agent and we were in the process of rolling out EDR to all Linux based hosts when I left. That was a site where security tended to be prioritized, though much of it was also “checkbox security”. At another site, A/V didn’t really exist on Linux systems and they were basically black boxes on the network, with zero security oversight. Last I heard, that was finally starting to change and Linux hosts were getting the full A/V and EDR treatment. Though, that’s always a long process. I also see a similar level of complacency in “the cloud”. Devs spin random shit up, give it a public IP, set the VPS to a default allow and act like it’s somehow secure because, “it’s in the cloud”. Some of that will be Linux based. And in six months to a year, it’s woefully out of date, probably running software with known vulnerabilities, fully exposed to the internet and the dev who spun it up may or may not be with the company anymore. Also, since they were “agile”, the documentation for the system is filed under “lol, wut?”

    Overall, I think Linux systems are a mixed bag. For a long time, they just weren’t targeted with normal malware. And this led to a lot of complacency. Most sites I have been at have had a few Linux systems kicking about; but, because they were “one off” systems, and because of a certain sense of invulnerability, they were poorly updated and often lacked a secure baseline configuration. The whole “Linux doesn’t get malware” mantra was used to avoid security scrutiny. At the same time, Linux systems do tend to default to a more secure configuration. You’re not going to get a BlueKeep type vulnerability from a default config. Still, it’s not hard for someone who doesn’t know any better to end up with a vulnerable system. And things like ransomware, password stealers, RATs or other basic attacks often run just fine in a user context. It’s only when the attacker needs to get root that things get harder.

    In a way, I’d actually appreciate a wide scale, well publicized ransomware attack on Linux systems. First off, it would show that Linux is finally big enough for attackers to care about. Second, it would provide concrete proof as to why Linux systems should be given as much attention and centrally managed/secured in the Enterprise. I know everyone hates dealing with IT for provisioning systems, and the security software sucks balls; but, given the constant barrage of attacks, those sorts of things really are needed.


  • It depends on what your goals are.

    • Ventoy is good for having an alternate OS on a thumbdrive. Even with a USB 3 device, you may encounter I/O blocking and find this isn’t suitable as a “daily driver” OS. However, for booting something like Tails or Windows/Linux for OS-specific hardware/applications, it can be a good solution.
    • Dualbooting is a good way to “test drive” an alternate OS and also have a way to fall back to the other OS if you regularly need access to some software which only runs on that OS. This is likely to have better performance than the USB/Ventoy setup, at the cost of Windows fucking up the bootloader config from time to time.
    • Windows/Linux with a Linux/Windows VM is useful when you know what OS you want to run on a day to day basis, but have some reason to reach into the other OS on occasion and aren’t too worried about performance and hardware access in the alternate OS.

    Ultimately, it’s going to come down to what you are trying to do and why you want to run multiple operating systems. For example, my main system is running Linux. But, I want the ability to run Windows malware in a controlled sandbox (not a euphemism, I work in cybersecurity and lab some stuff for fun). So, I have KVM set up to run virtual machines, including Windows.
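
    For anyone curious, creating such a VM under KVM/libvirt is roughly one command (the name, sizes and ISO path here are placeholders):

    virt-install --name win-sandbox --memory 8192 --vcpus 4 \
      --cdrom /path/to/windows.iso --disk size=80 --os-variant win10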

    For another example, prior to making the switch to Linux, I had Windows as my primary OS and booted Linux on a USB stick (not Ventoy, but close enough). This let me gain confidence that I would be able to make the jump.

    I don’t have a good example for dual booting. Maybe something like a SteamDeck where you want a stable, functional OS most of the time; but, have some games which will only run in Windows.


  • “It makes little sense why it works on an offsite WiFi, but not mobile data.”

    I’d agree with unbuckled above: it’s a DNS issue. If your mobile device is capable, use nslookup or dig to see what responses you are getting in different scenarios. It’s possible that your VPN software is leaking DNS queries out to the mobile data provider’s DNS servers while you are on mobile data and only using the correct DNS settings when you are on wifi. Possibly look for split-tunnel settings in the VPN software, as those can create this type of situation.
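
    For example, with dig (the host name is just a stand-in for something only your pihole can resolve):

    dig internal.yourdomain.tld
    # The ";; SERVER:" line near the bottom shows which resolver actually answered.
    # Over the VPN it should be the pihole's address; if it's the carrier's resolver, DNS is leaking.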

    You can also confirm this from the pihole side. Connect to the VPN via mobile data and browse to some website you don’t use often, but which is not your own internal stuff. Then open the query log on your pihole and see if that domain shows up. I’d put money on that query not showing in the pihole query log.