Wow, those are big networks. I suppose in the case of AWS it doesn't matter, as no human visitor (except maybe someone on a VPN?) will come from there.
As someone who bans /32 IPs only, is the main advantage lower resource consumption?
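For what it's worth, if the banning happens at the firewall, one prefix entry can stand in for thousands of /32 entries. A minimal nftables sketch (table, chain and set names are made up, and 203.0.113.0/24 is just a placeholder range):

nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0 ; policy accept ; }'
nft add set inet filter banned '{ type ipv4_addr ; flags interval ; }'
nft add rule inet filter input ip saddr @banned drop
nft add element inet filter banned '{ 203.0.113.0/24 }'   # one element instead of 256 separate /32s

Whether that translates into a noticeable resource saving depends on how your ban tool stores entries, but fewer entries also means fewer bans to manage when addresses get reassigned.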
I presume you mean running Plex in the host network namespace. I don't do that, as I run the Synology package, but I can totally see the issue you mean.
Running in the host namespace is bad, not terrible, especially because my NAS is on a separate VLAN, so besides being able to reach the NAS's other local services, it cannot do much. Much, much less risk than exposing the service on the internet (which I also don't do).
Also, none of this is a problem for me, since I don't use remote streaming at all, which is also why I am experimenting with Jellyfin. If I did use it, though, I would have only two options: expose Jellyfin on the internet, maybe with some hacky IP whitelist, or expect my mom to understand VPNs for her TV.
(which doesn’t harden security as much as you think)
It would be nice if you elaborated on this. I think it reduces a lot of risk compared to exposing the service publicly. Any vulnerability in the software can't be directly exploited because the Plex server is not reachable; you need an intermediate point of compromise. Maybe Plex's infra can be exploited, but that's a massively different type of attack compared to the opportunistic, no-cost "run Shodan to find exposed Plex instances" attack.
No, that's the thing: Plex can also use their infra as a tunneling system. You can have remote streaming without exposing Plex publicly and without a VPN. It is slow, though.
Well, as an application it has a huge attack surface, it's also able to download stuff from the internet (e.g., subtitles), and many people run it on a NAS. I run Jellyfin in Docker. I haven't done a security assessment yet, but it certainly needs volume mounts, and I'm not sure what capabilities it runs with (surely NET_BIND_SERVICE, and I think DAC_READ_SEARCH to avoid file ownership issues with downloaders?). Either way, I would never expose a service like that on the internet.
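If you'd rather check than guess, Docker will tell you what a running container was actually granted. A small sketch, assuming the container is simply named jellyfin:

docker inspect --format '{{.HostConfig.CapAdd}} {{.HostConfig.CapDrop}}' jellyfin   # caps explicitly added/dropped at run time
docker exec jellyfin grep CapEff /proc/1/status   # effective capability mask of the container's main process
capsh --decode=<mask from the previous command>   # turn the mask into names; needs libcap's capsh on the host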
Not to be "achtually"-ing, but a VPN is not a way to remote stream; it's a way to bring remote clients into the local network.
Likewise, exposing services on the internet… not really going to happen, especially for people - like me - that run Plex/Jellyfin on their NAS.
I don't have a horse in this race, I don't use remote streaming, I only ever streamed from my NAS to my 2 TVs, and I am experimenting with Jellyfin. But for those who do need remote streaming, Jellyfin is going to be problematic.
Run it with sudo in case you don’t see the process name with the above command.
sudo ss -patln | grep 443   # -p owning process, -a all sockets, -t TCP, -l listening, -n numeric ports
I like the idea of canaries in documents, I think it's a good point, but obviously it only applies to certain types of data. Still a good idea.
Looking at OP, they seem to be a small shop with a limited budget. Seriously, the best recommendation I think is to use some kind of remote storage for the data (works as long as the employee complies) and to make sure access control is done in a decent way (reducing the blast radius of an employee behaving maliciously). Anything else is probably out of reach for a small company without a security department.
Maybe I sounded too harsh, but that's just because in this post I have seen all kinds of comments that completely missed the point (IMHO) and suggested super complicated technical implementations, which shows how disconnected some people can be from real technical operations, despite good tech skills.
DLP solutions are honestly a joke. 99% of the time they only cost you a fortune and prevent nothing. DLP is literally a corporate religion.
What you mentioned also makes sense if you are a Windows shop running AD. If you are not, setting it up to lock down one workstation is insane.
Also, the moment the data gets put on the workstation, you have already failed. Blocking USB is still a good idea, but it does very little (network exfiltration is trivial, including with DLP solutions). So the idea of using a machine remotely is a decent control, and all efforts and resources should go into preventing data from leaving that machine. Obviously even this is imperfect, because if I can see the data on my screen I can take a picture and OCR it. So the effort needs to go into ensuring the data is accessed on a need-to-know basis.
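For reference, blocking USB mass storage on a Linux workstation is roughly a one-liner. Just a sketch: it doesn't stop anyone with root, and it doesn't cover things like phones in MTP mode:

echo 'install usb-storage /bin/false' | sudo tee /etc/modprobe.d/block-usb-storage.conf   # make modprobe run /bin/false instead of loading usb-storage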
Jamf doesn't do anything for this problem, besides costing you a fortune in both licensing and maintenance/operation. Especially if you are not a Mac shop.
MDM can at most be used as a reactive tool to do something on the machine, and only as long as whoever has the machine in their hands leaves the network connection on.
There are much cheaper solutions to do that for one machine, and - as others correctly pointed out - the only (partial) solution here is not storing the data on a machine you don't control. Period.
Yeah, that's what I wrote too, but that is still a very fragile approach. For one, you depend on the network connection, on the local firewall not blocking you, etc.
Reactive, on-demand SSH is something you can do for tech support, not for security, IMHO.
Disk encryption is a control against lost or stolen devices and malicious physical access (kinda). Storing the data elsewhere is more a control (or the basis for controls) against malicious insiders.
Your ability to SSH into the machine depends on network connectivity. Knowing the IP does nothing if the SSH port is not forwarded by the router, or if you don't establish a reverse tunnel yourself via a public host. As a company you can make changes to the client device, but you can't make them on the employee's network (and they might not even be connected there). So the only option is to have the machine establish a reverse tunnel, and this removes even the need for dynamic DNS (which also might not work with certain ISPs).
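As a sketch of what that reverse tunnel looks like (hostnames, users and ports are made up):

# on the employee laptop, keep a tunnel up towards a company-controlled jump host:
autossh -M 0 -N -R 2222:localhost:22 tunnel@jump.example.com
# on jump.example.com, the company can then reach the laptop's sshd:
ssh -p 2222 employee@localhost

autossh just restarts the tunnel when the connection drops; a plain ssh -N -R works too, it simply won't reconnect on its own.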
The no-sudo setup is also easier said than done: it means you will need to assist every time the employee needs a new package installed, you need to set up unattended upgrades, and of course help with debugging should something break. Depending on the job type, this might be possible.
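The unattended-upgrades part at least is cheap, assuming a Debian/Ubuntu based laptop:

sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades   # turns on the periodic security-update job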
I still think this approach (locking down the laptop) is an old, ineffective one (vs. zero trust + remote data).
Useful for standardized management of fleets, but it requires personnel to maintain and configure it, and I don't think it's very effective (or feasible - I doubt they would even join the call for a one-device contract) for what OP needs.
This is honestly an extremely expensive (in terms of skills, maintenance, and chances of messing up) solution for a small shop, and it doesn't mitigate the threats posed at all.
You said it correctly: the employee has the final word on what happens to the data appearing on their screen. Especially in the case of client data (i.e., a few, sensitive pieces of data), it might even be possible to take pictures of the screen (or retype it manually), and all the time invested in (imperfect) solutions to restrict drives and network access (essentially impossible unless you have a whitelist of IPs/URLs) goes out the window too.
To me it seems this problem is simply being approached from the wrong angle: once the data is on a machine you don't trust, it's gone. It's not just the employee, it's anybody who compromises that workstation or accesses it while it's left unlocked. The only approach to solving OP's issue is simply avoiding having the data stored on the machine in the first place, and making sure that access is granted only to the data actually needed.
Data should be stored on company-controlled infrastructure (be it cloud storage, object storage, a privileged-access workstation, etc.) and controls should be applied there (i.e., monitoring data transfers, network controls, etc.). This solves both the availability concerns (what if the laptop gets stolen, or breaks?) and some of the security concerns. The employee will need to authenticate each time with a short-lived token to access the data, which means revoking access is also easy.
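As a rough sketch of that short-lived access, assuming S3-compatible object storage (bucket and key names are made up):

aws s3 presign s3://client-data/case-1234/report.pdf --expires-in 900   # link stops working after 15 minutes

Pretty much every object store has an equivalent; the point is that access is granted per request and expires on its own.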
This still does not solve the fundamental problem: if the employee can see the data, they can take it. There is nothing that can be done about this, besides ensuring that the data is minimised and the employee only has access to what's strictly needed.
For the browser, there is a web app that can be self-hosted. See here: https://github.com/logseq/logseq/blob/master/docs/docker-web-app-guide.md
I think you need Chromium-based browsers due to the API they use, but it should work.
Object storage is anyway something I prefer over their app. Restic (or rustic) does the backup client-side; B2 or any other storage just holds the data. This way you also have no vendor lock-in.
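A minimal restic-to-B2 sketch (bucket name, credentials and the local path are placeholders):

export B2_ACCOUNT_ID=...          # B2 application key ID
export B2_ACCOUNT_KEY=...         # B2 application key
export RESTIC_PASSWORD=...        # repository encryption password
restic -r b2:my-logseq-backups:graph init              # one-time repository setup
restic -r b2:my-logseq-backups:graph backup ~/logseq   # encrypted, deduplicated snapshot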