VMs won’t do for long, because you won’t have the GPU acceleration that gfx apps like Lightroom require. Sure, they’ll work, but you’ll experience slowdowns. You can run accelerated VMs, but I find them buggy.
If you’re going to dual boot, install Linux on a separate DRIVE, not just a partition, and put the bootloader on that second drive. You can force the installer to do that by disabling the Windows drive in the BIOS before installation, then re-enabling it afterwards. From then on you choose what to boot from the firmware boot menu (usually F12) at startup. If you put both systems on the same drive, Windows will eventually overwrite the bootloader.
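If you want to double-check where each loader ended up after re-enabling the Windows drive, something like this works (a rough sketch, assuming Linux booted in EFI mode with efibootmgr installed; run it as root):

```python
# Rough sketch: list the firmware boot entries so you can verify the
# Linux bootloader sits on its own drive and Windows kept its own entry.
# Assumes an EFI system with efibootmgr installed.
import re
import subprocess

out = subprocess.run(["efibootmgr", "-v"], capture_output=True, text=True, check=True)
for line in out.stdout.splitlines():
    # Entries look like: "Boot0001* Windows Boot Manager  HD(1,GPT,<guid>,...)"
    # The partition GUID shows which physical disk each loader lives on.
    if re.match(r"Boot[0-9A-Fa-f]{4}", line):
        print(line)
```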
The ideal thing is to actually move to Darktable. https://mathiashueber.com/migrate-from-lightroom-to-open-source-alternative/
GPU passthrough with VFIO is literally how services like GeForce Now and PSN streaming work. It’s not too buggy if your system is set up properly, and it’s far better than dual booting.
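The usual first sanity check before setting up VFIO is your IOMMU groups. A rough sketch, assuming a Linux host with IOMMU enabled (intel_iommu=on or amd_iommu=on on the kernel command line); the GPU and its audio function should land in a group you can hand over as a whole:

```python
# Print each IOMMU group and the PCI devices in it, straight from sysfs.
# If nothing prints, IOMMU is probably not enabled in firmware/kernel.
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    for dev in sorted((group / "devices").iterdir()):
        # dev.name is the PCI address, e.g. 0000:01:00.0 (your GPU)
        print(f"group {group.name}: {dev.name}")
```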
Password-protect your BIOS. That’s the only way I found to keep Windows from messing it up on my machine, seriously.
A single-GPU passthrough VM works flawlessly if you take the time to set it up.
Be prepared to commit some time to the transition though. I installed darktable and gave it a try a couple days ago. There’s enough similarity to make you think it’ll be an easy transition, but it sure wasn’t for me. It took me an hour to do something that would have taken 5 mins in Lightroom. I’m glad to have it, and it seems like a powerful tool, so I’m not complaining, just sayin… be prepared to commit some time to learning where everything is.
Darktable doesn’t have AI denoise, and it also doesn’t have camera profiles for Fuji RAF files, just off the top of my head.
You can do manual denoise, and also, our two Fuji cameras work with Darktable just fine.
Yes, it does not have ML denoise, but there are very good reasons why you don’t want that in your raw pipeline. After raw development it’s fine, but denoise in a raw pipeline needs to maximise the signal-to-noise ratio of what the sensor actually captured. Machine-learning denoising introduces hallucinations, which are not real signal, and that’s why it’s best kept away from raw files.
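To make the signal-to-noise point concrete, a toy sketch (my own illustration, nothing to do with darktable’s actual algorithms): a classical denoiser only averages the samples the sensor actually measured, so nothing in its output is invented, and the SNR gain is directly measurable:

```python
# Toy demo: classical (non-ML) denoising improves SNR using only
# measured data; no detail is synthesised.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
signal = np.linspace(0.1, 1.0, 10_000)        # idealised raw values
noisy = rng.poisson(signal * 1000) / 1000.0   # shot noise, as on a sensor
denoised = gaussian_filter(noisy, sigma=5)    # simple non-ML denoise

def snr_db(estimate):
    err = estimate - signal
    return 10 * np.log10(np.mean(signal**2) / np.mean(err**2))

print(f"SNR noisy:    {snr_db(noisy):.1f} dB")
print(f"SNR denoised: {snr_db(denoised):.1f} dB")
```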
Well, yes, some camera-specific support is missing, such as Fujifilm look-up tables, but it’s still the best raw editor I have used in my entire life, and I can highly recommend it.
I will try it based on your second paragraph.
Are you sure the pipeline works that way? I know what you mean, and it would seem like a huge oversight on their part to apply the denoise before the other edits. I would assume that increasing exposure, for example, would ignore the applied denoise rather than apply it on top? If that wording makes any sense.
Regardless, I’ve used it to rescue photos I took on a Nexus 4 over a decade ago, making them look like proper photos, and I find the feature so useful that it’s irreplaceable to me.
The other feature is AI content-aware fill. In darktable, can you circle a piece of garbage on the ground and effortlessly remove it, or do you have to do manual clone stamping, etc.?
Recently a friend requested an album cover from a 3:2 image that I needed to expand to 1:1. Can you tell that the left and right edges of this are not real? I don’t think I would have been skilled enough to pull this off without AI tools. https://f4.bcbits.com/img/a1356058193_10.jpg
Please understand me correctly: machine learning does have its uses in image editing, but not in raw development. Sure, VFX are fine, but the goal of raw development is to make the files the camera put out look as good as possible, and for that machine learning is inadequate, because, again, hallucinations and other defects. Once you have processed your image with the raw development software, sure: machine-learning denoising, expansion, and other VFX will work. But my recommendation is to keep it out of raw development, not for purist reasons, but because there’s genuinely no reason to use it there: we need precise results from the raw file first, and then we can add our VFX on top of that.
As for Darktable’s raw pipeline, this is one of its greatest features: you can freely reconfigure it to suit your needs. By default it uses a scene-referred workflow, and you should really stick with that. But if you’re an advanced user, you can shuffle the modules around as you like, just like in DaVinci Resolve.
Edit: for beginners, really stick with the modules you’ll find under the different headers. Also, work on your modules from bottom to top, as that is the processing order.
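If the ordering bit sounds abstract, here’s a toy sketch (my own code, not Darktable’s actual pixelpipe) of why module order matters: each “module” is just a function, the pipeline applies them in sequence, and reordering them changes the result.

```python
# Toy model of a reorderable pixel pipeline (not Darktable's real code).
import numpy as np

def exposure(img, ev=1.0):
    return img * 2.0 ** ev        # scene-referred exposure: a plain multiply

def filmic(img):
    return img / (1.0 + img)      # toy tone mapper squeezing into display range

def run(pipeline, img):
    for module in pipeline:       # first module = "bottom" of the UI stack
        img = module(img)
    return img

img = np.array([0.05, 0.18, 0.5])
print(run([exposure, filmic], img))  # exposure before tone mapping
print(run([filmic, exposure], img))  # reordered: a different result
```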