The GetDPI Photography Forum


DPR: 907X and CFV II 50C sample gallery and impressions

docholliday

Well-known member
I've been using PS since 1994 and both C1 and LR since late 2006/2007. Phocus isn't bad at all and neither is C1. I use both, as well as LR. Of the three, LR is the chunkiest, the slowest performer, and won't handle high-end hardware for crap. Despite having 32 cores & 256GB of RAM in one box and 64 cores & 512GB in another, it only wants to use 8 cores and is still chunky. Despite all-NVMe drives and large SAS arrays backed by 64GB of on-controller, line-rate cache, LR reads slowly. And even with Quadro P5000s in the box, LR can't redraw without glitching or corrupting tiles.

Phocus and C1, on the other hand, take full advantage of the hardware and run smooth as butter. Photoshop has absolutely no lag or glitching. Adobe has barely advanced LR's code, just added features. On the same computer, PS can open a 12GB file in under 2s, but LR can't render changes on a 24MP file without jerking. None of the UIs are unusual for any of these programs, if one thinks about how a wet darkroom workflow and its tools work and applies that to the process. LR stutters and lags with 100MP files, whereas C1 and Phocus handle the updates smoothly. The Heal brush is a laggy mess, but so is C1's equivalent. There's no substitute for PS when it comes to fine editing of the image.

Tethering in Phocus and C1 is 1000% more functional and doesn't suffer the mid-shoot dropouts that LR does, which force a full restart of LR. In fact, the most effective way to tether in LR is to not use LR for tethering at all: use the OEM's tether utility, a third-party tether, or FTP from camera to a watched folder and let LR pick up files from there.
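
If anyone wants to rig the watched-folder route without extra software, a trivial relay script is enough. This is only a sketch; the folder paths are placeholders, and it assumes the camera (or its FTP server) drops finished files into DROP while LR's auto-import watches INBOX:

    # Minimal watched-folder relay: move each finished file from the FTP
    # drop folder into the folder Lightroom's auto-import is watching.
    import shutil
    import time
    from pathlib import Path

    DROP = Path(r"C:\tether\ftp_drop")   # placeholder: where the camera FTPs
    INBOX = Path(r"C:\tether\lr_inbox")  # placeholder: LR auto-import folder

    def is_stable(f: Path, wait: float = 1.0) -> bool:
        """Treat a file as fully written once its size stops changing."""
        size = f.stat().st_size
        time.sleep(wait)
        return f.stat().st_size == size

    while True:
        for f in DROP.glob("*"):
            if f.is_file() and is_stable(f):
                shutil.move(str(f), str(INBOX / f.name))  # hand off to LR
        time.sleep(2)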

Printing in anything but PS sucks, with weird color shifts and other oddities that pop up at the worst times. So I quit doing that altogether. I'll just export a PSD (or TIFF) and print from PS, where it does exactly what it should and actually reads complex color profiles correctly.

While LR's real shining star is the UI flow, there are two things that C1 does do better: the ability to build custom panels, and working with 4 monitors. LR doesn't even work right with 2 monitors, as it starts to lag like crazy when the secondary preview is enabled.

Yet, each tool has its purpose. If I'm doing quick, non-critical editing, I'll use LR with a MIDI controller for speed. For tethered work, it's Phocus or C1, depending on which camera is being used. For "real" touchup, final edits, or (especially) printing, it's PS.
 

SrMphoto

Well-known member
I cannot reproduce your issues with LrC, but I am using a Mac. I do occasionally have issues with PS, though. Which LR version are you using?
 

Godfrey

Well-known member
LR Classic (latest version) on my 6-core Mac mini with 32GB RAM, a 1TB boot drive, and a 5TB working drive connected via USB 3 shows none of those problems at all. It's smooth and slick and outputs 50-megapixel files very rapidly.

That description sounds to me like an old version of LR running in a computer environment that it isn't configured for.

G
 

docholliday

Well-known member
Nope, newest version. It's better than it was in LR 7 and 8, but still a chunky POS: 250-500ms of lag between a slider change and the update, with no more than 8 cores moving. The render engine is the issue; it does use all 32 or 64 cores when exporting. Funny that InDesign, Illustrator, Photoshop (even ACR), Premiere, C1, Visual Studio, AutoCAD, and Resolve all happily work with every core. It's just LR's render engine that won't.
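
This is easy to watch from outside a debugger, too. A few lines of Python with psutil show the effective core count while you drag a slider; the process name "Lightroom.exe" is an assumption, and cpu_percent() is summed across cores, so ~800% means roughly eight busy cores:

    # Rough effective-core meter for a running process.
    import time
    import psutil

    proc = next(p for p in psutil.process_iter(["name"])
                if p.info["name"] == "Lightroom.exe")

    proc.cpu_percent()                # prime the counter
    while True:
        time.sleep(1)
        pct = proc.cpu_percent()      # summed over all cores, may exceed 100
        print(f"~{pct / 100:.1f} cores busy, {proc.num_threads()} threads")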
 

glennedens

Active member
I'm not having those problems at all with X1D II, CFV II 50C, or H6D-100c files. In my environment LR Classic typically uses 8 cores, and it uses the GPU for rendering/adjustments (I can watch GPU usage skyrocket during adjustments). My sliders are smooth and real-time within my perception window, i.e., it appears real-time. My system is macOS on a 2019 Mac Pro with 2x 5K displays and 2x AMD Radeon Pro Vega II Duo 32GB (although LR Classic only seems to use one of the GPUs).

I did have a problem back in December where LR Classic all of a sudden got very, very sluggish (it was just after I moved to the new Mac Pro). The solution was to rename (or delete) a prefs file: User -> Library -> Preferences -> com.adobe.LightroomClassicCC7.plist. Quit LR Classic and rename the file to something like "old.com.adobe........", or delete it if you are feeling lucky. Restart LR Classic and it was zippy once again. LR Classic recreates that file when it starts up. A few settings get reset, so, for example, I had to re-enable the Identity Plate, second display, etc. While the prefs file name says LR v7, it is actually used by the latest version of LR; they just didn't change the name, sigh.
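
For anyone who'd rather script the workaround, the fix amounts to this little rename (quit LR Classic first; the path is the macOS one above, and LR rebuilds the file on next launch):

    # Rename the LR Classic prefs file so it gets recreated on next launch.
    from pathlib import Path

    prefs = (Path.home() / "Library" / "Preferences"
             / "com.adobe.LightroomClassicCC7.plist")
    if prefs.exists():
        prefs.rename(prefs.with_name("old." + prefs.name))
        print("Renamed; restart LR Classic to rebuild preferences.")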

Doc, are you running on a Windows PC or a Mac? It certainly sounds like abnormal behavior.

Kind regards, Glenn
 

SrMphoto

Well-known member
It seems that something is funky with your setup. You could contact Adobe support, because the behavior you observe is atypical.
When I process 180MP files from a Leica SL2, there is absolutely no lag on slider changes.
 

docholliday

Well-known member
It's Windows...no Apple products here. The prefs file does nothing, as the box was recently reinstalled (a driver I was writing killed the OS beyond recovery). It's not a fluke, it's a design flaw. Another funny point: I also write code that handles very large 3D radiographs (tomography), and my rendering is dead smooth despite the massive amounts of data. My code runs on both Direct3D 12 and OpenGL, with full SIMD vector handling, and I can see from the debugger, when hooked to LR, that it has some sort of artificial limiter preventing it from scaling up like it should. It's just LR that doesn't *scale* well with high-end or large-scale hardware. ACR does, so it's obviously the LR version of the engine that is behind.

The main box is running Win10 2004 with 256GB DDR4 ECC on 2x Xeon E5-2698 v3 and 4x Quadro P5000 driving 4x 27" monitors, with 4x 1TB NVMe, an 8TB SAS SSD RAID 50 with cache for working data, and a 40TB 15K SAS RAID for warm data. All drives move at least 10GB/s across the pipe. It shouldn't have any issues running anything...and it doesn't, except for LR. Even AutoCAD renders massive layered files in 3D fluidly without a hint of lag.

I've posted support requests to Adobe with debugger logs and other developer-level data. They've always just responded with either 1) it's normal, or 2) we'll send the dev team a note.
 

glennedens

Active member
Doc, I've received the same responses from Adobe. I found the prefs file hangup via profiling traces, so I feel your pain; Adobe was no help. You clearly have more than enough horsepower!

To go back on topic (sorta): I find Phocus and LR Classic similar in performance, with an advantage for Phocus. I do use tethering in the studio, and there it is no contest: Phocus and Phocus Mobile win.
 

docholliday

Well-known member
Yeah, their support is an absolute joke. If it weren't for Photoshop, InDesign, and Illustrator, I'd probably have left their ecosystem when Dreamweaver took a back seat to CMS-style site building. They've even told me to "just use Photoshop" and that "LR is performing as best it can". Sorry, taking 1s to render one image is not "best" when I can render 250 frames of 48-bit grayscale in 1/5 the time! I even cached the catalog file on a RAM drive to see if it improved. It didn't. And that catalog was moving at 62GB/s, with the cached image data moving at 50GB/s. They couldn't explain why so much processor time was spent in a wait state...hmmm.

Anyways, yes, tethering with C1 and Phocus is amazing compared to LR. And Phocus Mobile is so much better than Capture Pilot performance-wise, except that C1 allows remote viewing/scoring with anything web-based, not just Apple stuff, and that is nice when a client is viewing on a 40" touch monitor away from the set!
 

mristuccia

Well-known member
Hello Doc,

I have a Hackintosh (Mojave 10.14.6) based on an old i7-4790K CPU (4 physical cores, which turn into 8 virtual threads with HT on) at 4.5GHz, an AMD Vega 64 GPU, and 32GB RAM. With this hardware, things like "taking 1s to render one image" are AUs (astronomical units) away from my experience. I suspect there is something wrong in the compatibility of LrC with your environment.

I'm sure you've already done this, but did you enable all possible GPU acceleration in the Preferences => Performance menu? (See my settings in the attached screenshot.)
Moreover, it is also possible that the issue lies in the oversized hardware you're using. Did you try setting the CPU affinity to only 4 or 8 cores for LrC, thus simulating an undersized environment?
Just some brainstorming here, trying to find the reason for your "personal" issues.

[Attached: Screenshot 2020-09-14 at 11.01.20.jpg]
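
If you want to try the affinity experiment without touching Task Manager, a couple of lines of Python with psutil will do it; the process name "Lightroom.exe" is my assumption:

    # Pin LrC to the first 4 logical cores to simulate a smaller machine
    # (works on Windows and Linux).
    import psutil

    proc = next(p for p in psutil.process_iter(["name"])
                if p.info["name"] == "Lightroom.exe")

    proc.cpu_affinity([0, 1, 2, 3])   # restrict to 4 logical cores
    print(proc.cpu_affinity())        # verify the new mask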
 

docholliday

Well-known member
Nope, it doesn't work that way. There's some artificial barrier in the code that's restricting core usage. Set affinity to 4 cores and it runs on 4 cores. Set it to 8 and it runs on 8. Set it to anything higher and it still runs on 8. GPU acceleration has nothing to do with the threads of the process; it enables or disables GPU-based (CUDA or OpenCL) math processing.

And "could be also possible that the issue lies in the oversized hardware you're using" is the problem. It's the failure of the software to scale properly with hardware. Being that the core of LR and ACR is written in C++, it should scale as long as the process design doesn't cause a lot of context switches, which it seems to do. Those switches cause cores to go into a wait state. When exporting, where each thread is assigned one image to process, it works fine since that is i/o based and won't cause a lot of context switches in the process.

ACR uses all cores, C1 uses all cores, Phocus uses all cores, etc., so it has something to do with either 1) the LR develop engine working in some kludgy way, or 2) the UI context holding the process's threads back. Either way, C1 and Phocus operate in real time and scale according to the hardware they run on.
 

FloatingLens

Well-known member
And "could be also possible that the issue lies in the oversized hardware you're using" is the problem. It's the failure of the software to scale properly with hardware. Being that the core of LR and ACR is written in C++, it should scale as long as the process design doesn't cause a lot of context switches, which it seems to do. Those switches cause cores to go into a wait state. When exporting, where each thread is assigned one image to process, it works fine since that is i/o based and won't cause a lot of context switches in the process.
It sounds to me like your process is I/O-bound, whereas it should be compute-bound judging from the description of your use case. That is not necessarily a matter of too many context switches.
 

docholliday

Well-known member
It sounds like that to me too. However, it's not the hardware causing it, as the LR processes barely move past 5-10% I/O usage. And yes, there's a lot of context switching going on too; the debugger shows that clear as day.
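
If anyone wants to check the same number on their own box, sampling the process's I/O counters is enough. A sketch, with the process name assumed as before:

    # Sample a process's disk traffic over a few seconds (Windows/Linux).
    import time
    import psutil

    proc = next(p for p in psutil.process_iter(["name"])
                if p.info["name"] == "Lightroom.exe")

    before = proc.io_counters()
    time.sleep(5)
    after = proc.io_counters()
    print(f"read {(after.read_bytes - before.read_bytes) / 1e6:.1f} MB, "
          f"wrote {(after.write_bytes - before.write_bytes) / 1e6:.1f} MB in 5s")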

Anyways, my post wasn't about trying to "fix this", as I've given up on LR as a serious studio tool. After numerous logs and attempts to get useful info to Adobe, they don't seem to care and just keep selling. I've got my own software to write and jobs to shoot. C1+PS or Phocus+PS does the job properly, and that's enough of my time invested. It's not my codebase to maintain, nor my customers to make happy!
 

Godfrey

Well-known member
Well, obviously something is wrong somewhere in the software, whether it's driver interactions with the LR libraries, the installation, etc. If it were my system and I were interested enough, I'd simplify everything to a clean slate, do a fresh installation, and see what happened. You have such a complicated system, with so many displays and so many other graphics packages installed, that any simple conjecture about what might be the problem is mostly worthless.

But if you have already arrived at a working setup and you're happy with it, why worry or complain about it? That's just a waste of energy and time.

I'm not particularly happy with Adobe's policies and practices in recent years either. At the present moment, on my system and for my purposes, it is the best-operating software I have for this stuff. I don't have anything other than Lightroom Classic and DNG Converter from Adobe installed on my system, along with the required ecosystem nonsense, and when I find something else that does what I want with similar ease and facility, I'll dump LR too.

The more I work with Phocus (and with Photos supplemented by RAW Power), the more inclined I am to move my processing workflow away from LR and Adobe entirely. But there are a couple of specific things (in the printing area) that LR does better than anything else for me. The time is coming... :)

G
 

mristuccia

Well-known member
The point is, even with 4 physical cores my LrC isn't having your issues. That means good LrC performance doesn't necessarily depend on the number of CPU cores in use. Of course, having a lot of cores and being able to use them is an advantage, but here the problem seems to be different.
 

docholliday

Well-known member
It did the same on a fresh install, as the first program installed, and that *is* the base hardware! I typically install with all drive clusters and their controllers out of the box, and only one video card plus one 10G NIC enabled. The first thing I tried was installing on a fresh, blank drive, which was easy since this box has no internal drives, only hot-swap bays. As a dev with a hardware/electronics background, I have the tools and the background to do the non-consumer, detailed investigation, but this simply came down to one of those "can't fix it from the user side with speculation" moments, and the resulting instrumentation shows that it's not on this end.

The only thing LR truly did well was the organization system, not the editing side. But that's all habit anyway. C1's organization felt weird at first, but only because I'd gotten used to LR for cataloging; C1 had horrible organization (or a total lack of it) in its early days. Phocus, to me, sits somewhere between C1 and LR as a whole, better in some areas, lacking in others. But I don't think Phocus is meant to be anything other than a RAW editor dedicated to one platform, with the extra features simply icing on the cake, and above all, it's free. It's like Canon and DPP: you really can't complain, because it didn't cost you anything and at least the OEM gave you useful tools to make the most of the hardware you purchased.
 

docholliday

Well-known member
That's the definition of not scaling well, which is the problem with LR. :)
 

Godfrey

Well-known member
Well, whatever. It isn't working for you, so I wouldn't waste the energy on it if you have already found other solutions that do. It works beautifully for me and for most others I know, so eh? Who knows?

G
 

SrMphoto

Well-known member
LrC scales fine with 8 cores/16 threads (e.g., when building previews). Out of curiosity: for which LrC operations do you need many cores?
With 8 active cores (or fewer), you should not see any delay with the sliders. If you are in any way using Adobe DNG Converter, make sure it is updated, as it is not part of Creative Cloud.
 

mristuccia

Well-known member
Not scaling well only matters if there is a need to scale, and that is not the case here.
The current LrC version does not need to scale beyond 8 cores; develop and the sliders are already real-time with 4 cores.
So your performance problem lies elsewhere, and scaling up will probably not solve it.
 