The GetDPI Photography Forum


Hasselblad: Phocus Or Lightroom

Godfrey

Well-known member
Thanks for the lesson. I've got a rough idea how this stuff works. The difference in clipping indication between the camera and Phocus is huge (at least 0.5 EV). So to take full advantage of the range I try to ETTR, but with the X1D this involves far more guesswork and crossed fingers than with far more modest cameras. So I end up constantly underexposing.

At least that's my experience.
Following on...

Do be aware that "ETTR" is a rather simplistic rule of thumb that mattered most at the beginning of the digital capture era in photography, when pixel counts were low and sensor dynamic range was more limited than now. The idea is this: a digital capture sensor is a linear device, while the eye's perception of light and shadow is not linear; it more naturally follows a logarithmic scale. So when you take the linear input from a digital capture sensor and apply a gamma curve to render perceived differences in light and shadow as the eye needs to see them, an exposure placed evenly across the sensor's dynamic range loses a good bit of the data to the correction, more of it in the upper half of the range than the lower half. It's therefore better to bias the exposure to put more data in the upper part of the range, so less is lost upon correction. You should still never hit saturation, because all tonal distinction is lost there.
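To make the arithmetic concrete, here's a little sketch (my own illustration, with an assumed bit depth, not anything from a specific camera) of how a linear raw encoding distributes its code values: each stop down from saturation holds half as many values as the stop above it, which is the whole numerical case for biasing exposure upward.

```python
# Illustration: code values available per stop in a linear raw file.
# The bit depth is an assumption chosen for the example.
BIT_DEPTH = 12                 # typical of early digital cameras
levels = 2 ** BIT_DEPTH        # 4096 code values in total

for stop in range(1, 7):
    # The Nth stop below saturation spans half the values of the one above.
    count = levels // 2 ** stop
    print(f"stop {stop} below saturation: {count} code values")

# stop 1 below saturation: 2048 code values
# stop 2 below saturation: 1024 code values
# stop 3 below saturation: 512 code values
# ...
# The single brightest stop holds half of all distinguishable values,
# so placing the exposure higher preserves more tonal data to survive
# the gamma correction applied during rendering.
```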

With today's much higher resolution, much greater dynamic range (14-15 bit depth!) sensors, you have far more choice as to what the "best exposure" for a given scene is, with tons of overhead at both ends of the dynamic range to work with. Correct exposure is not just exposing as much as possible without going to saturation ... you have choice in the matter, and there are subtleties in the relationships of tonal values that you can choose beyond the simple mantra of "ETTR".

In the little test example I did this morning to write up my calibration test clearly, the best match in the histogram of a tri-tone target came at 0.6 EV more exposure than the hand-held and internal meters indicated. Checking the raw file in Lightroom, that's still FAR from the saturation limits that ETTR implies ... I could take the +0.6 EV or even a +1.2 EV exposure into the Develop module and add 2.5 EV MORE exposure before seeing a saturation flag pop up, thanks to the raw data carrying a full 14 bits per component.

G
 

SrMphoto

Well-known member
Thanks for the lesson. I've got a rough idea how this stuff works. The difference in clipping indication between the camera and Phocus is huge (at least 0.5 EV). So to take full advantage of the range I try to ETTR, but with the X1D this involves far more guesswork and crossed fingers than with far more modest cameras. So I end up constantly underexposing.

At least that's my experience.
The clipping and histogram that you see in most cameras are based on the JPG version of the image, not on the RAW data. That is why guesswork is often involved when doing ETTR.
Getting the most out of the sensor is nice, but preserving highlights is more important. In problematic cases, I bracket, just in case.
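If you want to check this on your own files, here is a rough sketch of comparing raw-level clipping against the rendered image. It assumes the rawpy package (a LibRaw wrapper); the file name is just a placeholder.

```python
# Sketch: compare clipping in the raw data vs. the rendered image.
# Assumes `pip install rawpy numpy`; "shot.fff" is a placeholder path.
import numpy as np
import rawpy

with rawpy.imread("shot.fff") as raw:
    data = raw.raw_image.astype(np.float64)   # raw sensor values
    white = raw.white_level                   # saturation code value
    raw_clipped = np.mean(data >= white) * 100
    print(f"raw pixels at saturation: {raw_clipped:.3f}%")

    # The rendered preview applies white balance and a tone curve,
    # so it typically shows clipping well before the raw data does.
    rgb = raw.postprocess()                   # 8-bit rendered image
    jpg_clipped = np.mean(rgb >= 255) * 100
    print(f"rendered pixels clipped:  {jpg_clipped:.3f}%")
```

The gap between those two numbers is the guesswork margin the in-camera histogram hides.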
 

docholliday

Well-known member
ETTR was/is mostly useful only when the scene exceeds the capabilities of the capturing device. When the whole range from darkest to lightest *that you want detail captured in* exceeded the range of the capture device, it was logical to bias the exposure to the right (highlights). It is easier to recover blown highlights, as that is data that had something recorded and is capable of being darkened. To recover crushed shadows, which have no data recorded, the processor would have to "create" data. Usually this is done via interpolation of surrounding pixels, but since it is mathematically calculated, it can result in noise in a color image, since it is unknown where each of the color channels should truly be located.
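As a side note on the shadow half of that argument, a quick numeric sketch (mine, with illustrative values) shows the simplest part of the problem: a deep-shadow signal sits only a few code values above the noise floor, so any push that brightens it brightens the noise by the same factor.

```python
# Sketch: why lifted shadows look noisy. Values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
signal = 8.0        # mean raw value of a deep shadow, in code values
noise_sigma = 2.0   # read-noise standard deviation, in code values
patch = signal + rng.normal(0.0, noise_sigma, 10_000)

pushed = patch * 2 ** 4   # a +4 EV shadow lift in post

print(f"mean before push: {patch.mean():6.1f}, sigma {patch.std():5.1f}")
print(f"mean after push:  {pushed.mean():6.1f}, sigma {pushed.std():5.1f}")
# The signal-to-noise ratio is unchanged, but the noise is now 16x
# larger in absolute terms and sits in midtones where it is visible.
```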

The proper way to do it would be to use a 1° spot meter, measuring the EV of the darkest point one wants detail in and of the brightest highlight where detail is needed. If that fits into the range the capture sensor can record in one shot, then there is no need to ETTR or ETTL. One would test their sensor beforehand to determine what the true range is (not the "calculated" range). For most modern MF sensors, that's somewhere in the 13.5-16 EV range, which should be able to capture most scenes without bright/specular light sources properly. Even then, most bright light sources, like the sun or a direct bulb, don't need detail, so you can let them just blow out.
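That metering workflow reduces to one comparison. A tiny sketch of it, with made-up spot readings and the low end of the sensor-range estimate above:

```python
# Sketch of the spot-metering check described above. The EV readings
# and sensor range are example numbers, not measurements.

def fits_in_one_shot(shadow_ev: float, highlight_ev: float,
                     sensor_range_ev: float) -> bool:
    """True if the metered scene range fits the sensor's usable range."""
    return (highlight_ev - shadow_ev) <= sensor_range_ev

shadow_ev = 4.0      # darkest point where detail is wanted
highlight_ev = 16.5  # brightest point where detail is wanted
sensor_range = 13.5  # tested usable range, in EV

print(f"scene range: {highlight_ev - shadow_ev:.1f} EV")
if fits_in_one_shot(shadow_ev, highlight_ev, sensor_range):
    print("fits in one exposure; no need to bias toward ETTR or ETTL")
else:
    print("exceeds the sensor range; bias the exposure or bracket")
```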

Forcibly crushing the image against the right side of the histogram doesn't do much good if the image range can be captured, as you can destroy fine detail in the now-crushed areas. Once the whole range is captured, one can then "expand" the image in post to full capacity using the contrast/highlight/shadow sliders if one wishes. ETTR has its uses, but it has become such a "buzz phrase" that everybody's been spewing it as a must-do without understanding where and why it's needed.

ETTR was actually used in the film days, just not under the same name. For slide film, one had to expose carefully to not blow out highlights where detail was wanted - the film would just be clear, with no detail, and it was not recoverable if taken too far. Positive films had a very narrow range and rarely fit a full scene's dynamic range into one shot. For negative films, however, it was usually good to overexpose the scene by 2/3 stop or so, as it usually gave a bit more detail in the neg and allowed for printing with a hair more contrast and richer colors.
 

SrMphoto

Well-known member
There seems to be a lot of incorrect information about ETTR shared in this thread. However, I don't feel comfortable hijacking this thread.
 

Godfrey

Well-known member
There seems to be a lot of incorrect information about ETTR shared in this thread. However, I don't feel comfortable hijacking this thread.
Start a new thread and do tell us what you feel is 'incorrect information about ETTR' ... Why else do we have a photographic discussion board?

G
 

bab

Active member
Hmm
Phocus does a few things that can't be done in LR or PS; its curve ability is superior to any PPP, and versions two and three deal with B&W images better than the newer version 5 if you shoot B&W.
Camera Raw comes with PS (similar to LR). PS prints better!
You can send a FFF file from Phocus to PS, and only then will PS recognize the attached data; this doesn't transfer the same way when sending a TIFF file to PS.
PS takes the file to the next level; in fact, adding midtone contrast, removing color casts, and using selective color is worth the trip alone.
 

ErikKaffehr

Well-known member
Exactly what "content aware processing" are you referring to? I've never seen anything that was content aware in LR, and the only supposedly content-aware processing I see in the LR Classic latest rev is the ability to quote-unquote "intelligently uprez" image files (which I use the quote-unquote notation for because I see NO difference between using it and doing the uprez myself manually).

Far as what I experience, LR is a straightforward and simple image processing tool. Every action in it, outside of using scripts and plug-ins, is easily shown to be pretty simple image processing value adjustments.

G
Hi Godfrey,

Once the 'highlights' or 'shadows' sliders are pushed above 50%, Lightroom applies local adaptation methods to maintain local contrast. That effect cannot be achieved with curves. HDR programs used to have 'local adaptation', but it produced grungy results, giving HDR a bad reputation.

Something like ten years ago, Jeff Schewe shared an article describing the algorithms involved. I may be able to dig up that link, but it may take a lot of work.

But you can test it yourself. Start with an image that has nice highlights, correctly exposed to the right, like a cloudscape. Open it in Photoshop and adjust it with tone curves: the clouds will be flat. Open the same image in Lightroom/ACR and darken the sky using the 'highlights' slider: you get dramatic clouds.
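If you want to see the mechanism outside Lightroom, here is a toy sketch of the difference; it is my own construction, not Adobe's algorithm. A global curve compresses the cloud detail along with the highlight level, while a local method compresses only a blurred 'base' layer and restores the fine detail on top.

```python
# Toy comparison of a global tone curve vs. simple local adaptation.
# Assumes numpy and scipy; `img` stands in for linear luminance in 0..1.
import numpy as np
from scipy.ndimage import gaussian_filter

def global_curve(x):
    # A global highlight roll-off: the same curve at every pixel.
    return x / (1.0 + x)

def local_adaptation(x, sigma=25):
    base = gaussian_filter(x, sigma)    # low-frequency luminance
    detail = x - base                   # local (cloud) structure
    return global_curve(base) + detail  # compress the base only

rng = np.random.default_rng(1)
# A bright, textured "cloudscape" patch near the top of the range.
img = np.clip(0.8 + 0.15 * rng.normal(size=(256, 256)), 0.0, 1.0)

print(f"detail contrast, global curve: {global_curve(img).std():.4f}")
print(f"detail contrast, local method: {local_adaptation(img).std():.4f}")
# The global curve flattens the texture by roughly two-thirds; the
# local method darkens the area while keeping the texture amplitude.
```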

The feature was introduced in LR/ACR around 2012. At that time I experimented with HDR tone mapping in Photoshop, where I would have 'local adaptation' on one layer and normal, non-HDR processing on a layer below, mixing the layers to achieve a natural look.

A couple of years later, Adobe introduced the new tone mapping methods in LR/ACR. They called it 'content aware' processing, and it was a huge improvement over the old options.

A few years ago, Lightroom added luminosity masking to the local enhancement tools, giving users much more control.

Just to say, these improvements are real. That said, it is quite possible that they don't match your workflow.

Best regards
Erik
 

Godfrey

Well-known member
Hi Godfrey,

Once the 'highlights' or 'shadows' sliders are pushed above 50%, Lightroom applies local adaptation methods to maintain local contrast. That effect cannot be achieved with curves. HDR programs used to have 'local adaptation', but it produced grungy results, giving HDR a bad reputation.

Something like ten years ago, Jeff Schewe shared an article describing the algorithms involved. I may be able to dig up that link, but it may take a lot of work.

But you can test it yourself. Start with an image that has nice highlights, correctly exposed to the right, like a cloudscape. Open it in Photoshop and adjust it with tone curves: the clouds will be flat. Open the same image in Lightroom/ACR and darken the sky using the 'highlights' slider: you get dramatic clouds.

The feature was introduced in LR/ACR around 2012. At that time I experimented with HDR tone mapping in Photoshop, where I would have 'local adaptation' on one layer and normal, non-HDR processing on a layer below, mixing the layers to achieve a natural look.

A couple of years later, Adobe introduced the new tone mapping methods in LR/ACR. They called it 'content aware' processing, and it was a huge improvement over the old options.

A few years ago, Lightroom added luminosity masking to the local enhancement tools, giving users much more control.

Just to say, these improvements are real. That said, it is quite possible that they don't match your workflow.

Best regards
Erik
Thanks for your explanation.

Hmm. Perhaps the issue is in terminology ... I don't consider "local adaptation" to be "content awareness". Content awareness means to me that the image processing app recognizes the content of the image area and acts upon it accordingly; local adaptation means that it sees the local area you are applying an edit to and acts in keeping with a reasonable guideline for blending the areas, according to the edit being applied. The first implies a level of content recognition, where the second just looks at the interaction of local values. I implemented algorithms of the local adaptation type myself, in the mid-1980s, when I was doing image processing for NASA.

Whatever marketing gobbledegook naming Adobe wants to apply to it, that's how I perceive these two different editing automation principles.

My goals in evaluating and setting exposure at capture time are to get within a range where most of my edits are only a few percentage points off the baseline norm ... sliders whacked above the 50% point are a true rarity for my photographs. For me, that kind of extreme adjustment indicates either that I was sloppy in my exposure technique or that I was trying to capture a scene technically out of range for the recording medium ... I'd choose not to shoot such scenes, normally. :D

What is "luminosity masking"? I've not heard that term before.

Regardless, it is interesting to hear how Adobe has developed the editing tools in LR, even if this particular set of enhancements is of little consequence to my work. I have been experimenting with a few completely different image processing software packages, and one of my criteria for comparing them is that I can bring in my captures and achieve the same results as I have with LR (LR Classic now). So far, I have been able to replicate my LR results to very high fidelity with all the different tools I've tried. That simplifies the evaluation of the apps to just what tools and operations I need to use, and how to get to them efficiently, rather than having to consider each app's fundamental range of operation and editing capabilities.

G
 

SrMphoto

Well-known member
<snip>
What is "luminosity masking"? I've not heard that term before.
<snip>
Luminosity masking is a technique for creating a mask based on scene luminosity. Traditionally done in PS, it is now possible in LrC as well.
In Lightroom Classic -- when using a Graduated Filter, Radial Filter, or Adjustment Brush -- there is now a Range Mask option at the bottom of the panel to refine the generated overlay based on luminance or color.
Here is a short introduction:
https://www.capturelandscapes.com/luminosity-masks-lightroom/
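Mechanically, a luminosity mask is easy to express. Here is a minimal sketch, using a Rec. 709 luma as a stand-in for the 'L' channel (my simplification, not how LrC computes its Range Mask):

```python
# Sketch: darken highlights through a luminosity mask.
# Assumes numpy and an RGB image as floats in 0..1.
import numpy as np

def luminosity_mask(rgb):
    # Rec. 709 luma as a stand-in for the 'L' channel.
    return 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

def darken_highlights(rgb, amount=0.3):
    mask = luminosity_mask(rgb)[..., None]  # ~0 in shadows, ~1 in highlights
    darkened = rgb * (1.0 - amount)         # the global "darken" edit
    # Blend through the mask: the edit lands on bright areas,
    # leaving shadows essentially untouched.
    return rgb * (1.0 - mask) + darkened * mask

rng = np.random.default_rng(2)
img = rng.random((128, 128, 3))
out = darken_highlights(img)
print(f"mean before: {img.mean():.3f}, after: {out.mean():.3f}")
```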
 

ErikKaffehr

Well-known member
Thanks for your explanation.

Hmm. Perhaps the issue is in terminology ... I don't consider "local adaptation" to be "content awareness". Content awareness means to me that the image processing app recognizes the content of the image area and acts upon it accordingly; local adaptation means that it sees the local area you are applying an edit to and acts in keeping with a reasonable guideline for blending the areas, according to the edit being applied. The first implies a level of content recognition, where the second just looks at the interaction of local values. I implemented algorithms of the local adaptation type myself, in the mid-1980s, when I was doing image processing for NASA.
At the time Adobe introduced those features, Jeff Schewe shared a link to the article describing the algorithm used. I am pretty sure it was this one:

Whatever marketing gobbledegook naming Adobe wants to apply to it, that's how I perceive these two different editing automation principles.

My goals in evaluating and setting exposure at capture time are to get within a range where most of my edits are only a few percentage points off the baseline norm ... sliders whacked above the 50% point are a true rarity for my photographs. For me, that kind of extreme adjustment indicates either that I was sloppy in my exposure technique or that I was trying to capture a scene technically out of range for the recording medium ... I'd choose not to shoot such scenes, normally. :D

What is "luminosity masking"? I've not heard that term before.
A mask that is based on luminosity. The transparency varies with the 'L' channel.
Regardless, it is interesting to hear how Adobe has developed the editing tools in LR, even if this particular set of enhancements is of little consequence to my work. I have been experimenting with a few completely different image processing software packages, and one of my criteria for comparing them is that I can bring in my captures and achieve the same results as I have with LR (LR Classic now). So far, I have been able to replicate my LR results to very high fidelity with all the different tools I've tried. That simplifies the evaluation of the apps to just what tools and operations I need to use, and how to get to them efficiently, rather than having to consider each app's fundamental range of operation and editing capabilities.

G
 