The GetDPI Photography Forum


Why did you go back to full frame DSLR?

To Doug's point, I think it would be helpful if manufacturers and testers were clearer about their dynamic range methodology. It doesn't have to be too complex; we just need a standard that makes sense.

DR is always measured with some notion of acceptable noise. You can always add more gain to the shadows—the question is, at what point does the noise become objectionable?

People call DXOmark inaccurate. They aren't; they've just chosen an S/N standard that is much lower than what most photographers would find acceptable. As a result, they tell me my camera captures 14 stops of DR. In my experience, it captures 10 stops. Maybe 12 optimistically.

This is just a disagreement over standards. If you were using my camera for surveillance, you could indeed dig useable details from shadows in a 14-stop DR image. You could not get shadows that look nice in any conventional sense.
 

dougpeterson

Workshop Member
To Doug's point, I think it would be helpful if manufacturers and testers were clearer about their dynamic range methodology. It doesn't have to be too complex; we just need a standard that makes sense.

DR is always measured with some notion of acceptable noise. You can always add more gain to the shadows—the question is, at what point does the noise become objectionable?

People call DXOmark inaccurate. They aren't; they've just chosen an S/N standard that is much lower than what most photographers would find acceptable. As a result, they tell me my camera captures 14 stops of DR. In my experience, it captures 10 stops. Maybe 12 optimistically.

This is just a disagreement over standards. If you were using my camera for surveillance, you could indeed dig useable details from shadows in a 14-stop DR image. You could not get shadows that look nice in any conventional sense.
While I agree with your overall point, I do think it still overlooks the quality-of-noise and quality-of-signal issue that I tried to lay out in detail.

Let me just ask directly: do you believe that, at a given signal-to-noise ratio, all signal and all noise are equal?

As counter examples to that theory:
- More accurate (narrow-band) color discrimination does not show in an s/n analysis, but aids greatly in highlight recovery.
- Color drift in shadows has minimal effect on s/n but makes shadow content (like the shadow side of dark green foliage) far less useable.
- An image with clumpy shadow noise of a given s/n ratio will often require a larger insertion of post-processing grain to be pleasing than an image with Gaussian noise of the same s/n ratio (sketched below).
- Stuck pixels have a large impact on s/n (depending on the outlier-exclusion methods used) but have minimal impact on the final image if they are mapped out properly in post.
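
To make that clumpy-vs-Gaussian point concrete, here is a quick sketch (Python with NumPy/SciPy, purely illustrative numbers, not from any real sensor) of two noise fields that are identical by the s/n arithmetic but very different to the eye:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)

# Fine-grained, pixel-level Gaussian noise
fine = rng.normal(0.0, 1.0, (512, 512))

# "Clumpy" noise: the same Gaussian noise low-pass filtered so it forms blotches
# several pixels across, then rescaled back to the identical standard deviation
clumpy = gaussian_filter(rng.normal(0.0, 1.0, (512, 512)), sigma=3)
clumpy *= fine.std() / clumpy.std()

# By the numbers the two are the same "noise", so any s/n figure is identical...
print(f"sigma fine = {fine.std():.3f}, sigma clumpy = {clumpy.std():.3f}")

# ...yet added to a smooth shadow area, `fine` reads as film-like grain while
# `clumpy` reads as mottling that usually needs extra grain added to hide it.
```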
 
Let me just ask directly: do you believe that, at a given signal-to-noise ratio, all signal and all noise are equal?
Not at all. I agree that a standard would have to include more than a simplified s/n number.

And this kind of standard is never perfect. Two sensors that measure similarly might end up looking different in real life. But a good metric can get you close.
 

dougpeterson

Workshop Member
Not at all. I agree that a standard would have to include more than a simplified s/n number.

And this kind of standard is never perfect. Two sensors that measure similarly might end up looking different in real life. But a good metric can get you close.
Agreed. A better/broader/more-photo-use-oriented standard could get us closer than the current engineering-oriented definition of dynamic range.

Though as you say, even with a better standard, you couldn't fully capture the look/feel of an image with a number.
 

Ken_R

New member
Also, it is not only the number of stops of dynamic range that matters (given an equal standard of measurement) but also the exposure latitude, which is what I think matters to photographers: the ability to make corrections or exposure changes to an entire image or to parts of an image, whether pushing up shadows or pulling down highlights. As was mentioned, it is a very hard thing to standardize.

Here is a table I found which shows a very important aspect: where in the exposure range that latitude is located. As you can see, film (color neg) still reigns supreme in highlight recovery potential, even though, measured in stops, some digital cameras come very close to matching the total exposure latitude of film. One could say that digital rules the shadows and film the highlights. (* I don't know the source of the table or the methodology used to get the data.)
 

fotografz

Well-known member
Cell phones are the numero-uno cameras of choice for most users right now. The hurdle as I see it is two-fold: 1) Cell phones have gotten as good as P&S cams of 2 generations ago and include video; 2) Most users display their images digitally now, and if they print, they're building the little books online or at the one-stop shops like Kinko's or similar -- and here 4mp is more than adequate.

Then it gets worse: factor in the current state of image-processing apps, and the cell phone is a very attractive option. My wife and daughter have gone to classes on cell phone processing -- forget Hipsta and Instagram, you can manipulate any photo and upload it wherever via PhotoWizard.

As a family, we put out a calendar every year and give them to friends. Mostly it's places we've traveled to and each of us (5 total) contribute at least one of the images. Not surprisingly, I usually supply the extra image or two -- and this year one of those was taken with my iPhone. I now use my iPhone instead of a P&S, and have got to say that they print up GREAT at 8x10 calendar size...

I am now printing less and less, but am supplying digital images and output services more and more. I see a future with the need for big cameras and more pixels diminishing even further. Sorry...

Story #2. I had lunch with a fashion photographer several months back whose images are generally printed large. He always shot MF. He'd cull his images and present a set to his client. He said invariably a client will choose one of his less than technically perfect captures because of something else they like in it -- and they never seem to notice or simply cannot see the nits. So he decided to shoot with a DSLR on a shoot. His client loved the finished product and of course it was a ton easier on the photographer. His conclusion is his clients couldn't see the differences, or if they could see them, they didn't say anything or didn't care --- and thus he now shoots with a 36MP DSLR because it's so much more convenient for him.

Story #3. Marc and I met on one of the original Leica internet forums about 15 years ago (seriously!). At the time, digital was in its infancy -- and we all knew film was far superior. (And we both still love it I am sure -- I don't shoot it anymore, don't know about Marc.) A short couple of years later, Marc was selling me his 4MP Canon 1D as he upgraded to a higher-MP Kodak, I believe. And it wasn't long before he was shooting a 16MP DB on MF. Anyway, the point is that at about this time we both started predicting the end of film. Back then we both figured most films would be gone by 2020, except maybe Tri-X (seriously). We were probably a little over-optimistic about film's lifespan. And now we may be similarly overly optimistic about HR digital's future...
Nope, haven't shot film in years now Jack.

Yep, cell phones are murdering the P&S. DSLRs seem to be holding on a bit better because some people are used to a bit of creative effects from the various lenses ... but I'd guess that'll be short lived. Unfortunately, as of now, mirrorless isn't taking up the slack:

Mirrorless Cameras Lose Their Shine? NYC and USA Today report with Panasonic-Olympus analysis. | 43 Rumors

CIPA report on 2012 sales in Japan: Fewer Mirrorless cameras produced. | 43 Rumors

Unless some revenue stream replaces the one from P&S and diminished DSLR sales (and, as of now, lackluster mirrorless), it'll be interesting to see how that affects prices of what is left.

The "tired" factor mentioned is a real possibility ... a notion also touched upon in this blog post, which is a bit over the top IMO, but an interesting take on current events:

The Visual Science Lab / Kirk Tuck: Has the bubble burst? Is that why camera sales in N. America are down by 43%?

The X factor here is some as-yet-unknown technology totally changing the game ... and like the death of the P&S, it may come out of left field, from somewhere other than a camera company.

As of now, I'm perfectly fine where I'm at, and haven't bought a new 35mm DSLR lens for almost four years, and three years for the S2 (except the upgrade of the S to CS). I cancelled the M240 because I think the images are homogenized white bread (IMO)! I will NOT return to a manual lens on an AF DSLR to squeeze a bit more IQ out of a sensor with more resolution than I need 95% of the time :banghead:

Commercial studio work is all but history, I cut weddings to a handful a year, and only if they are high ticket. I'm working more with lighting, so the S2 with CS lenses is an important difference from other choices.

Content isn't a new focus for me, it has always been the focus. Getting off the gear train, the fussing with every little aspect to get the most from it, just makes keeping that focus easier to accomplish.

Now lighting gizmos ... that's a whole other subject ... :ROTFL:

- Marc
 

Jack

Sr. Administrator
Staff member
I cancelled the M240 because I think the images are homogenized white bread (IMO)!
- Marc
OMG! PLEASE do NOT post that in the L forum as some form of nuclear fission will result!


:ROTFL::ROTFL::ROTFL:
 

docmoore

Subscriber and Workshop Member
And I kept waiting for Marc to post some examples that would make me desire one... I have turned down four offers over the past few months for an immediate body... in stock, ready to ship.

Now if they will just put the shutter and improved RF in a M9P for me.....

I actually picked up a refurb Silver M8.2 with 2k shutter exposures to lessen my impulse to buy one......

Bob
 
Also, it is not only the number of stops of dynamic range that matters (given an equal standard of measurement) but also the exposure latitude, which is what I think matters to photographers: the ability to make corrections or exposure changes to an entire image or to parts of an image, whether pushing up shadows or pulling down highlights. As was mentioned, it is a very hard thing to standardize.
This is just the practical result of what we're talking about. It's all a result of dynamic range.

It would be best to avoid the term "highlight recovery." In digital there's no such thing. A digital sensor captures fully separated detail up to the clipping point, and then past that ... nothing.

Recovery is an illusion presented by the raw converter. The default settings cut off some of the highlights and shadows. But those cutoff points are really just arbitrary, based on what the creator of the setting thinks will make a typical image look good.
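
A toy way to see this (a minimal Python sketch with made-up numbers; the linear raw values are scaled so 1.0 is the true sensor clip point):

```python
import numpy as np

raw = np.array([0.25, 0.60, 0.80, 0.95, 1.00])   # hypothetical linear raw values

def render(raw, white_cutoff, gamma=2.2):
    """Map linear raw data to display values using an arbitrary white cutoff."""
    return np.round(np.clip(raw / white_cutoff, 0, 1) ** (1 / gamma), 2)

print(render(raw, white_cutoff=0.70))  # default rendering: 0.80, 0.95 and 1.00 all show as pure white
print(render(raw, white_cutoff=1.00))  # "recovered" rendering: 0.80 and 0.95 come back as distinct tones;
                                       # only the truly clipped 1.00 stays white, because past the clip
                                       # point there is nothing to bring back
```

Nothing new was created in the second rendering; the converter just stopped throwing away data above its default cutoff.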
 

torger

Active member
Yes, I've coded some highlight "recovery" algorithms, and what you do is not recover; you reconstruct, i.e. guess values, and make something that (hopefully) looks good. "Highlight recovery" is a misleading term, but it sounds better than "highlight guessing".

Standard robust raw conversion, as found inside cameras, typically sets the clip point to where the first channel clips, which is typically green. So the in-camera JPEG will show white as soon as the first channel clips; this means there's a lot of highlight information not shown, but all the highlights you do see are made from complete RGB information, with no risk of funky highlights. What Lightroom, Capture One, etc. can do is show something as long as there is at least one channel that is not clipped, but this means you don't have full color information, so you make some guesses or simplifications. The simplest way is to desaturate highlights towards neutral gray, so you have structure in the highlight but no color. Since the highlight is generally surrounded by color, you often get the illusion that the highlight has color anyway.
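
The crudest version of that desaturation is only a few lines. A minimal sketch (my own illustration, assuming white-balanced linear RGB already scaled so 1.0 is the clip point; real converters are smarter about propagating color from neighbouring pixels):

```python
import numpy as np

def blend_clipped_highlights(rgb, clip=1.0):
    """rgb: float array of shape (..., 3), white-balanced linear values.
    Pixels with all three channels clipped become plain white; pixels with one
    or two clipped channels keep structure from the surviving channel(s) but
    are pulled towards neutral gray, since their color can't be trusted."""
    clipped = rgb >= clip
    n_clipped = clipped.sum(axis=-1, keepdims=True)           # 0..3 per pixel

    # Structure estimate from the channels that did NOT clip
    survivors = np.maximum(3 - n_clipped, 1)
    structure = np.where(clipped, 0.0, rgb).sum(axis=-1, keepdims=True) / survivors
    gray = np.repeat(structure, 3, axis=-1)

    partial = 0.5 * np.minimum(rgb, clip) + 0.5 * gray        # fade halfway to gray
    out = np.where(n_clipped == 0, rgb,                        # full RGB: keep as is
          np.where(n_clipped == 3, clip, partial))             # all channels gone: plain white
    return np.clip(out, 0.0, clip)
```

That is roughly the "structure but no color" behaviour; the guessier variants instead try to estimate what the clipped channel would have been.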

Most raw converters hide what the underlying raw actually contains and do some preprocessing before you have even touched the first slider. This makes the software more user-friendly and "film-like", but it hides from the user how digital photography actually works. Many do not know about this and think they are seeing the "raw truth", and from this come a lot of myths concerning highlight recovery, non-linear response and other things that simply don't exist in digital.

(Oh well, non-linear response is coming: for cell phone cameras there has recently been a release of a sensor that compresses highlights at capture and thus gains dynamic range.)

In-camera histograms differ by manufacturer and model in how they choose to show clipping. Some leave space above it, that is, they don't show the real clipping point. The histogram may be luminance-only and thus show an RGB product rather than individual channel clipping. Learning how the histogram works on your own camera is worthwhile if you often face DR challenges and need to expose optimally. So far I have not come across a camera whose histogram gives you the full information (probably because such histograms would be less "user-friendly"), but even so you can come pretty close to optimal exposure if you just know how it works.
 

jagsiva

Active member
Yes, I've coded some highlight "recovery" algorithms, and what you do is not recover; you reconstruct, i.e. guess values, and make something that (hopefully) looks good. "Highlight recovery" is a misleading term, but it sounds better than "highlight guessing".
I am no imaging scientist, but from what I understand, it is not totally guessing.

When companies like Phase claim a 13-stop DR, 2-3 stops are bunched near the top and 2-3 near the bottom, i.e., in the dark, dark shadows and the bright/specular highlights.

Monitors and printers cannot distinguish between, say, 251 and 255, or between 0 and 5. When you move the sliders, these ranges get separated out. So the data is still there and can be pulled into a visible range where we can see it. The RAW data is still available in the file to do this, i.e., the RAW converter is not "making up" new data.
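
A toy version of what I mean, with made-up numbers (and ignoring the gamma/tone curve a real converter would also apply): four distinct 14-bit raw shadow values that land on nearly the same 8-bit display level until a shadow push spreads them out.

```python
import numpy as np

raw = np.array([40, 80, 160, 320])   # hypothetical deep-shadow raw values, 14-bit scale
full_scale = 2**14 - 1

def to_8bit(linear, push_stops=0):
    """Quantize linear raw values to 8-bit display levels after pushing by N stops."""
    return np.round(255 * np.clip(linear * 2**push_stops / full_scale, 0, 1)).astype(int)

print(to_8bit(raw))                  # -> [1 1 2 5]   all crushed into near-black
print(to_8bit(raw, push_stops=3))    # -> [ 5 10 20 40]   same data, now clearly separated
```

The push doesn't invent anything; it just moves values that were already recorded into a range the display can separate. The noise recorded down there comes up with them, which is where the comparison with underexposing starts to break down after a stop or two.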

I'm sure those far more educated on the topic can comment on this. From a user perspective, I have tried underexposing/overexposing and compared that to highlight/shadow recovery, and it does work, in that the results are comparable within a reasonable range of 1-2 stops. Of course, noise and the related loss of detail from the noise become an issue quickly.

Again, just my experience, I could be wrong.
 

ondebanks

Member
While I agree with your overall point, I do think it still overlooks the quality-of-noise and quality-of-signal issue that I tried to lay out in detail.

Let me just ask directly: do you believe that, at a given signal-to-noise ratio, all signal and all noise are equal?
This is what I was saying above. Doug and I are on the same page here.

Where we disagree somewhat is that I believe that RAW converters should be kept out of the assessment of camera sensors (or at least only introduced in a second, parallel stream of assessment which does not replace the primary one). RAW conversion software like C1 introduces a large degree of optimization-biasing towards particular designs, not to mention an impossibly huge parameter space. Not a level playing field.

Doug would say that it's the final photograph that matters, so who cares what the RAW converter did to achieve it? As he said above: "I don't care (other than abstractly) what the 1s and 0s of the raw file are; I care what can be extracted and used in a pleasing way in the raw processing software."

While that "the end justifies the means" type of argument is fine for comparing the aesthetics of photographs, it is invalid for comparing the engineering of camera sensors. We have to keep the two discussions separate.

But we also need more detailed and informative metrics which dig deeper into the available data, like photon transfer curve analysis, to be the norm when assessing the engineering.
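
For anyone who hasn't met it, photon transfer analysis isn't exotic: shoot pairs of flat fields at increasing exposure, plot the temporal noise variance against the mean signal, and a straight-line fit hands you the gain and the read noise. A minimal sketch of the core of it (Python/NumPy, with synthetic frames standing in for real flats and made-up sensor numbers):

```python
import numpy as np

def photon_transfer(flat_pairs):
    """Estimate gain (e-/DN) and read noise (DN) from pairs of bias-subtracted
    flat-field frames shot back to back at several exposure levels."""
    means, variances = [], []
    for a, b in flat_pairs:
        means.append((a.mean() + b.mean()) / 2.0)
        # Differencing the pair cancels fixed-pattern noise; the variance of the
        # difference is twice the temporal (shot + read) variance.
        variances.append(np.var(a - b) / 2.0)
    # In the shot-noise regime: variance = read_noise**2 + mean / gain
    slope, read_var = np.polyfit(means, variances, 1)
    return 1.0 / slope, np.sqrt(max(read_var, 0.0))

# Synthetic flats: true gain 2 e-/DN, read noise 3 DN, 14-bit ADC
rng = np.random.default_rng(0)
pairs = []
for level_dn in [50, 200, 800, 3200, 12000]:
    a = rng.poisson(level_dn * 2.0, (200, 200)) / 2.0 + rng.normal(0, 3.0, (200, 200))
    b = rng.poisson(level_dn * 2.0, (200, 200)) / 2.0 + rng.normal(0, 3.0, (200, 200))
    pairs.append((a, b))

gain, read_noise = photon_transfer(pairs)
full_scale = 2**14 - 1
print(f"gain ~ {gain:.2f} e-/DN, read noise ~ {read_noise:.2f} DN, "
      f"engineering DR ~ {np.log2(full_scale / read_noise):.1f} stops")
```

That final engineering-DR figure is the kind of number DxO quotes; the transfer curve itself tells you more, e.g. where fixed-pattern or read noise takes over from shot noise.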

When most people talk about camera sensor evaluations, they refer to DxOmark. Now I've long been astonished that DxO don't even touch on long exposure (dark noise) measurements. That too needs to be added to the engineering assessment mix.

Ray
 

Guy Mancuso

Administrator, Instructor
Ray, I'm a total "who cares how it is done in the raw, it's what is delivered that counts" guy. Now, I say that from a shooter's seat, not a scientist's. All I really care about is what I can draw from the files in post. This is why I'm not the biggest fan of DXO ratings, as they really don't tell us what happens in post. Now, it's a nice starting point and a decent way to compare things, but people need to realize it is NOT the ultimate truth. It may say the DR on a certain back is 10 stops, but in a raw converter, even at default levels, it can be 12, let's say. As I usually say, it's part of the puzzle in evaluating a sensor, like anything else, but it's not the final word, and given all the variables we may never have a final word. It's the age-old saying: I don't care how you got the shot as long as you got it. To me it's all about what I can draw out of the file to a final state for printing or delivering to a client.

At least this is the way I look at digital. There really is no final except what you consider your best effort at the end of the chain. DXO is in the middle of that chain, not the end.

Now, if you take the engineering seat, then what I just said may not fit that criterion of evaluation. So from my seat as a shooter, the end justifies the means. I do understand what you are saying, but how much weight we put on it is my question.
 

Guy Mancuso

Administrator, Instructor
Maybe a good analogy is: does the end file look like what was on the LCD when you shot it? Interesting thought.
 

torger

Active member
Okay "guessing" was a bit rough to say, but more to the truth than "recovery", because the data is clipped and you cannot really know what the value was. However, you can thanks to knowing things about how highlights usually behave make an educated guess, at least if you have one or two of the three channels left unclipped. Most raw converters don't take the guessing too far though, but instead only recover/reconstruct/guess as much you can do with high chance to make something look okay, and blend the rest towards the whitepoint. It would be pretty safe to guess that the center of the sun should be yellow, but few raw converters "recovers" that, but instead let it be white in the center.

It's quite hard to make 100% effective use of the top stop, as one channel typically clips one stop ahead of the others, depending on the color of the highlight and RGB sensitivities (G is often considerably more sensitive, but on my Aptus 75 the channels are actually quite well-balanced for daylight which make it a bit more efficient of utilizing it's dynamic range).

On the bottom stops there's no issue of clipping channels, there it's only about noise. The channel clipping issue really complicates highlight handling.

And of course, the raw converter always needs to blend towards something, typically the whitepoint. Raw conversion that doesn't guess any values of clipped channels will still modify the highlight information so you get a nice blend towards the whitepoint, typically by some sort of gradual desaturation.
 

Shashin

Well-known member
The only evaluation that makes sense for a photographer is one they do themselves. There is no objective test for that. Buy the camera, test it, and see if you like it.

Objective testing is very useful as it points to problems. It can never point to how good something is, at least at an aesthetic level. DxO Mark and DPReview give a comparison baseline. The problem is people don't know how to evaluate the information given.

But one thing I do not trust, and I do not mean any offense, is when a photographer takes a picture and then states a camera has x stops of DR. That is guessing, as they never do any measurements to even know what the DR of the scene is. And when you consider that the luminance range of an average daylight scene is 160:1 (about 7.3 stops), most scenes don't exceed the DR of many cameras. What I get from that statement is that the photographer is happy with the response and that it has proven equal to or better than, perhaps, his current gear. What I don't know is how that translates to the conditions I work under. I also have to factor in the photographer's taste, as that will bias the test. So while I think personal reviews are good, they present more questions than they answer.

If DxO states a camera has a DR of 13.5 EV and my camera has 12.8 EV, I can probably assume I am going to find the DR OK for me. That does not mean it will work like my camera--I don't know if the DR was gained in the shadows or the highlights. I don't know systemically what happens to the file. I am going to have to get the camera and learn how it sees--I never expect one camera to work like another. Cameras are like a box of chocolate...
 

torger

Active member
Relating the single-metric DxOmark "DR" to real usage: if the noise is nice and random, as on all(?) MFDBs and most non-Canon DSLRs, you have something useful starting about 3 stops above the engineering floor, i.e. a 13-stop camera provides about 10 stops of useful information. It's a bit a matter of taste, though, but the engineering DR says that at the bottom stop the signal is equal to the noise, and that's, well, unusable. Going down from saturation, any decent camera, even a Canon, has fairly noise-free colors 7 stops down, i.e. if you push something in post within that range you should have no real problem regardless of what camera you use. Below 7 stops from saturation, real differences between cameras start to show, both in terms of noise and loss of color accuracy.

However, a discussion about absolute values in stops is hard, as there are no good, commonly available tools for showing how far from saturation a particular area in a raw file is. Lightroom, Capture One, etc. do so much pre-processing before the first slider is even touched that you cannot use those tools for correct file performance comparison. When I've done comparisons I've used the command-line tool dcraw and my own custom software to really look at what is happening in the actual raw data before any conversions are made. When comparing two systems, one also needs to know how to make a correct ETTR exposure; histograms can behave drastically differently between cameras, so you may heavily underexpose one file if you assume its histogram works the same as the other camera's.
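
(For what it's worth, a small script can dig out the same numbers today. A rough sketch using the rawpy/LibRaw bindings instead of dcraw, with a made-up file name and region of interest:)

```python
import numpy as np
import rawpy

raw = rawpy.imread("IMG_0001.CR2")                   # hypothetical raw file
data = raw.raw_image_visible.astype(np.float64)
black = float(np.mean(raw.black_level_per_channel))  # simplification: averaged per-channel black
sat = float(raw.white_level)                         # clip point as reported by LibRaw

# How far below raw saturation does a chosen patch sit, before any conversion?
patch = data[1000:1100, 1500:1600] - black           # hypothetical area of interest
stops_below_sat = np.log2((sat - black) / np.median(patch))
print(f"patch sits {stops_below_sat:.2f} stops below raw saturation")

# Fraction of photosites at or near the clip point (any CFA channel)
print("clipped fraction:", float(np.mean(data >= sat - 1)))
```

It works on the mosaiced data, so a "patch" here mixes the CFA channels, but for judging headroom to saturation that is usually good enough.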

If we look at the best performers, i.e. the D800 and IQ180, the difference between those cameras in terms of dynamic range is so small it should be irrelevant for any real usage. My digital back, which uses an older-generation sensor, has noticeably less DR, but it still makes little difference to image making. I use grads in backlit scenes with my digital back, and I'd still use grads with a D800. To make a real practical difference I think we need to wait until we get those non-linear-response sensors which compress highlights on sensor, meaning we can use longer exposure times and actually capture more photons in the shadows (which is what I do now when using grads); when that is available I'll probably ditch the grads.

How much DR performance affects you also depends on your post-processing style. I like a contrasty "slide film" look for my landscape photography, and then you don't push the darks too much. If one instead likes the painterly, tonemapped look, one will push shadows more and notice differences between cameras more.
 

Shashin

Well-known member
If we look at the best performers, i.e. the D800 and IQ180, the difference between those cameras in terms of dynamic range is so small it should be irrelevant for any real usage.
But not for topics of conversation on photography forums... :lecture::talk028::argue::cussing::banghead:

:D
 