I knew I'd take heat for this one... It's just what I've noticed, regardless of MF or DSLR or generations -- for example, a D800 has better usable DR than a P25+, but is pretty equivalent to a P45+. P/IQ 60's are better, 80's a little better still. And I'm talking usable or effective DR, not some lab's measured DR.
I hear this same nonsense in so many different disciplines.
This statement usually comes from people whose personal methods differ so significantly from standard practice that they're unable to duplicate lab results, so they claim the lab is wrong.
[...]
How, pray tell, can laboratory measurements be somehow inaccurate, misleading, or wrong?
In fairness to Jack, I understand what he's getting at, even though I will always advocate the scientific method and proper testing. And graphs are always welcome :salute:
Dynamic range is a crude estimator of performance. It only tells you something about the sensor at two intensity points, the extremes of its operational range. It says nothing about what goes on in between - how much signal, how much noise, the relative contributions of different types of noise, the wavelength selectivity of the signal. It's one of those "never mind the quality - feel the width!" metrics.
One needs to plot the full noise model of the sensor to get a more complete picture, and even that is not everything: it should be repeated at different exposure times, temperatures, and ISO settings.
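To make that concrete, here's a minimal sketch (in Python, with purely illustrative sensor numbers, not measurements of any real back or camera) of the gap between the two-endpoint DR figure and a fuller noise model:

```python
import numpy as np

# Hypothetical sensor parameters for illustration only
full_well = 60000.0   # saturation capacity, electrons
read_noise = 12.0     # read-noise floor, electrons RMS
prnu = 0.005          # pixel response non-uniformity, fraction of signal

# "Engineering" dynamic range: just the two endpoints
dr_stops = np.log2(full_well / read_noise)
print(f"DR = {dr_stops:.1f} stops")   # ~12.3 stops

# The fuller picture: SNR at every signal level in between
signal = np.logspace(0, np.log10(full_well), 200)        # electrons
shot_noise = np.sqrt(signal)                             # photon shot noise
total_noise = np.sqrt(read_noise**2 + shot_noise**2 + (prnu * signal)**2)
snr = signal / total_noise
```

Two sensors can share identical endpoints, and therefore identical DR figures, while having very different SNR curves through the midtones, which is exactly the "feel the width" problem.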
If I walk into an all-you-can-eat buffet with an empty stomach from fasting, and a determination to fill it to bursting point, those are the two endpoints of my stomach's "dynamic range". How I progress from empty to full can take many paths at the buffet: multiple bowls of porridge would do the trick, as would a fine-dining banquet of Michelin-starred delicacies. I know which path I would pick!
+1. Everyone likes to quote the numbers, but few understand the significance. And like Ray, I also believe in standardized testing. But that only takes you so far, especially with applied photography for creative images.
A lab test is only as useful as the criterion it's measuring. I assume, based on their clean technical writing, that they are correctly implementing a reliable test of dynamic range based on the classic electro-optical definition.
Unfortunately for photographers, this definition only loosely correlates with dynamic range as defined by "how much shadow-to-highlight scene range can I capture in a way that will be pretty/natural/aesthetic".
Two camera systems can have identical dynamic ranges as determined by algorithm but very different dynamic range in terms of how much of the scene's highlights and shadows can be pleasantly reproduced in a final print.
Examples:
- difference in character of noise (Gaussian, uniform, clumpy, color or monochromatic), which makes the noise more or less pleasant for a given person's preference ("film like" vs "digital/artifacty" noise can be two descriptions ascribed to two images which have, technically, the same amount of noise as numerically measured; see the sketch after this list)
- linearity of color (do the shadows bias towards a certain color; do all colors respond similarly as they fall into shadow?)
- tonal smoothness (is there any feeling of posterization or other abrupt transitions, or are the transitions from deep shadow to quarter tone smooth and pleasing?)
- roll-off into highlights (are near-blown tones rendered as a smooth decay into no-information zones, or do they create strange color/tone artifacts?)
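To illustrate the first point about noise character, here's a small sketch (hypothetical noise fields, not data from any camera) of two noise patterns that measure as numerically identical "amounts" of noise yet look completely different:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Fine-grained, "film like" noise: white Gaussian
fine = rng.normal(0.0, 1.0, (512, 512))

# "Clumpy" noise: low-pass filter a second draw, then rescale
clumpy = gaussian_filter(rng.normal(0.0, 1.0, (512, 512)), sigma=3)
clumpy *= fine.std() / clumpy.std()   # force identical RMS

print(fine.std(), clumpy.std())       # same measured noise level
# Viewed as images, the clumpy field reads as blotchy and artifacty,
# even though a simple RMS measurement cannot tell the two apart.
```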
It's not dissimilar to saying a rock concert, a baby screaming, and an engine's roar can all have similar absolute loudness in decibels, but I think we'd all agree they differ in their pleasantness to listen to.
Moreover, DXO and other metrics I've seen published (like the spec sheets from the sensor manufacturers) only tell you the range of raw data during the primary capture, without post-processing. The application of the dark frame (the loose equivalent of the iPhone 5's ability to use a second microphone to listen for ambient noise to increase the clarity of the signal) and the highly catered debayering, detail extraction, and characteristic-noise suppression/shaping of the combination of a Phase One or Leaf raw file and Capture One are not taken into account. Nor is what non-blown-channel reconstruction can do for subject matter which is blown in one channel but not another (see also: many a blue sky), where linearity of color and purity of color response (dependent on, amongst other factors, the spectral transmission characteristics of the Bayer pattern used) helps or hurts various cameras.

I don't care (other than abstractly) what the 1s and 0s of the raw file are; I care what can be extracted and used in a pleasing way in the raw processing software. DXO would claim that an IQ180 has the same dynamic range whether you process it in C1v6 or C1v7, and they wouldn't be wrong in the strict sense (the back did not, in fact, change its response), but their answer would not be relevant to someone taking pictures and processing in both C1v6 and C1v7 (the user of v7 would find they could consistently use parts of the scene further into its highlights and shadows).
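For what it's worth, the dark-frame step mentioned above reduces, loosely, to something like this (a deliberately simplified illustration, not Phase One's actual firmware routine; the function name is my own):

```python
import numpy as np

def apply_dark_frame(light_frame: np.ndarray, dark_frame: np.ndarray) -> np.ndarray:
    """Subtract a dark frame (an exposure taken with no light hitting the
    sensor) so that fixed-pattern noise -- hot pixels, dark-current
    gradients -- cancels out, leaving mostly the random noise behind."""
    corrected = light_frame.astype(np.int32) - dark_frame.astype(np.int32)
    return np.clip(corrected, 0, None).astype(light_frame.dtype)
```

The key design constraint is that both frames must share exposure time, ISO, and sensor temperature, or the fixed-pattern component won't match, which is part of why firmware improvements to these routines can change real-world results without the sensor itself changing.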
Finally, DXO tends to measure backs/cameras when they are first released (though not always; sometimes the test comes years after release). Anyone who has owned a P1 back or Leaf Credo back from the first day of release (my specific area of greatest experience; this may be true of other backs too) knows that the noise/dynamic range has improved as Team Phase One continues to develop and improve the firmware that controls the sensor exposure, readout, and dark-frame routines. This is not a big deal, but it's another example of how the question they are answering is not necessarily the question a photographer is asking.
In a former life I was a programmer on a data analysis suite used in the automotive industry for lab replication and analysis of field vibration measurements correlated with acoustic recordings, for the purpose of improving the experience of a driver/passenger vis-a-vis strange squeaks and rattles experienced on given road surfaces. So lab measurements and the mentality of variable isolation, numeric representation of real-world phenomena, and the scientific method are not foreign to me. But even in that job, all of our effort went toward identifying potentially problematic areas/scenarios/conditions; the final analysis was always to put a person in an actual car, replicate the appropriate conditions, and then ask them "how annoying is that squeak, from 1 to 10?" or "is squeak A or squeak B more annoying?" In any number of fields quantified lab measurements are of immense value, but they are very rarely the entire picture (pun intended).
The story is rarely as simple as a few numbers.
This is one of the primary reasons why we emphasize real-world evaluation (rentals, demos, raw file catalog) so heavily. If someone wants to know how much dynamic range a particular back has, my first instinct is always to put said back in their hand and tell them to go shoot the pictures they normally would and see how the camera/files handle. Scientific? Not really, but in my experience it gives the customer the best understanding of what they should expect from the system once purchased.