As anyone who has ever tried to do a direct head-to-head DR comparison knows, it can be much more difficult than setting up a shot, shooting the first back, swapping backs, and shooting again. Variation in color spaces, processing choices, and different true native ISOs can all skew the head-to-head result. When you're looking for what you already know will be relatively minor differences (half a stop of DR at most in either direction), these otherwise trivial differences can really obscure your results.
Jack and Guy did some tests in Moab with the P65+ I brought and Guy's P25+, and Dave, our owner, did some tests in ATL. As best I can tell, the DR is a bit higher than that of either the P25+ or the P45+. Given that the pixel size has decreased, I think this is an achievement in and of itself.
There are actually a lot of factors in the architecture of a digital back (fun to learn about, if you're a big nerd) that determine the system's DR. The pixel size quoted on tech sites is almost always the most generous possible way of measuring each pixel. The part of each pixel that is actually light sensitive is much smaller, especially in the 9-micron sensors. So when Kodak went from the P25+ to the P45+, the light-sensitive area of each pixel barely changed, while the wasted (non-sensitive) area within each pixel (the electronics package and the border of each pixel) was drastically reduced. Likewise, with the new Dalsa 6-micron design the wasted space has been effectively eliminated, so the loss of light-sensitive area is a little less than the 6.8 --> 6.0 micron change would indicate.

Then, after the sensor has pulled in all the photons it can, the resulting electrical charge has to be read off each pixel (generally down a row, like "pass the bucket") and sent to an A/D converter, where the charge is translated into a numerical value. The accuracy of this conversion is governed not only by the bit depth (e.g. 12, 14, 16) but also by the quality of the A/D converter (i.e. not all 16-bit converters perform equally well). Depending on the architecture, there is also some (usually patented) way of reading a black calibration off the unexposed part of the chip and negating much of the chip's noise.

Once the signal is converted to digital you are left with a very unusable array of Bayer-patterned pixels. It's then up to the math gods of the raw conversion software to make a pretty picture from that otherwise random-looking group of pixels. The math here is so complicated that there are literally only a handful of individuals in the entire world who could truly be called experts on it. Finally you have a picture you can print or compare (and I've surely omitted several steps).
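To put rough numbers on that fill-factor point, here's a quick back-of-envelope sketch in Python. The pixel pitches are the published values, but the fill factors are invented purely for illustration, not manufacturer specs, so treat this as the shape of the argument only:

```python
# Back-of-envelope: how much light-gathering area does each pixel really have?
# Pitches are the published values; the fill factors are ASSUMED, illustrative
# numbers only, not Kodak or Dalsa specs.
pixels = [
    ("P25+ (9.0 um, older design)",   9.0, 0.47),
    ("P45+ (6.8 um, tighter layout)", 6.8, 0.80),
    ("P65+ (6.0 um, minimal waste)",  6.0, 0.90),
]

for name, pitch, fill in pixels:
    footprint = pitch ** 2          # total pixel area, um^2
    sensitive = footprint * fill    # the part that actually collects photons
    print(f"{name}: {footprint:5.1f} um^2 footprint, "
          f"{sensitive:4.1f} um^2 light-sensitive")
```

With these made-up fill factors, the 9.0 --> 6.8 micron jump barely touches the sensitive area (about 38 vs 37 um^2), and the 6.8 --> 6.0 change costs roughly 12% of it rather than the 22% the footprint alone would suggest.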
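And here's a toy sketch of that readout chain: charge comes off the pixel, a black level measured from masked (unexposed) pixels gets subtracted, and an A/D converter quantizes what's left. Every constant here (full well, read noise, offset) is an assumed, illustrative value:

```python
import numpy as np

rng = np.random.default_rng(42)

FULL_WELL = 60_000      # electrons a pixel can hold (assumed value)
READ_NOISE = 15         # electrons RMS added during readout (assumed)
BLACK_OFFSET = 800      # fixed electronic offset in electrons (assumed)
BITS = 16               # A/D converter bit depth

def read_out(electrons):
    """Read charge off the sensor: add offset + read noise, then digitize."""
    signal = electrons + BLACK_OFFSET + rng.normal(0, READ_NOISE, electrons.shape)
    # A/D conversion: map 0..(FULL_WELL + offset) onto 0..2**BITS - 1
    gain = (2**BITS - 1) / (FULL_WELL + BLACK_OFFSET)
    return np.clip(np.round(signal * gain), 0, 2**BITS - 1)

# "Black calibration": read masked pixels that saw no light, take their mean...
masked = read_out(np.zeros(10_000))
black_level = masked.mean()

# ...and subtract it from the real exposure to negate the offset (and some noise).
exposure = read_out(rng.poisson(1_000, size=10_000).astype(float))  # shot noise
calibrated = exposure - black_level
print(f"black level: {black_level:.1f} DN, calibrated mean: {calibrated.mean():.1f} DN")
```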
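As for the Bayer step, here is about the dumbest demosaic that can work, a bilinear interpolation that fills in each missing color from its neighbors. Real raw converters do something vastly more sophisticated; this is only to show mechanically what "make a pretty picture from a Bayer array" means:

```python
import numpy as np

def bilinear_demosaic(raw):
    """Toy demosaic of an RGGB Bayer mosaic (H x W) into RGB (H x W x 3)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # red sites
    masks[0::2, 1::2, 1] = True   # green sites (red rows)
    masks[1::2, 0::2, 1] = True   # green sites (blue rows)
    masks[1::2, 1::2, 2] = True   # blue sites
    for c in range(3):
        known = np.where(masks[..., c], raw, 0.0)
        count = masks[..., c].astype(float)
        # 3x3 normalized box filter: sum of known neighbors / number of them
        kp = np.pad(known, 1)
        cp = np.pad(count, 1)
        num = sum(kp[i:i + h, j:j + w] for i in range(3) for j in range(3))
        den = sum(cp[i:i + h, j:j + w] for i in range(3) for j in range(3))
        # keep the measured value where we have one, interpolate elsewhere
        rgb[..., c] = np.where(masks[..., c], raw, num / np.maximum(den, 1.0))
    return rgb

bayer = np.random.rand(8, 8)           # stand-in for raw sensor values
print(bilinear_demosaic(bayer).shape)  # -> (8, 8, 3)
```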
Don't know where this ramble came from now that I've arrived here. Other than to say a LOT of work goes into trying to squeeze the last bit of quality out of these sensors. And while the theoretical ceiling of DR may decline as pixels shrink, it is often the case that actual performance stays the same or improves, because the other factors that keep you from reaching that ceiling are reduced.
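To make that last point concrete: the usual engineering shorthand is that DR is roughly log2(full well / read noise) stops. With hypothetical numbers (these are not specs for any real back), a smaller pixel with a lower full well can still come out ahead if the readout gets cleaner:

```python
import math

# DR in stops ~ log2(full_well / read_noise). All values below are
# hypothetical illustrations, not measured specs for any real back.
generations = [
    ("big pixel, noisy readout  ", 90_000, 25),  # (full well e-, read noise e-)
    ("small pixel, noisy readout", 55_000, 25),
    ("small pixel, clean readout", 55_000, 12),
]
for name, full_well, read_noise in generations:
    print(f"{name}: {math.log2(full_well / read_noise):.1f} stops")
```

The middle line is the theoretical-ceiling worry (shrinking the well alone costs about 0.7 stop); the last line shows the mitigating factors winning it back and then some.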
Sometimes I wonder if anybody but Bob makes it to the end of my poorly structured and inadequately explained technical rants. Hi Bob!
Doug Peterson, Head of Technical Services
Capture Integration, Phase One & Canon Dealer | Personal Portfolio