The GetDPI Photography Forum


Canon 1DX and implications for MF

Shashin

Well-known member
madmanchan, can you show that all n-bit cameras have the same DR? (I understand there is a limit to information, but you are suggesting DR is limited to bit depth.)
 

Shashin

Well-known member
So why can an Alpha 77 have a 13.2 EV DR and an E-P3 have a 10.1 EV DR? They are both 12-bit cameras.
 

Graham Mitchell

New member
Yes, Graham, but those aren't the actual values from the raw file ... ;)

As I mentioned earlier, a rendered image (like the example you posted) can indeed represent however many stops you want with whatever bit depth you want. However, the original raw data in a digital capture can't hold more stops than the bit depth of the sensor's ADC.
ok, but here is the quote which started this whole discussion: "Bit-depth refers to the number of luminance levels an image is binned into. It has nothing to do with dynamic range which is related to how much signal can a photo site absorb. So a 16-bit image does not automatically give you more dynamic range."

But I see where the confusion came from. Yes, in theory you need a 16-bit raw file to capture a scene with 15 stops of DR, but that point is moot as we don't have sensors with 15 stops of DR, so the 16-bit raw file is a waste, which was Shashin's original point.

p.s. raw files are often not unadulterated ADC output, but that's another matter entirely ;)
 

Lars

Active member
But I see where the confusion came from. Yes, in theory you need a 16-bit raw file to capture a scene with 15 stops of DR, but that point is moot as we don't have sensors with 15 stops of DR, so the 16-bit raw file is a waste, which was Shashin's original point.
Well put (although I read Shashin's original point differently, perhaps mistakenly). And hopefully that's how camera manufacturers reason when they decide on the bit depth of the processing pipeline. Although it baffles me a bit why it makes sense to cap the A/D and calculation pipeline at 14 bits inside a computer; almost all chips seem to work in bytes these days.
 

madmanchan

Guest
Hi Shashin, generally N-bit cameras have at most N stops of DR, but in practice less than N stops because of (read) noise. In other words, the ADC bit depth places an upper limit on the recordable scene dynamic range, but it's not the only limit. The noise floor of the sensor is the main other limit.
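The two limits described above can be sketched numerically. This is a simplified illustration with hypothetical sensor figures, not data for any real camera:

```python
import math

def usable_dr_stops(full_well_e, read_noise_e, adc_bits):
    """Engineering DR = log2(full well / read noise), but the ADC can
    never record more than one stop per bit, so take the minimum."""
    sensor_dr = math.log2(full_well_e / read_noise_e)
    return min(sensor_dr, adc_bits)

# Hypothetical sensor: 60,000 e- full well behind a 14-bit ADC.
print(usable_dr_stops(60000, 3, 14))   # low read noise: ADC-limited at 14
print(usable_dr_stops(60000, 12, 14))  # noisier read: sensor-limited ~12.3
```

With a quiet readout the ADC is the bottleneck; with a noisy one, the noise floor is, which is exactly the "upper limit, but not the only limit" point.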

The recent Sony cameras like the A77 use a 12-bit compressed internal format to store the image values, but the actual linear raw data is 14 bits.
 

craigrudlin

New member
I fear I am getting a little lost in the bit-depth discussion. Is the gist of the answer that current sensors can NOT generate sufficient tonal range to need 16 bits? Is it the consensus, then, that 16-bit A/D is NOT really needed (because it can't really be used) and hence a 14-bit path is just as good (for now) as a 16-bit one?

If so, then why (with the possible exception of the Pentax 645D) are the medium format cameras 16-bit and the 35 mm cameras 14-bit? Is this merely marketing?

Thanks for clarifying and educating!!

craig
 

Lars

Active member
This whole discussion about bit depth and DR of course assumes a linear sensor and linear processing pipeline. As far as I know, this holds for almost all digital cameras on the market. There are however other possibilities.

Howtek (and now Aztek) drum scanners have an A/D converter that can be loaded with a response curve, so effectively you can place the bins where you need them most, for example using a gamma response curve to push more bins toward the shadows. At gamma 2, 14 bits would be sufficient in terms of DR even for more extreme HDR sensors.
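The gamma-curve claim can be checked with a quick sketch. This is simplified: it ignores noise and treats the smallest nonzero code as the darkest distinguishable value:

```python
import math

def stops_covered(bits, gamma=1.0):
    """Stops between the brightest code and the smallest nonzero code
    when the ADC applies an x**(1/gamma) response curve."""
    smallest_code = 1 / 2 ** bits            # encoded domain, normalized
    darkest_linear = smallest_code ** gamma  # map back to linear light
    return math.log2(1 / darkest_linear)

print(stops_covered(14))           # linear ADC: 14.0 stops
print(stops_covered(14, gamma=2))  # gamma-2 curve: 28.0 stops
```

A gamma-2 curve doubles the covered range for the same bit budget by spending codes more densely in the shadows, just as described.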

Or perhaps we'll see floating point A/D converters in cameras eventually, which would make this discussion about bit depth a bit moot. For imaging purposes 16-bit "half" floats are quite sufficient.
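For scale, Python's struct module can unpack IEEE half floats directly; comparing the largest finite half (0x7BFF) with the smallest normal one (0x0400) shows roughly 30 stops of range, before even counting subnormals:

```python
import math
import struct

half_max = struct.unpack('<e', b'\xff\x7b')[0]         # 0x7BFF = 65504.0
half_min_normal = struct.unpack('<e', b'\x00\x04')[0]  # 0x0400 = 2**-14
print(math.log2(half_max / half_min_normal))           # ~30 stops
```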
 

Shashin

Well-known member
All I was trying to say was that adding a 16-bit ADC was not suddenly going to let a camera capture any more DR than the sensor is going to provide. 16-bit seems to be brought out as some kind of silver bullet that will do amazing things to expand the sensor signal.
 

Jack

Sr. Administrator
Staff member
All I was trying to say was that adding a 16-bit ADC was not suddenly going to let a camera capture any more DR than the sensor is going to provide. 16-bit seems to be brought out as some kind of silver bullet that will do amazing things to expand the sensor signal.
We never said that either. Eric explained it very well, and not wanting to beat a dead horse, but the basic thing to take away from this is that, in addition to color fidelity, bit-depth IS related to total usable DR, and more is better; you cannot generate a true 17 stops of DR from an 8-bit file, because the math will not allow it. What you can do is what Graham showed: PERCEPTUALLY render (compress) 17 stops into 8 bits, but if you read those values you will see they are not true full stops, but at best 8/17ths of a stop, or roughly 1/2-stop, increments of light values.
 

Wayne Fox

Workshop Member
I fear I am getting a little lost in the bit-depth discussion. Is the gist of the answer that current sensors can NOT generate sufficient tonal range to need 16 bits? Is it the consensus, then, that 16-bit A/D is NOT really needed (because it can't really be used) and hence a 14-bit path is just as good (for now) as a 16-bit one?

If so, then why (with the possible exception of the Pentax 645D) are the medium format cameras 16-bit and the 35 mm cameras 14-bit? Is this merely marketing?

Thanks for clarifying and educating!!

craig
I think most of the MF backs are actually 15-bit, not 16. But rather than compressing the data down to 14 bits, they expand it to 16 bits, since that's what most raw processing pipelines use. As mentioned, the bit depth of the resulting file isn't related to the bit-depth sensitivity of the sensor.

Does it make a difference? As one who shoots several formats (NEX-5, M9, 5D Mark II, IQ180), I do know I can pull more shadow detail with less noise from the MF files than anything else, and with the MF I really don't feel I need to bracket unless the situation is extreme (such as the sun in the image). I have intentionally underexposed by 4 f-stops (shooting at ISO 35 but with the same settings I would use at ISO 400) and have processed the resulting file to give identical results to one exposed at 400. I was doing this to research the idea that changing ISO in a Phase back doesn't really do anything to help overall quality, but was rather shocked to see files with literally not much there process out to be equal in quality to those that looked normal.
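The underexposure experiment above works because the raw data is linear: pushing 4 stops in software is just multiplying the linear values by 2**4, the same gain the camera would have applied at a higher ISO. A minimal sketch, assuming hypothetical 16-bit values and ignoring noise:

```python
def push_stops(linear_value, stops, white_level=65535):
    """Digitally push an underexposed linear raw value by `stops`,
    clipping at the white level, much as a raw converter's exposure
    slider effectively does."""
    return min(linear_value * 2 ** stops, white_level)

print(push_stops(512, 4))   # 8192: 4 stops brighter
print(push_stops(8000, 4))  # 65535: highlights clip instead
```

The catch, of course, is that pushing also scales whatever shadow noise is present, which is why this only looks clean on sensors with a very low noise floor.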
 

Thierry

New member
+1

"Bit-depth refers to the number of luminance levels an image is binned into. It has nothing to do with dynamic range which is related to how much signal can a photo site absorb. So a 16-bit image does not automatically give you more dynamic range."
All I was trying to say was that adding a 16-bit ADC was not suddenly going to let a camera capture any more DR than the sensor is going to provide. 16-bit seems to be brought out as some kind of silver bullet that will do amazing things to expand the sensor signal.
 

craigrudlin

New member
Some clarification, please...

So, if two sensors both have the same dynamic range, but one is "piped" into 14 bits and the other into 16 bits, should the 16-bit one allow a greater tonal range, and hence the image will appear to have more tonal range and more of the "micro contrast" or 3-D appearance that characterizes MF and larger-format prints?
 

Jack

Sr. Administrator
Staff member
Some clarification, please...

So, if two sensors both have the same dynamic range, but one is "piped" into 14 bits and the other into 16 bits, should the 16-bit one allow a greater tonal range, and hence the image will appear to have more tonal range and more of the "micro contrast" or 3-D appearance that characterizes MF and larger-format prints?
Not necessarily. If the sensor has a true DR of MORE than 14 stops, then yes.

The thing to try to understand is how a digital sensor renders an image. This is an oversimplification, but consider it as two distinct parts: a sensor that can ONLY respond to luminance values in a linear fashion, and a Bayer filter to render hue; the combined effect of the two then renders saturation, giving you basically an HSL color model to work with. So while bit-depth certainly is important to color fidelity, it also directly relates to the luminance values read off that monochrome sensor.

Since the response is linear, to be most efficient the encoding has to use half of its levels to render the brightest stop, then one stop down gets half of that, one more stop down half of that again, and so on. In 8-bit parlance, the sensor needs 128 of its 256 levels per channel to render the top stop; one stop down it needs 64, one more down 32, then 16, 8, 4, 2 and finally 1. Counting those values, you get a maximum of 8 stops of accurate LINEAR DR rendered. Anything more shown in 8 bits has to be compressed, which may well generate a PERCEPTUALLY pleasing result, but not an accurate linear rendering of total DR. Again, once you understand how a digital sensor generates luminance separately from hue values, you can begin to understand the limitations bit-depth places on linear luminance readouts.
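The level-counting argument above can be tabulated directly; this is just a sketch of the same arithmetic, not tied to any particular camera:

```python
def levels_per_stop(bits):
    """In a linear encoding, the top stop takes half of all levels,
    the next stop half of the remainder, and so on down to 1."""
    counts = []
    level = 2 ** (bits - 1)
    while level >= 1:
        counts.append(level)
        level //= 2
    return counts

print(levels_per_stop(8))  # [128, 64, 32, 16, 8, 4, 2, 1] -> 8 usable stops
```

The list has exactly as many entries as there are bits, which is why a linear N-bit pipeline tops out at N distinguishable stops.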
 