The GetDPI Photography Forum


The 16-Bit Fallacy

cunim

Well-known member
Good observation about frame averaging. Indeed, it is the easiest way to get more precision out of a camera but, on its own, it is not enough. You have to specify over what area the precision value applies. It is much easier to achieve 16 bit precision with 100 pixel areas than with single pixels and I have no idea what it means when a camera manufacturer tells me they do 16 bits. Digitization precision? Averaged over large numbers of pixels? It is certainly not on the single pixel level.

To put it in context, our quantitative imaging systems would be specified at 8-10 bits with fast-readout uncooled cameras, because that is what we could measure reliably in four-pixel blocks. With Peltier cooling or frame averaging, we could go to 10-12 bit precision, depending on how cool and how many frames. Getting beyond that took some serious hardware. To get practical 16 bits out of a camera takes slow readout and cryogenic cooling. Cooling with LN2 and reading out really slowly can give you 16 bit precision at single-pixel levels, but only if you totally mask off the area being read. Even with self-luminescent targets, flare becomes a major factor limiting the ultimate precision of an imaging system once you get to high precisions. For example, we could get 16 bits with bioluminescent targets (no background) but not with fluorescence (background from excitation) or if there were multiple bright targets in the FOV.

What does this all mean for your "16 bit" P1? I dunno. I suspect it is the theoretical precision of the digitization electronics but maybe someone more knowledgeable could chime in.
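The sqrt(N) arithmetic behind frame averaging can be sketched in a few lines of Python. This is a toy simulation with made-up numbers (signal level, read noise, frame count are all assumptions for illustration), not any real camera's pipeline:

```python
import numpy as np

# Toy model: a flat patch read N times with Gaussian read noise.
# All numeric values here are assumptions chosen for illustration.
rng = np.random.default_rng(0)
true_signal = 1000.0      # arbitrary mean level, in electrons
read_noise = 8.0          # assumed read noise, e- RMS per frame
N = 16                    # number of frames averaged

frames = true_signal + rng.normal(0.0, read_noise, size=(N, 100_000))
single_std = frames[0].std()             # noise of one frame, ~8 e-
avg_std = frames.mean(axis=0).std()      # ~8 / sqrt(16) = ~2 e-
bits_gained = np.log2(single_std / avg_std)  # ~2 extra bits of precision
```

Averaging 16 frames buys roughly log2(sqrt(16)) = 2 bits, which is why a precision claim means little unless it states the averaging area or frame count it applies to.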
 

MGrayson

Subscriber and Workshop Member
I looked at the last two bits of a 16-bit image and they were FAR from random - you could clearly see the different regions in the final image. This is a very bad sign, as they were so correlated with higher-order bits that they contained no NEW information. (This is from a Hasselblad X2D - I can't speak for other systems.) Imagine you were measuring the distance to a satellite at VERY high precision. If the twelfth digit of your measurements was highly correlated with the third digit, you would be very suspicious. That's in effect what we're seeing here.

Here's a crop of the last two bits. Nothing was clipped or zeroed in the original. This is pure RAW, hence the Bayer pattern is visible.
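For anyone who wants to repeat the check, isolating and histogramming the bottom two bits takes only a few lines. The sketch below uses synthetic data standing in for a decoded raw mosaic; with a real file you would load the undemosaiced sensor values from your raw decoder of choice instead:

```python
import numpy as np

# Synthetic stand-in for a 16-bit raw mosaic (uniform random values).
rng = np.random.default_rng(0)
raw = rng.integers(0, 2**16, size=(256, 256), dtype=np.uint16)

low2 = raw & 0b11                        # isolate the bottom two bits
counts = np.bincount(low2.ravel(), minlength=4)
freqs = counts / low2.size               # truly random low bits -> ~25% each
```

If these frequencies are far from uniform, or the low-bit image shows the scene's structure as the crop above does, the bottom bits carry no independent information.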


So I believe Mr. Kasson without further data.

Matt
 

SrMphoto

Well-known member
Good observation about frame averaging. Indeed, it is the easiest way to get more precision out of a camera but, on its own, it is not enough. You have to specify over what area the precision value applies. It is much easier to achieve 16 bit precision with 100 pixel areas than with single pixels and I have no idea what it means when a camera manufacturer tells me they do 16 bits. Digitization precision? Averaged over large numbers of pixels? It is certainly not on the single pixel level.
16 vs. 14 bits mainly refers to how data is read from the sensor. The GFX/X2D sensor allows readouts in 16, 14, and 12 bits, at least.
It is possible that IQ4 also uses 16 vs. 14 bits internally for assembling frames, hence the IQ issues with 14 bits.
Frame averaging is not about increasing precision but about averaging out the read noise.

To put it in context, our quantitative imaging systems would be specified at 8-10 bits with fast-readout uncooled cameras, because that is what we could measure reliably in four-pixel blocks. With Peltier cooling or frame averaging, we could go to 10-12 bit precision, depending on how cool and how many frames. Getting beyond that took some serious hardware. To get practical 16 bits out of a camera takes slow readout and cryogenic cooling. Cooling with LN2 and reading out really slowly can give you 16 bit precision at single-pixel levels, but only if you totally mask off the area being read. Even with self-luminescent targets, flare becomes a major factor limiting the ultimate precision of an imaging system once you get to high precisions. For example, we could get 16 bits with bioluminescent targets (no background) but not with fluorescence (background from excitation) or if there were multiple bright targets in the FOV.

What does this all mean for your "16 bit" P1? I dunno. I suspect it is the theoretical precision of the digitization electronics but maybe someone more knowledgeable could chime in.
 

SrMphoto

Well-known member
I looked at the last two bits of a 16-bit image and they were FAR from random - you could clearly see the different regions in the final image. This is a very bad sign, as they were so correlated with higher-order bits that they contained no NEW information. (This is from a Hasselblad X2D - I can't speak for other systems.) Imagine you were measuring the distance to a satellite at VERY high precision. If the twelfth digit of your measurements only took the values of 3 and 7, you would be very suspicious. That's in effect what we're seeing here.
Hasselblad generates 16-bit output both in 14-bit and 16-bit readout modes. Which mode did you use?

Here's a crop. Nothing was clipped or zeroed in the original. This is pure RAW, hence the Bayer pattern is visible.
<snip>

So I believe Mr. Kasson without further data.

Matt
 

MGrayson

Subscriber and Workshop Member
Hasselblad generates 16-bit output both in 14-bit and 16-bit readout modes. Which mode did you use?
I tried it in both; it made little difference! This is from a 16-bit-everything file. But even if they packed a 14-bit image into a 16-bit file, there would be no reason to use higher bits to fill in the last two. Random would be best.
 

SrMphoto

Well-known member
I tried it in both; it made little difference! This is from a 16-bit-everything file. But even if they packed a 14-bit image into a 16-bit file, there would be no reason to use higher bits to fill in the last two. Random would be best.
As with the X1D, in 14-bit readout Hasselblad computes the lowest two bits in the camera. That means they use some algorithm that may sometimes put higher-bit information into the lower bits. Are you saying the data from the 14 and 16-bit readouts is the same?
Have you checked whether this is for all raw data in 16-bit mode or only for some?
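One way such low-bit structure can arise: if 14-bit samples are simply shifted into a 16-bit container, the bottom two bits are constant zeros, and if they are instead synthesized from neighboring values, they inherit the scene. A minimal illustration of the shift case, using made-up sample values:

```python
import numpy as np

# Hypothetical 14-bit samples (max value 16383), not real sensor data.
raw14 = np.array([5000, 5001, 12345, 16383], dtype=np.uint16)

packed = raw14 << 2        # naive left-shift into a 16-bit container
low2 = packed & 0b11       # the bottom two bits of every sample are zero
```

Perfectly zero (or otherwise scene-correlated) low bits are exactly the kind of non-random structure visible in the bit-slice crop.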
 

MGrayson

Subscriber and Workshop Member
As with the X1D, in 14-bit readout Hasselblad computes the lowest two bits in the camera. That means they use some algorithm that may sometimes put higher-bit information into the lower bits. Are you saying the data from the 14 and 16-bit readouts is the same?
Have you checked whether this is for all raw data in 16-bit mode or only for some?
I am not privy to Hasselblad's algorithms. I took a 16 bit file recorded in 16 bit mode and examined different bit slices. Some of the results were quite psychedelic! Beyond that, I did no measurements of Shannon information or anything more sophisticated. This is not difficult to do and I recommend that anyone interested try it themselves. Or ignore me and read Jim Kasson. He is careful and reliable.
 

cunim

Well-known member
@MGrayson, I think there is a discontinuity between the way imaging people (like myself) and photographers view bit depth. To me, bit depth is equivalent to SNR, and saying you have 16 bits is equivalent to having sufficient SNR to discriminate between 65,535 and 65,536 gray levels. I am not sure what it means in photography, though I am sure that, as photographers, we don't meet the imaging criteria.

However, what I just said is not really true at the measurement extremes. There, precision is truncated. If you measure level 65,000 you get a normally distributed (you hope) population of values, and precision is a function of the variance of that distribution. However, if you measure 65,530, you still get a distribution, but this one is truncated on top, so variance is reduced and your SNR seems to be better. The same thing happens at the bottom. It looks like your detector has lower noise (less variance because of truncation) than it actually has. I have seen perfectly good scientists draw erroneous conclusions based on this anomaly.

You can try to recover some of the truncated data. What we used to do is to characterize the distribution of pixel values just before truncation starts to set in, and apply that distribution to the truncated data. Yes it is statistical cheating, but it can save your butt if you've let your data get too light or too dark. I seem to remember we patented the method but I could be wrong. It works better (yields more accurate real world measurements) than just dithering values around the truncation point.
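The truncation effect is easy to reproduce: clip a Gaussian just below full well and the measured standard deviation shrinks, making the sensor look quieter than it is. Illustrative numbers only, not any particular camera:

```python
import numpy as np

# Assumed values for illustration: true noise of 20 DN, mean 5 DN below ceiling.
rng = np.random.default_rng(0)
full_well = 65535                 # 16-bit ceiling
noise = 20.0                      # true noise, in DN
signal = 65530.0                  # mean sits just below the ceiling

samples = rng.normal(signal, noise, size=100_000)
clipped = np.clip(samples, 0, full_well)   # ADC truncates the top of the distribution
# clipped.std() comes out well under 20: variance lost to truncation,
# not to a quieter sensor.
```

Recovering the censored tail means fitting the untruncated part of the distribution and extrapolating into the clipped region - exactly the kind of statistical patching described above.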

No idea why I am getting into all this here, except I wonder if Hasselblad are trying to do something like that? Truncate the final two bits of values and then regenerate them as estimates based on the more reliable data? It wouldn't be real, but it might be decorative.
 

SrMphoto

Well-known member
In all the photographs I have taken, the limits have not been with bit depth.

That's my two bits, anyway...
That is true for most of us.
Still, it is good to understand whether there are any benefits to shooting with 16 bits, as the disadvantages are clear (longer blackout, more rolling shutter, larger files).
 

MGrayson

Subscriber and Workshop Member
Yeah. Adding a new row to the table (there were more, but I can't remember them)

Probability of a Picture Being Ruined By
Insufficient Bit Depth - <0.01%
Insufficient MP - 1%
Dead Battery - 3%
DoF limitation - 5%
No IBIS - 10%
No Tripod - 15%
AF failure - 20%
MF failure - 30%
Other User Error - 40%
Lack of Imagination - 75%
Sitting at Computer - 100%

(Yes, "Left Camera at Home - 100%" should also be there, but it ruins the joke.)
 

buildbot

Well-known member
What we used to do is to characterize the distribution of pixel values just before truncation starts to set in, and apply that distribution to the truncated data.
This is very clever in my opinion -
In some ways, what machine learning is doing today is just matching an output distribution to an input distribution! (You could probably extend 14 bits to 16 using an image generation model, would it be better? IDK).
 

MGrayson

Subscriber and Workshop Member
@MGrayson, I think there is a discontinuity between the way imaging people (like myself) and photographers view bit depth. To me, bit depth is equivalent to SNR, and saying you have 16 bits is equivalent to having sufficient SNR to discriminate between 65,535 and 65,536 gray levels. I am not sure what it means in photography, though I am sure that, as photographers, we don't meet the imaging criteria.

However, what I just said is not really true at the measurement extremes. There, precision is truncated. If you measure level 65,000 you get a normally distributed (you hope) population of values, and precision is a function of the variance of that distribution. However, if you measure 65,530, you still get a distribution, but this one is truncated on top, so variance is reduced and your SNR seems to be better. The same thing happens at the bottom. It looks like your detector has lower noise (less variance because of truncation) than it actually has. I have seen perfectly good scientists draw erroneous conclusions based on this anomaly.

You can try to recover some of the truncated data. What we used to do is to characterize the distribution of pixel values just before truncation starts to set in, and apply that distribution to the truncated data. Yes it is statistical cheating, but it can save your butt if you've let your data get too light or too dark. I seem to remember we patented the method but I could be wrong. It works better (yields more accurate real world measurements) than just dithering values around the truncation point.

No idea why I am getting into all this here, except I wonder if Hasselblad are trying to do something like that? Truncate the final two bits of values and then regenerate them as estimates based on the more reliable data? It wouldn't be real, but it might be decorative.
I like that! I wonder if something similar is used to try to repair clipping in digital audio. I’m so happy that 32-bit floating point audio is now a thing. FP pixels would be great, but audio only needs one such A/D converter per track working a few hundred thousand times per second. Our needs are slower, but we need it once per pixel. 😳
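The floating-point headroom point in a few lines: a transient above nominal full scale survives on the float path but is destroyed on the fixed-point path. The sample value is made up for illustration:

```python
import numpy as np

peak = np.float32(1.5)    # a transient ~3.5 dB over full scale (hypothetical)

as_float = peak           # 32-bit float path: value preserved, merely > 1.0
# Integer PCM path: clamp to the int16 range, then quantize - the peak is gone.
as_int16 = np.int16(np.clip(peak, -1.0, 32767 / 32768) * 32767)
```

With float there is nothing to "repair" afterward; with int16 the only options are the kind of statistical reconstruction described earlier in the thread.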
 

cunim

Well-known member
I like that! I wonder if something similar is used to try to repair clipping in digital audio.
Good point, and not just clipping. Many years ago I bought one of the first dCS Purcell units - an oversampler for digital audio. 16 in, 24 out. From one point of view, that thing did nothing at all. Bits is bits. To my ear, however, it sounded a bit better than straight 16 bit audio. Now, all the cool audio DACs have oversampling and even allow us to select which oversampling method is used. Sadly, my aged ears can't hear such subtle differences any more.

Similarly, it seems as if what Hasselblad is doing is oversampling the 14 bit data. I am sure one of the engineers among us could explain how it is done - and how P1 differs - but my head already hurts from following this.

Sometimes, it's best not to overthink.

turkeys.jpg
 

anwarp

Well-known member
Now, I’m curious about the difference between 16 bit and 16 bit extended modes on the IQ4 150. 1.1 fps vs 0.7 fps.
Based on the DR charts for this DB at photonstophotos, 14 bits should be enough!
 