@MGrayson, I think there is a disconnect between the way imaging people (like myself) and photographers view bit depth. To me, bit depth is equivalent to SNR: saying you have 16 bits is equivalent to having sufficient SNR to discriminate between adjacent gray levels, i.e., to resolve all 65,536 of them. I am not sure what it means in photography, though I am sure that, as photographers, we don't meet the imaging criterion.
However, what I just said is not really true at the measurement extremes. There, precision is truncated. If you measure level 65,000 you get a normally distributed (you hope) population of values, and precision is a function of the variance of that distribution. But if you measure 65,530, you still get a distribution, only this one is truncated on top, so the variance is reduced and your SNR seems to be better. The same thing happens at the bottom. It looks like your detector has lower noise (less variance, because of truncation) than it actually has. I have seen perfectly good scientists draw erroneous conclusions based on this anomaly.
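To make the effect concrete, here is a minimal simulation sketch. The numbers are hypothetical (a 16-bit range clipping at 65,535, read noise of 10 gray levels), but the point is general: the measured standard deviation near full scale comes out smaller than the true noise, purely because of the clip.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 10.0  # hypothetical read noise, in gray levels

# Mid-range signal: the full noise distribution is captured.
mid = rng.normal(65000, sigma, 100_000)

# Near-saturation signal: everything above full scale clips to 65535.
top = np.clip(rng.normal(65530, sigma, 100_000), 0, 65535)

print(f"mid-range std: {mid.std():.2f}")  # ~10, the true noise
print(f"clipped std:   {top.std():.2f}")  # noticeably smaller -- the noise only *looks* lower
```

With the mean just half a sigma below the clip point, roughly the top third of the distribution piles up at 65,535 and the measured std drops well below the true 10 levels, which is exactly the trap described above.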
You can try to recover some of the truncated data. What we used to do is characterize the distribution of pixel values just before truncation starts to set in, then apply that distribution to the truncated data. Yes, it is statistical cheating, but it can save your butt if you've let your data get too light or too dark. I seem to remember we patented the method, but I could be wrong. It works better (yields more accurate real-world measurements) than just dithering values around the truncation point.
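In the same spirit, here is a rough sketch of that idea, with my own hypothetical names and numbers (the actual patented method, if it exists, is surely more sophisticated). The assumption is that you've characterized the detector's mean and sigma just below saturation; clipped pixels are then replaced with draws from the fitted Gaussian tail above the clip point:

```python
import numpy as np

rng = np.random.default_rng(1)
CLIP = 65535

def recover_clipped(pixels, mu, sigma, rng):
    """Replace clipped pixel values with draws from the fitted Gaussian
    tail above the clip point. mu/sigma come from characterizing the
    detector just below saturation -- this is a statistical estimate,
    not the true per-pixel data."""
    out = pixels.astype(float).copy()
    mask = pixels >= CLIP
    n = int(mask.sum())
    # Rejection-sample the tail of N(mu, sigma) above CLIP.
    # (Fine when mu is near CLIP; slow if the clip is many sigma out.)
    samples = []
    while len(samples) < n:
        draw = rng.normal(mu, sigma, 4 * n + 16)
        samples.extend(draw[draw >= CLIP].tolist())
    out[mask] = samples[:n]
    return out

# Simulated overexposed patch: true mean 65530, sigma 10, then clipped.
true_vals = rng.normal(65530, 10, 50_000)
clipped = np.minimum(true_vals, CLIP)
recovered = recover_clipped(clipped, 65530, 10, rng)

print(f"clipped mean:   {clipped.mean():.1f}")    # biased low by the clip
print(f"recovered mean: {recovered.mean():.1f}")  # back near the true 65530
```

The recovered values aren't the real data, of course, but the population statistics (mean, variance) come out much closer to what the scene actually was, which is the whole point of the trick.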
No idea why I am getting into all this here, except I wonder if Hasselblad are trying to do something like that? Truncate the final two bits of values and then regenerate them as estimates based on the more reliable data? It wouldn't be real, but it might be decorative.