The GetDPI Photography Forum


Canon 1DX and implications for MF

pophoto

New member
The sensor isn't the issue. A 60MP sensor in a full frame 35mm camera would be usable up to about f5.6 before it is diffraction limited. And most lenses can't resolve the necessary 130 line pairs per mm anyway. Not even close. So tell me again why this makes sense.
The part that makes sense was to do with what YOU said in your previous post about Canon not operating like this :)
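For anyone who wants to sanity-check the quoted figures, here is a rough back-of-the-envelope sketch in Python; the 3:2 frame dimensions and ~550nm green light are assumptions on my part, not numbers from the post.

Code:
import math

# Rough check of the quoted figures for a hypothetical 60MP full-frame sensor.
MEGAPIXELS = 60e6
SENSOR_W, SENSOR_H = 36.0, 24.0   # mm, standard 35mm full frame (3:2 assumed)
WAVELENGTH = 0.00055              # mm, ~550nm green light (assumed)

pixels_wide = math.sqrt(MEGAPIXELS * SENSOR_W / SENSOR_H)
pixel_pitch = SENSOR_W / pixels_wide          # mm per pixel
nyquist = 1.0 / (2.0 * pixel_pitch)           # line pairs per mm at the sensor

print(f"pixel pitch ~ {pixel_pitch * 1000:.2f} um")
print(f"Nyquist limit ~ {nyquist:.0f} lp/mm")   # lands near the quoted 130 lp/mm

# Rule-of-thumb aperture where the Airy disk diameter (2.44 * wavelength * N)
# spans about two pixels
f_number = 2.0 * pixel_pitch / (2.44 * WAVELENGTH)
print(f"diffraction starts to bite around f/{f_number:.1f}")

Run as written it gives roughly 132 lp/mm and f/5.7, so the quoted 130 lp/mm and "about f5.6" are at least self-consistent.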
 

lowep

Member
Am I right in guessing this announcement is not bad news for MFDB manufacturers, dealers and traders, as Canon has politely kept its big paws out of their honey pot?
 

yaya

Active member
I wonder if this is not just a 1-2 yr product...filling a gap before a new body design comes out...
 

KeithL

Well-known member
Am I right in guessing this announcement is not bad news for MFDB manufacturers, dealers and traders, as Canon has politely kept its big paws out of their honey pot?
Would Canon's big paws fit such a small pot?
 

Jack

Sr. Administrator
Staff member
Actually Shashin was correct. You can have a 16-bit image with only 1 stop of DR, or an 8 bit image with 16 stops of DR. The DR of the sensor is independent of the number of bits used.
Absolutely.

Thierry
Actually, Lars is correct -- You can have 1 stop of DR displayed in 16-bit, but you really only need one of the bits to do it; and you very definitely cannot have 16 stops of DR displayed in only 8 bits of data per channel; the math won't allow it. Bit depth IS directly related to rendered image DR. It is an often misunderstood concept, but it is all in the math.
 

Graham Mitchell

New member
Actually, Lars is correct -- You can have 1 stop of DR displayed in 16-bit, but you really only need one of the bits to do it; and you very definitely cannot have 16 stops of DR displayed in only 8 bits of data per channel; the math won't allow it. Bit depth IS directly related to rendered image DR. It is an often misunderstood concept, but it is all in the math.
Sorry, Jack, you're wrong. Remember that the DR is about the difference in light in the *scene*, not the file itself. You can capture a scene with 12 stops of DR and save the file as 8-bit, and the file will still represent those 12 stops.
 

Shashin

Well-known member
Bit depth is the coding of the sensor response, as Graham is saying. No matter the DR, the bit depth simply bins (divides) the luminance levels into the number of levels it represents--it is simply part of the conversion of the analog signal into a digital one. Adding more bits does not extend DR, it just divides it into smaller luminance levels/steps.

The advantage in bit depth is in processing, where it can help prevent things like banding. The reason 8-bit is the minimum in imaging is that to give the illusion of a seamless gradient from black to white you need approximately 200 levels of gray, and 8-bit gives 256. But if your histogram holds data across only a third of its range, banding will start to appear when you expand that data--this can be prevented with higher bit depths. This is also helpful in areas of the image like the highlights and shadows when you want to expand the data to show detail--highlight recovery would be an example. (But RAW processing cannot change the DR of an image--the sensor is responsible for that and has already encoded the information in the file before you can process it.)

Photoshop et al. make this even more confusing, as all the numbers and preview images are 8-bit--open a 16-bit image in Levels and the white point is at 255, not 65,535. Make a really large change to the image and it will band in the preview, but smooth out after the correction is applied.
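Not Shashin's numbers, but a minimal sketch of the banding effect described above, assuming a synthetic gradient that only occupies a third of the tonal range (NumPy used for convenience):

Code:
import numpy as np

# A gradient occupying about a third of the tonal range, stored at two bit
# depths, then stretched to full range as a levels-style correction would.
ramp = np.linspace(0.0, 1.0 / 3.0, 2048)

def stretch(values, bits):
    levels = 2 ** bits - 1
    quantized = np.round(values * levels) / levels   # storage precision
    expanded = np.clip(quantized * 3.0, 0.0, 1.0)    # expand 1/3 range to full
    return np.round(expanded * 255)                  # final 8-bit display values

for bits in (8, 16):
    distinct = len(np.unique(stretch(ramp, bits)))
    print(f"{bits}-bit source -> {distinct} distinct display levels out of 256")

The 8-bit source ends up with well under 100 distinct levels after the stretch (visible banding), while the 16-bit source fills essentially all 256--which is the processing headroom point.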
 

Shashin

Well-known member
60MP in a 35mm sensor. Hmm... You do know that photography is light dependent, and more pixels do not change that. Pixels actually having surface area is important. It is all about catching photons. ;)
 

madmanchan

Guest
Don't want to get too sidetracked here ... but Jack and Lars are correct about the capture dynamic range of a digital sensor.

Yes, it is true that from a representation / storage point of view, the dynamic range and bit depth are independent. The former is the contrast (height of the staircase) and the latter is the # of steps.

However ...

A digital sensor is a linear capture device, so the bit depth is therefore directly related to the maximum possible dynamic range that can be captured. Remember, a linear device means that 2x the amount of light (# of photons) captured translates to 2x the digital image value. For example, a sensor with a 12-bit ADC represents the brightest recordable pixel value at 4095. One stop below that (half the light) is ~2048. Two stops below white (1/4 of the light) is ~1024, then ~512 for 3 stops, then ~256 for 4 stops, etc. all the way till we reach the minimum representable value (the integer 1), which is 12 stops below the maximum value of 4095. So, there is no way to capture a scene of, say, 14 stops, because 14 stops below 4095 is ~0.25 (less than 1).

What many folks here are thinking of is the output dynamic range of an image (rather than the input or capture dynamic range). Certainly it is true that one can use tone curves, local dodging/burning, gamma encoding, etc. to distribute tonal values however you wish, and use however many bits of precision you want to store the results. For example, you can choose to tone-map an image into 1 stop (really low contrast!) and use 16 bits to represent it. But this is a completely separate matter from the capture dynamic range (i.e., what the sensor is capable of holding).
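A quick sketch of that staircase arithmetic, using the same 12-bit example values as the post (this is just the math restated, not anything measured from an actual camera):

Code:
import math

# For a linear 12-bit capture, each stop below clipping halves the raw value,
# until the value falls below the smallest code and cannot be represented.
BITS = 12
white = 2 ** BITS - 1   # 4095, the brightest recordable value

for stops in range(15):
    value = white / 2 ** stops
    note = "" if round(value) >= 1 else "  <- cannot be represented"
    print(f"{stops:2d} stops below white -> raw value ~{value:7.2f}{note}")

# Usable linear range is about log2(4095) ~= 12 stops, which is why a 14-stop
# scene will not fit in a single linear 12-bit exposure.
print(f"max linear range ~ {math.log2(white):.1f} stops")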
 

Shashin

Well-known member
A digital sensor is a linear capture device, so the bit depth is therefore directly related to the maximum possible dynamic range that can be captured. Remember, a linear device means that 2x the amount of light (# of photons) captured translates to 2x the digital image value. For example, a sensor with a 12-bit ADC can represent the brightest pixel value at 4095. One stop below that (half the light) is ~2048. Two stops below white (1/4 of the light) is ~1024, then ~512 for 3 stops, then ~256 for 4 stops, etc. all the way till we reach the minimum representable value (the integer 1), which is 12 stops below the maximum value of 4095. So, there is no way to capture a scene of, say, 14 stops, because 14 stops below 4095 is ~0.25 (less than 1).
???? Now who is confusing the file with the signal?

If DR has a range of 14 stops, I can divide/bin that into any bit depth. The top number would be the peak signal and regardless of bit depth, the peak signal is the same, just a different value assigned to it. Only contrast index, for want of a better term, determines the actual difference in exposure value between two levels. To say the difference between levels 2048 and 4095 is one stop would be completely wrong or at least unknown until you can do some calculations. The sensor response may be linear, but the length is not fixed and bit-depth can be distributed along it no matter the length.
 

madmanchan

Guest
If DR has a range of 14 stops, I can divide/bin that into any bit depth.
Yes, you can, but the sensor cannot. ;)

To say the difference between levels 2048 and 4095 is one stop would be completely wrong or at least unknown until you can do some calculations.
It turns out I do these calculations for a living. :)

Suppose I take a picture of an object and it shows up in the original (linear Bayer mosaic) raw data as 4095. If I re-take the picture with half the exposure (e.g., close down 1 f-stop, or half the exposure time), then the pixel values of that object in the 2nd image will be 2048 -- i.e., half. You can replace these example numbers with any specific values you wish. The point is that it's a direct linear relationship in the original raw capture data. So, 2x the exposure means 2x the raw image pixel values ... half the exposure means half the raw pixel values. I have checked this property for many, many cameras (from compacts to MFDBs).
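For anyone who wants to try this at home, here is a sketch of that kind of check; the file names and patch coordinates are placeholders, and rawpy is just one convenient way to get at the undemosaiced raw values (dcraw works too):

Code:
import numpy as np
import rawpy   # any decoder that exposes the unprocessed raw data will do

def mean_raw_patch(path, y0, y1, x0, x1):
    # Average the original linear (pre-demosaic) raw values over a patch,
    # after subtracting the black level offset.
    with rawpy.imread(path) as raw:
        data = raw.raw_image_visible.astype(np.float64)
        black = float(np.mean(raw.black_level_per_channel))
        return float(np.mean(data[y0:y1, x0:x1] - black))

# Placeholder files: the same static target shot twice, the second frame one
# stop darker (half the shutter time). Patch coordinates are placeholders too.
full = mean_raw_patch("target_base.CR2", 1000, 1100, 2000, 2100)
half = mean_raw_patch("target_minus_1ev.CR2", 1000, 1100, 2000, 2100)

print(f"raw value ratio: {full / half:.3f}  (a linear sensor should give ~2.0)")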
 

Shashin

Well-known member
Yes, you can, but the sensor cannot. ;)



It turns out I do these calculations for a living. :)

Suppose I take a picture of an object and it shows up in the original (linear Bayer mosaic) raw data as 4095. If I re-take the picture with half the exposure (e.g., close down 1 f-stop, or half the exposure time), then the pixel values of that object in the 2nd image will be 2048 -- i.e., half. You can replace these example numbers with any specific values you wish. The point is that it's a direct linear relationship in the original raw capture data. So, 2x the exposure means 2x the raw image pixel values ... half the exposure means half the raw pixel values. I have checked this property for many, many cameras (from compacts to MFDBs).
Well, naturally the file will change those values by a factor of two. You have changed the exposure by a factor of two. But what we are talking about is the scene luminance range and the DR of the sensor. What you cannot tell me is if the luminance difference in the scene is actually a factor of two.

I do scientific imaging and use all kinds of cameras. It is not a simple thing to get subject luminance values from images.
 

madmanchan

Guest
I was using the in-camera exposure setting as an example.

But the same holds true for natural scene luminance range, and is actually pretty easy to check using a spot meter (I use a telespectroradiometer such as the Photo Research devices, but simpler devices will work too). Just measure the radiance at a spot and take a picture with the camera. Do the same thing for a darker (or brighter) spot. Compare the ratio of the radiances with the ratio of the recorded digital raw values. They will be the same.
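The comparison itself is just two ratios; here is the arithmetic with made-up readings standing in for real spot-meter and raw-file measurements:

Code:
import math

# Made-up readings standing in for a real spot meter / spectroradiometer and
# for patch averages pulled out of a raw file.
radiance_bright = 120.0   # cd/m^2, hypothetical bright patch
radiance_dark = 15.0      # cd/m^2, hypothetical dark patch
raw_bright = 3600.0       # hypothetical mean raw value of the bright patch
raw_dark = 450.0          # hypothetical mean raw value of the dark patch

scene_ratio = radiance_bright / radiance_dark
raw_ratio = raw_bright / raw_dark

print(f"scene ratio {scene_ratio:.1f}x ({math.log2(scene_ratio):.1f} stops)")
print(f"raw ratio   {raw_ratio:.1f}x ({math.log2(raw_ratio):.1f} stops)")
# For a linear capture the two ratios should agree; here both come out to 8x.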
 

Shashin

Well-known member
Well, the ratio will stay the same. After all, the slope is linear (but the angle is unknown). However, the signal can still be binned into any bit depth for any DR, and the tests you do will show the same relative changes (but they do not indicate actual luminance levels)--I can assign 4095 to the peak signal just as I can 65,535. Changing the bit depth will not change the DR of any particular sensor--the signal determines DR. Bit depth comes from the ADC; it just bins the analog signal into levels. A signal has no bits, it is simply the electrical response from the photosite. The ADC needs to convert that into numbers, and the bit depth is simply the scale it uses--but my coffee cup is not taller in millimeters than it is in centimeters because it has more of them.
 

madmanchan

Guest
Shashin, I am not saying changing the bit depth of the ADC fundamentally changes the photon capturing properties of the sensor. I am saying that the bit depth of the ADC places a fundamental limit on the linear dynamic range of the scene that can be captured. If a scene contains a very bright area, I can choose my in-camera exposure such that, after conversion to a digital value, it will map to the maximum representable value (e.g., 4095 for a 12-bit system). If that same scene contains a dark area -- say, 8 stops darker than the bright area -- then in the same picture it will map to a digital value that is 8 stops lower (e.g., ~16 for a 12-bit system). If that same scene contains an even darker area -- say, 15 stops darker than the bright area, then it cannot be represented.

The sensor ADC does not simply "bin" the analog signal into the digital levels. It is not free to scale the analog signal into however many bins it wants. There is a direct linear relationship between the magnitude of the analog signal and the resulting output digital level. If you have 2x the radiance coming from your scene, you'll have 2x the # of photons captured by the pixel, 2x the magnitude of the analog signal, and 2x the resulting digital level.

If you don't believe me, you can easily verify this for yourself by taking spot readings of the radiance levels in the scene (like I mentioned earlier, telespectroradiometers are great for this type of work). Then take a picture in raw mode and study the digital raw levels (you can use dcraw or other software tools for this). Thus you can establish the relationship between the absolute radiance of a spot in the scene and the corresponding raw level, which you'll find to be linear ...
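To make that mapping concrete, here is a tiny sketch with a made-up set of scene patches described by how far they sit below the brightest area (the 12-bit figure matches the example above):

Code:
# Exposure is assumed to put the brightest patch right at the clipping point
# of a linear 12-bit ADC; every other patch lands a power of two below it.
BITS = 12
clip = 2 ** BITS - 1   # 4095

scene_patches = {          # stops below the brightest area (made-up scene)
    "bright highlight": 0,
    "midtone wall": 4,
    "open shade": 8,
    "deep shadow": 15,
}

for name, stops_below in scene_patches.items():
    value = clip / 2 ** stops_below
    status = f"~{value:.0f}" if value >= 1.0 else "below 1 -> not representable"
    print(f"{name:16s} {stops_below:2d} stops down -> raw {status}")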
 

madmanchan

Guest
Yes, Graham, but those aren't the actual values from the raw file ... ;)

As I mentioned earlier, a rendered image (like the example you posted) can indeed represent however many stops you want with whatever bit depth you want. However, the original raw data in a digital capture can't hold more stops than the bit depth of the sensor's ADC.
 