The GetDPI Photography Forum


Shot noise question

Abstraction

Well-known member
Let's assume that the Phase One IQ4 150 uses the same sensor technology as the Fuji GFX100, the Nikon Z7, and whatever the latest Sony APS-C camera is... let's call it Sony X.

If I were to tape off the IQ4 150 back so that only a 33x44mm area was exposed, would my noise levels equal the GFX 100's? If I were to leave a 24x36mm area exposed and tape the rest, would my performance exactly match the Nikon Z7's? And would it match the Sony X's if I left only a 17x24mm area exposed?
 

Shashin

Well-known member
If the pixels have the same performance, then simply cropping the sensor would result in equal noise and DR, but different resolution. The question would be how resolution affects the perception of that image. Is the difference significant enough that a viewer would notice under normal viewing conditions?
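
A minimal numerical sketch of that first point, assuming identical pixels whose only noise source is shot noise (the sensor dimensions and signal level below are made up):

import numpy as np

rng = np.random.default_rng(0)

# Uniformly lit "full frame": each pixel collects Poisson-distributed
# photoelectrons with a mean of 1000 e- (a made-up signal level).
full = rng.poisson(lam=1000, size=(2000, 3000))

# "Taping off" the sensor is just taking a central crop.
crop = full[500:1500, 750:2250]

# Per-pixel statistics are unchanged by cropping; only the pixel count drops.
print(full.mean(), full.std())   # ~1000, ~31.6
print(crop.mean(), crop.std())   # ~1000, ~31.6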
 

Abstraction

Well-known member
If the pixels have the same performance, then simply cropping the sensor would result in equal noise and DR, but different resolution. The question would be how resolution affects the perception of that image. Is the difference significant enough that a viewer would notice under normal viewing conditions?
I would think so as well, but the concept of "shot noise" as I have understood it threw me off. If what you're saying is correct, then there is no such thing as "shot noise."
 

pegelli

Well-known member
My idea would be that cropping the sensor (or taping it, as you call it) has exactly the same effect on the shot as printing the full IQ4 150 image on large paper and cutting off the sides to match the view of the cropped (taped) sensor.

However, when you print the full and cropped (taped) results all on the same size paper, the noise will be more noticeable when a smaller area of the sensor is used.
 

Christopher

Active member
There is no camera so far using this sensor technology besides Phase One and Fuji. No 36mm full-frame camera uses it; they are all behind. The same pixel tech would result in around 60MP.

In general I would guess it should work like that. However, there is more to it. As we can see, Phase One gets more DR from the same tech as Fuji.
 

dougpeterson

Workshop Member
This question assumes that a sensor and a camera are the same thing.

A sensor is a component. A camera is a chain of components and engineering decisions: [Lens coating > Lens elements/design > Aperture blade design > internal body coating > microlens size/shape > anti-aliasing filter > IR filter thickness, rolloff and cutoff characteristics > CFA design (see also P1 Trichromatic) > sensor photo well size/design > heat-sinking and/or active cooling > A/D converter type/quality > A/D converter bit-depth and control parameters > frame averaging (if applicable) > black calibration recording > debayering algorithm > color profile intention > color profile quality > deconvolution / detail-finding algorithm > noise reduction based on black calibration file > noise reduction based on image data > sharpening]

The underlying tech/generation of sensor is certainly a big driver in dynamic range (i.e. noise), probably the biggest. But it is very far from the only driver.

In my highly biased opinion, Phase One has consistently gotten more out of every generation of sensor. Not necessarily because they are better engineers than those at other camera companies, but because they prioritize image quality over all else. For example, if adding more heat sinking to a system will decrease noise by a modest amount, P1 will choose the increased image quality. If reading the image out more slowly will improve noise but decrease the speed of continuous shooting, P1 will choose the increased image quality. Other companies may make other choices when faced with such tradeoffs, and they'd be right to do so given the priorities of their intended users. Note I am absolutely NOT saying other companies make cameras with low or even mediocre image quality; for the most part I'm distinguishing between modest differences, maybe best described as the difference between "very good" and "extraordinary". Most people (doing most kinds of photography for most kinds of reasons) are better off with a smaller/lighter/cheaper/faster camera that has "very good" image quality, while Phase One focuses on the people who are okay with a larger/heavier/more-expensive/slower camera that has "extraordinary" image quality.
 

Leigh

New member
Shot noise is a transient event, generally of very short duration.
It gets attenuated by the number of images being averaged.

For example, if you take two images in very rapid succession and average them,
shot noise may appear at a particular position in one image but not in the other.
When you average them, the intensity of the noise is cut in half.

This has nothing to do with pressing the shutter release repeatedly. It's done by
the camera electronics for a single shutter release.

Noise is reduced in a digital camera by averaging many images.

- Leigh
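
A side note on the averaging arithmetic: for uncorrelated noise such as shot noise, averaging n frames reduces the noise standard deviation by roughly the square root of n, so averaging two frames gives about a 1.4x reduction rather than exactly half. A minimal simulation, with a made-up signal level:

import numpy as np

rng = np.random.default_rng(1)
signal = 1000  # mean photoelectrons per pixel per frame (made-up value)

for n_frames in (1, 2, 4, 16):
    frames = rng.poisson(lam=signal, size=(n_frames, 500_000))
    averaged = frames.mean(axis=0)
    # Standard deviation falls roughly as 1/sqrt(n_frames): ~31.6, ~22.4, ~15.8, ~7.9
    print(n_frames, round(averaged.std(), 1))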
 

Abstraction

Well-known member
My question was the result of this article

https://www.dpreview.com/articles/8...e-shedding-some-light-on-the-sources-of-noise

As I understand it, all else being equal, smaller crops of the same sensor should inherently produce more noise. Therefore, if this is true, the logic suggests that if we physically tape off parts of a Phase One IQ4 150 back (I'm using it as an example because it's the largest consumer sensor on the market and it is the latest-generation sensor), then we should start seeing progressively more and more noise.

Is that the case? If it's not the case, then the whole article is just complete hogwash.

This is what I'm trying to wrap my head around.
 

pegelli

Well-known member
My question was the result of this article

https://www.dpreview.com/articles/8...e-shedding-some-light-on-the-sources-of-noise

As I understand it, all else being equal, smaller crops of the same sensor should inherently produce more noise. Therefore, if this is true, the logic suggests that if we physically tape off parts of a Phase One IQ4 150 back (I'm using it as an example because it's the largest consumer sensor on the market and it is the latest-generation sensor), then we should start seeing progressively more and more noise.

Is that the case? If it's not the case, then the whole article is just complete hogwash.

This is what I'm trying to wrap my head around.
The article isn't clear, but in my mind it all depends on the output size. I think if you just crop and keep the same magnification (so a smaller output size) you get equal noise (assuming the same light, same sensor, same camera pipeline, etc.). If you blow the smaller image up to the same output size, you start seeing increased noise. That's the trouble with all these "equivalence" discussions: if you're not clear on what's held constant and what's being varied, almost any statement can be made to look true. However, I didn't study the whole article in detail, so YMMV.
 

JimKasson

Well-known member
I would think so as well, but the concept of "shot noise" as I have understood it threw me off. If what you're saying is correct, then there is no such thing as "shot noise."
In your mind, is there such a thing as photon noise, with a Poisson distribution? When people talk about shot noise in sensors, that's usually what they are referring to.

Jim
 

JimKasson

Well-known member
Shot noise is a transient event, generally of very short duration.
It gets attenuated by the number of images being averaged.

For example, if you take two images in very rapid succession and average them,
shot noise may appear at a particular position in one image but not in the other.
When you average them, the intensity of the noise is cut in half.

This has nothing to do with pressing the shutter release repeatedly. It's done by
the camera electronics for a single shutter release.

Noise is reduced in a digital camera by averaging many images.
It's not that shot noise may appear in one pixel in one exposure and not in others. The concept of shot noise -- or, more commonly, photon noise -- is that the quantized electron count at each pixel location has Poisson statistics. You can't say anything about shot noise at a pixel by looking at a single sample. It is common when analysing sensor performance for shot noise to treat the pixels as identical (after correction for pattern errors such as FPNU), so that statistics for each pixel can be obtained by proxy in a single exposure or a pair of such shots.

Jim
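
A minimal sketch of that "statistics by proxy" idea, assuming an ideal sensor with identical pixels and a made-up flat-field signal level:

import numpy as np

rng = np.random.default_rng(2)
mean_electrons = 2000  # made-up flat-field signal, in photoelectrons

# Two flat-field exposures; because every pixel is treated as identical,
# statistics across pixels stand in for statistics at one pixel over many shots.
a = rng.poisson(lam=mean_electrons, size=1_000_000)
b = rng.poisson(lam=mean_electrons, size=1_000_000)

# Poisson signature: the variance equals the mean.
print(a.mean(), a.var())    # both ~2000

# Pair-of-frames method: fixed-pattern differences would cancel in (a - b),
# leaving twice the per-frame temporal variance.
print((a - b).var() / 2)    # ~2000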
 

Leigh

New member
My question was the result of this article
I only read up through his discussion of signal-to-noise ratio (SNR).

SNR is a well-known phenomenon, commonly used in communications systems.
It's a measure of signal quality, as over a telephone line, radio, or data system.

Of significance is that it's averaged over time, usually several seconds, not
instantaneous. It certainly is not measured over the duration of a shot noise
event, which is a millionth of a second or a millionth of a millionth of a second.

Shot noise is random; think lightning. Lightning is the main source of "shot"
noise in a radio, as you know by listening to any AM station.

The chance of a lightning strike in your yard is very small (assuming it's flat with no
trees or similar). The chance of two strikes at the same point is extremely small.

Shot noise works the same way. It's distributed uniformly across the entire
surface of the sensor. So the chance of it hitting any individual pixel is just
the overall area divided by the area of a single pixel. The chance of two hitting
at the same place is infinitesimal.

Noise (any random type) can be reduced by taking multiple sample images in
very rapid succession (camera electronics, not shutter presses). Add them all
together, then average the result to reduce the effect of noise pulses.

Note that smaller pixels may be more sensitive than larger ones (evidenced by
higher ISO rating), so more sensitive to weaker noise pulses.

- Leigh
 

JimKasson

Well-known member
I only read up through his discussion of signal-to-noise ratio (SNR).

SNR is a well-known phenomenon, commonly used in communications systems.
It's a measure of signal quality, as over a telephone line, radio, or data system.

Of significance is that it's averaged over time, usually several seconds, not
instantaneous. It certainly is not measured over the duration of a shot noise
event, which is a millionth of a second or a millionth of a millionth of a second.

Shot noise is random; think lightning. Lightning is the main source of "shot"
noise in a radio, as you know by listening to any AM station.

The chance of a lightning strike in your yard is very small (assuming it's flat with no
trees or similar). The chance of two strikes at the same point is extremely small.

Shot noise works the same way. It's distributed uniformly across the entire
surface of the sensor. So the chance of it hitting any individual pixel is just
the overall area divided by the area of a single pixel. The chance of two hitting
at the same place is infinitesimal.

Noise (any random type) can be reduced by taking multiple sample images in
very rapid succession (camera electronics, not shutter presses). Add them all
together, then average the result to reduce the effect of noise pulses.

Note that smaller pixels may be more sensitive than larger ones (evidenced by
higher ISO rating), so more sensitive to weaker noise pulses.

- Leigh
There is a fundamental misunderstanding here. Counting quanta -- like photons or electrons -- is inherently noisy. It is wrong to think of "shot noise events", unless you mean that there's an event every time a free electron appears in a sensor photodiode.

Here's a relevant quote from Wikipedia: "In optics, shot noise describes the fluctuations of the number of photons detected (or simply counted in the abstract) due to their occurrence independent of each other. This is therefore another consequence of discretization, in this case of the energy in the electromagnetic field in terms of photons. In the case of photon detection, the relevant process is the random conversion of photons into photo-electrons for instance, thus leading to a larger effective shot noise level when using a detector with a quantum efficiency below unity. Only in an exotic squeezed coherent state can the number of photons measured per unit time have fluctuations smaller than the square root of the expected number of photons counted in that period of time."

https://en.wikipedia.org/wiki/Shot_noise

RF noise induced by lightning has different statistics than shot noise.

Jim
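
A quick worked example of that square-root relationship (the photon count and quantum-efficiency figures below are made up):

import math

photons = 40_000    # expected photons at a pixel during the exposure

# Shot-noise fluctuation equals the square root of the expected count,
# so SNR = N / sqrt(N) = sqrt(N).
print(math.sqrt(photons))        # SNR ~200 if every photon is detected

# With quantum efficiency below unity, fewer photoelectrons are counted,
# so the relative shot noise is larger (lower SNR).
qe = 0.5
print(math.sqrt(qe * photons))   # SNR ~141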
 

JimKasson

Well-known member
This question assumes that a sensor and a camera are the same thing.

A sensor is a component. A camera is a chain of components and engineering decisions: [Lens coating > Lens elements/design > Aperture blade design > internal body coating > microlens size/shape > anti-aliasing filter > IR filter thickness, rolloff and cutoff characteristics > CFA design (see also P1 Trichromatic) > sensor photo well size/design > heat-sinking and/or active cooling > A/D converter type/quality > A/D converter bit-depth and control parameters > frame averaging (if applicable) > black calibration recording > debayering algorithm > color profile intention > color profile quality > deconvolution / detail-finding algorithm > noise reduction based on black calibration file > noise reduction based on image data > sharpening]

The underlying tech/generation of sensor is certainly a big driver in dynamic range (i.e. noise), probably the biggest. But it is very far from the only driver.

In my highly biased opinion, Phase One has consistently gotten more out of every generation of sensor. Not necessarily because they are better engineers than those at other camera companies, but because they prioritize image quality over all else. For example, if adding more heat sinking to a system will decrease noise by a modest amount, P1 will choose the increased image quality. If reading the image out more slowly will improve noise but decrease the speed of continuous shooting, P1 will choose the increased image quality. Other companies may make other choices when faced with such tradeoffs, and they'd be right to do so given the priorities of their intended users. Note I am absolutely NOT saying other companies make cameras with low or even mediocre image quality; for the most part I'm distinguishing between modest differences, maybe best described as the difference between "very good" and "extraordinary". Most people (doing most kinds of photography for most kinds of reasons) are better off with a smaller/lighter/cheaper/faster camera that has "very good" image quality, while Phase One focuses on the people who are okay with a larger/heavier/more-expensive/slower camera that has "extraordinary" image quality.
There are many things on your list which don't affect dynamic range measured at the raw file, but I take your meaning. To get an idea of what kinds of differences can occur with the same sensor in four different cameras, take a look at this:

http://photonstophotos.net/Charts/P...elblad H6D-50c,Hasselblad X1D-50c,Pentax 645Z

Jim
 

Abstraction

Well-known member
Ok, so back to the original question:

Will the shot noise increase as I mask the IQ4 150 back, and would that shot noise progressively match the shot noise of the smaller-sensor cameras as we go down the line?
 

Shashin

Well-known member
My question was the result of this article

https://www.dpreview.com/articles/8...e-shedding-some-light-on-the-sources-of-noise

As I understand it, all else being equal, smaller crops of the same sensor should inherently produce more noise. Therefore, if this is true, the logic suggests that if we physically tape off parts of a Phase One IQ4 150 back (I'm using it as an example because it's the largest consumer sensor on the market and it is the latest-generation sensor), then we should start seeing progressively more and more noise.

Is that the case? If it's not the case, then the whole article is just complete hogwash.

This is what I'm trying to wrap my head around.
Well, DPreview is a proponent of the Equivalency hypothesis. Unfortunately, they cannot tell the difference between cause and correlation, believing sensor size in and of itself is the criterion. Yes, given the same technology (a variable they never really discuss) and the same pixel count, the pixel size changes with the format size, making the pixels on the larger sensor more efficient. Their conclusion is that sensor size is the cause, even though their explanation earlier on would suggest it is actually pixel size. Once you have the same pixel size (and technology), you are getting the same shot noise. Unfortunately, the Equivalency hypothesis has become ingrained in amateur photography, but there is more to photography and exposure than geometric relationships. This is why it is really difficult to explain why an APS-C camera, the Fuji X-Pro2, has better noise performance in the DPreview studio scene than the Leica M10 released a year later, if sensor size and "light gathering" really were the determining factors as they claim. (Naturally, their argument would be technology, but that is just the fudge factor to cover the limits of the Equivalency hypothesis. It is simply confirmation bias.)

I would probably go to another source for your information. Beyond a simple model for discussing how format relates to other attributes like depth of field and angle of view, the Equivalency hypothesis has limited use. Having worked with scientific instrumentation, I can say pixel attributes are far more important than simple sensor size; sensor size is not irrelevant, but not in the ways described by the Equivalency hypothesis. And once you start working with photomultiplier tubes and APDs on confocal microscopes, what does sensor size mean? PMTs and APDs have a pixel count of exactly one, and the "format size" (scan area) does not change "light gathering."
 

Shashin

Well-known member
However, when you print the full and cropped (taped) results all on the same size paper, the noise will be more noticeable when a smaller area of the sensor is used.
Noise is a generalized effect. So why would the noise be perceptible under one condition and not in the other? Does viewing distance increase or decrease noise?
 

pegelli

Well-known member
Noise is a generalized effect. So why would the noise be perceptible under one condition and not in the other? Does viewing distance increase or decrease noise?
Good question, but I forgot to mention that I was assuming equal viewing distance of all the outputs mentioned in the post you (partly) quoted.
 

JimKasson

Well-known member
Ok, so back to the original question:

Will the shot noise increase as I mask the IQ4 150 back, and would that shot noise progressively match the shot noise of the smaller-sensor cameras as we go down the line?
Larger sensors as a rule have greater FWCs, and therefore a higher SNR for shot noise, which is the square root of the electron count. But the situation you described holds the FWC constant as you make the sensor area smaller, so the per-pixel SNR will stay the same. However, at a constant print size, you'll have more pixels contributing to each square mm on the print with the larger sensor, and thus the SNR will be higher for the larger sensor.

Jim
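
A minimal simulation of that last point (illustrative only; the signal level and the 4:1 pixel ratio are made-up figures). Averaging k independent pixels into one print area leaves the mean unchanged but cuts the noise standard deviation by roughly sqrt(k), which is why the ratio improves:

import numpy as np

rng = np.random.default_rng(3)
signal = 1000  # made-up per-pixel mean, in photoelectrons

# Per-pixel SNR is the same whether the sensor is cropped or not.
print(signal / np.sqrt(signal))    # ~31.6

# At a fixed print size, suppose the larger sensor puts 4 pixels into each
# patch of the print that the cropped sensor covers with a single pixel.
pixels = rng.poisson(lam=signal, size=(1_000_000, 4))

one_per_patch = pixels[:, 0]           # cropped sensor: 1 pixel per patch
four_per_patch = pixels.mean(axis=1)   # larger sensor: average of 4 pixels

print(one_per_patch.mean() / one_per_patch.std())    # ~31.6
print(four_per_patch.mean() / four_per_patch.std())  # ~63, i.e. about 2x higher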
 
However, at a constant print size, you'll have more pixels contributing to each square mm on the print with the larger sensor, and thus the SNR will be higher for the larger sensor.
More pixels means more noisy pixels in the absolute sense, but why would the ratio be any higher?
 