The GetDPI Photography Forum


The effects of pixel sizes

ErikKaffehr

Well-known member
Hi,

There are now and then discussions about whether high pixel density is good or bad. Here is a quick summary:

Benefits of high MP / high pixel density:

  • Higher resolution, almost never limited by lens
  • Somewhat better sharpness
  • More tolerant to sharpening
  • Fewer artifacts

Disadvantages of high MP / high pixel density:

  • Somewhat reduced dynamic range (about 1/2 EV for each doubling of pixel count)
  • Sensor may be more prone to cross talk at large beam angles on technical cameras

Tonality would not be affected by pixel density.

Let's look at some basic facts. In anything like good light, noise in highlights and midtones will be dominated by what is known as shot noise. Shot noise depends on the statistical variation of photons arriving at each pixel site.

For good photon statistics we need a lot of photons. Many decently good sensors have a full well capacity (FWC) of about 60000 e-, which means they can detect about 60000 photons before going into nonlinear behavior. Let's assume 64000, because that is a nice round number...

Normally, we would have something like 3 EV of margin for highlights, so midtones would have 64000 / 2^3 -> 8000 detected photons. Photon arrival follows a Poisson distribution, which has a standard deviation of sqrt(samples), that is sqrt(8000) -> 89.4. That means the signal-to-noise ratio (SNR) would be 89.4, too.

Now, let us assume that we make the pixels half size. That means we can fit 4 small pixels within each large pixel. Each small pixel will hold about 16000 e- at full well, so midtones will collect 2000 photons and the SNR will be sqrt(2000) -> 44.7.
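As a sanity check, here is a small Python sketch of that arithmetic, using the 64000 e- full well and 3 EV highlight headroom assumed above (the numbers are illustrative, not measured):

```python
import math

# Assumed numbers from the discussion above
full_well_large = 64000        # e-, full well of the large pixel
headroom_ev = 3                # highlight headroom below saturation

# Large pixel: midtone signal and shot-noise SNR (Poisson => noise = sqrt(signal))
midtone_large = full_well_large / 2**headroom_ev          # 8000 photons
snr_large = math.sqrt(midtone_large)                      # ~ 89.4

# Half-size pixel: 1/4 the area, so roughly 1/4 the full well
full_well_small = full_well_large / 4                     # 16000 e-
midtone_small = full_well_small / 2**headroom_ev          # 2000 photons
snr_small = math.sqrt(midtone_small)                      # ~ 44.7

print(f"large pixel midtone SNR: {snr_large:.1f}")
print(f"small pixel midtone SNR: {snr_small:.1f}")
```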

It seems that the smaller pixels are noisier, and that would be the case when comparing the two at actual pixels. Let's look at three cameras from Sony that probably use similar generation pixel designs:

ScreenView.jpg

The A7sII at 12 MP has the least noise and the A7rII has the most. But "actual pixels" means that we look at different magnifications. If we viewed those images at 100% on a 24" screen with 1920x1200 resolution, the full image sizes would be:

A7sII -> 45" wide
A9 -> 64" wide
A7rII -> 84" wide

If we instead view the images at the same size, the pixels are effectively binned. Let's say that we print an A2 image at 360 PPI. Image dimensions would be 23.4 × 360 by 16.5 × 360 -> 8424 x 5940 pixels, that is about 50 MP. So in this case the cameras would use:

  • A7sII 12.2 / 50 -> 0.24 pixels per printed pixel
  • A9 24/50 -> 0.48 pixels per printed pixel
  • A7rII 42/50 -> 0.84 pixels per printed pixel

We could assume that FWC scales with pixel area. Assuming around 64000 e- on the A7rII, we would have about 220000 e- on the A7sII and 112000 e- on the A9.

If we now calculate the photons detected for each pixel in the A2 print, still assuming 3 EV under saturation, we get:

  • A7sII -> 220000 / 8 * 0.24 -> 6600
  • A9 -> 112000 / 8 * 0.48 -> 6720
  • A7rII -> 64000 / 8 * 0.84 -> 6720
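A small sketch that ties the print normalization together, using the pixel counts and the scaled FWC figures assumed above (small differences from the figures in the list are just rounding):

```python
# Print normalization sketch: an A2 print at 360 PPI is about 50 MP.
# FWC figures and megapixel counts are the assumed values from the text above.
print_mp = 8424 * 5940 / 1e6      # ~50 MP
headroom = 2**3                   # 3 EV below saturation

cameras = {
    "A7sII": {"mp": 12.2, "fwc": 220000},
    "A9":    {"mp": 24.0, "fwc": 112000},
    "A7rII": {"mp": 42.0, "fwc": 64000},
}

for name, c in cameras.items():
    sensor_px_per_print_px = c["mp"] / print_mp
    photons_per_print_px = c["fwc"] / headroom * sensor_px_per_print_px
    print(f"{name}: {sensor_px_per_print_px:.2f} sensor pixels per print pixel, "
          f"~{photons_per_print_px:.0f} photons per print pixel")
```

The per-print-pixel photon counts come out nearly identical, which is why the cameras look so similar once the images are normalized to the same output size.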

If we look at the DxO-mark data at the "print" setting, we get:

PrintView.jpg

Some older Phase One backs had a feature called Sensor+, which did hardware binning of four pixels into one at a certain ISO. Let's check in DxO-mark. The P65+ has Sensor+ while the P45+ does not. Let's look at the tonal range in "print mode":
PrintView.jpg

Sensor+ doesn't contribute much to tonality. Now, dynamic range in print view:

DynamicRange.jpg

Here DR takes a healthy bump with Sensor+. How come?

Dynamic range is calculated as FWC / read noise. We may assume that FWC is still around 64000 e-; those are large pixels, but a somewhat older design.

We could assume a readout noise of 16 e-, which was fairly typical of that generation of CCD sensors.

So DR for the unbinned sensor would be 64000 / 16 = 4000 -> 12 EV.
Binning in hardware, the signal from four pixels adds up to 256000 e- but is read out once, so we still have 16 e- of readout noise. DR would now be 256000 / 16 = 16000 -> 14 EV.

With software binning the FWC would add up, but the read noise would also add up, in quadrature. So dynamic range would be:

4 * 64000 / (sqrt(4 * 16^2)) -> 256000 / 32 -> 8000 -> 13EV.
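Here is the same dynamic range arithmetic as a short Python sketch, with the assumed 64000 e- FWC and 16 e- read noise:

```python
import math

fwc = 64000        # e-, assumed full well per pixel
read_noise = 16    # e-, assumed read noise per readout

# Single (unbinned) pixel
dr_single = fwc / read_noise                              # 4000 -> ~12 EV

# Hardware binning: charge from 4 pixels is summed before a single readout,
# so the signal quadruples but read noise is added only once.
dr_hw = 4 * fwc / read_noise                              # 16000 -> ~14 EV

# Software binning: 4 separate readouts are combined, so the four
# read-noise contributions add in quadrature.
dr_sw = 4 * fwc / math.sqrt(4 * read_noise**2)            # 8000 -> ~13 EV

for label, dr in [("unbinned", dr_single), ("hardware binning", dr_hw),
                  ("software binning", dr_sw)]:
    print(f"{label}: DR ratio {dr:.0f}, about {math.log2(dr):.1f} EV")
```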

Now let's look at resolution. The images below were shot with a Hasselblad Planar 100/3.5 on two different sensors: the P45+ at 6.8 microns and the Sony A7rII at 4.5 microns.


In this comparison the 4.5 micron sensor reproduces the subject cleanly, indicating that the sensor is a good match for the lens. The 6.8 micron image on the right has a lot of artifacts.

Even after downsizing the 4.5 micron image to the same size as the 6.8 micron image, the small pixel image is still much clearer than the large pixel image.



The two images below were both shot with the Planar 150/4, one at f/8 and the other at f/22:
SD1.jpg

In the left-side image, the lens outresolves the sensor. As you can see, the straight lines start to bend once they pass a central circle. That circle marks the resolution limit of the sensor; what lies beyond it should be a grey mass of unresolved detail. At f/8 the lens produces fake detail in place of the real detail the sensor cannot resolve. In the right-side image, diffraction takes care of the excess resolution, giving a soft image without much fake detail.

The next pair of samples was shot on the A7rII using the same lens at f/5.6 and f/11.
SD2.JPG
The f/5.6 image has a lot of both color and bent-line artifacts. The f/11 image still has some bent-line artifacts, but the color artifacts are gone.

The last two pairs of images demonstrate the advantage of going from 39 MP to around 82 MP on a 37x49 mm sensor, using a high-end lens from the film era.

Summing up:

In general, having more pixels is an advantage. There is a small loss in DR when pixel size is reduced. Modern pixel designs are probably a good compromise.

Best regards
Erik Kaffehr
 