Thanks for taking the time to formulate that explanation, gcogger.
However, I must confess to having a difficult time getting my head around it. You have added another variable to the equation with aperture. I thought, perhaps wrongly, that the image circle was constant regardless of aperture. And I still cannot see how the images would be the same in your example.
Do you know of any link that graphically explains this?
...
b)
For everything to be equivalent we have to mix in the aperture value. This has nothing to do with the image circle, but with the DOF.
Jonas
Yes, that's exactly it - the aperture needs to change if you want to keep the DOF the same (otherwise the images will not be identical). Sorry Ron, I forgot to explain why I was changing the aperture. It's difficult to explain why the captured images would be the same without some diagrams, but I'll have a go.
Let's just think in one dimension at the moment - the height of an object. Say a 50mm lens is 'looking' at an object - it projects an image of the object, at the sensor, that is 10mm high. A 100mm lens will, in this case, project an image 20mm high.
Assuming the sensors are 12MP (4000x3000 pixels, to make the maths easy) - the 4/3 sensor is 13.5mm high, so our mythical 4:3 ratio 'full frame' sensor will be 27mm high.
The 50mm on the 4/3 camera will therefore project an image 10mm high on a 13.5mm height sensor - about 3/4 of the frame height. The object will be recorded by ~2250 pixels, occupying 3/4 of the height of the frame.
The 100mm on the 'full frame' camera will project an image 20mm high on a 27mm height sensor - again, about 3/4 of the frame height. The object will (as before) be recorded by ~2250 pixels, occupying 3/4 of the height of the frame.
Obviously, the same calculation applies for the width of an object. The data recorded in the image file will therefore be the same in both cases.
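The arithmetic above can be sketched in a few lines (my own illustration, not from the original post; the function name is made up, and the numbers are just the example values used here - note the exact figure comes out at ~2222 pixels, which the post rounds to ~2250):

```python
# Projected image height scales with focal length; the pixel count it
# covers depends on sensor height and the sensor's pixel count.

def pixels_covered(image_height_mm, sensor_height_mm, pixels_high):
    """Pixels spanned by an object's projected image on the sensor."""
    return image_height_mm / sensor_height_mm * pixels_high

# 50mm lens on the 4/3 camera: 10mm image on a 13.5mm, 3000-pixel-high sensor
four_thirds = pixels_covered(10, 13.5, 3000)

# 100mm lens on the 'full frame': 20mm image on a 27mm, 3000-pixel-high sensor
full_frame = pixels_covered(20, 27, 3000)

print(round(four_thirds), round(full_frame))  # the same in both cases: 2222 2222
```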
Note that the size of the image circle is not relevant to any of this (as long as it is big enough to cover the sensor), and neither is the aperture - the size of the image projected at the sensor is determined just by the focal length. The reason we have to adjust the aperture to achieve the same DOF is a little more complex: DOF is determined by how sharp something looks in the final print, not at the sensor.
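As a hedged sketch of the aperture adjustment (my illustration, using the standard rule of thumb rather than anything stated in this thread): for the same framing and print size, DOF matches when the f-number is scaled by the crop factor, which keeps the physical aperture diameter (focal length divided by f-number) the same.

```python
crop_factor = 2  # the 'full frame' here is 2x the 4/3 sensor height

focal_43, fstop_43 = 50, 4.0       # example settings on the 4/3 body
focal_ff = focal_43 * crop_factor  # 100mm for the same framing
fstop_ff = fstop_43 * crop_factor  # f/8 keeps the same DOF

# Both give the same physical aperture diameter in mm
print(focal_43 / fstop_43, focal_ff / fstop_ff)  # 12.5 12.5
```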
Also, as Jonas mentioned, this shows that any lens aberrations or softness are twice as visible on the 4/3 sensor. An edge blurred to 0.1mm at the sensor will cover ~22 pixels on the 4/3 sensor, but only ~11 on the 'full frame'. This is one of the reasons full frame sensors are nice.
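The blur figures follow directly from pixels-per-millimetre on each sensor (again my own sketch, using the example sensor heights and 3000-pixel height from above):

```python
pixels_high = 3000
blur_mm = 0.1  # width of the blurred edge at the sensor

px_43 = blur_mm * pixels_high / 13.5  # 4/3 sensor: ~222 px/mm
px_ff = blur_mm * pixels_high / 27.0  # 'full frame': ~111 px/mm

print(round(px_43), round(px_ff))  # 22 11 - twice as many pixels on 4/3
```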
Offsetting this is the fact that the 4/3 sensor only sees the central 50% (by diameter) of the image circle, where aberrations tend to be lower. As a result, using the same lens, a full frame camera tends to be better in the centre but may be worse at the edges than a 4/3 camera, which is more even across the frame.
OK, I'll stop now before everyone nods off
(Oh dear, too late...)
Graeme