Hi Matt,
I am pretty sure that the SL and the S (Typ 007) use different sensor designs. To be more precise, I would suggest that the S (Typ 007) uses the sensor designed by CMOSIS for Leica, built on STMicroelectronics process technology, while I would guess the SL uses technology from TowerJazz, which runs a foundry joint venture with Leica's technology partner Panasonic.
It is quite obvious if you check the image below:
[Attachment 139775: chart comparing the S, SL, and Q sensors]
The bump you see on the SL and the Q indicates that those sensors have dual conversion gain, patented by Aptina as DR-Pix.
So the S and the SL use very different pixel designs.
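To make the "bump" concrete, here is a minimal sketch (Python, with invented numbers, not Leica measurements) of how dual conversion gain steps the read noise, and hence the dynamic-range curve, at the switch ISO:

```python
import math

# Toy model of a dual-conversion-gain pixel (Aptina DR-Pix style).
# All numbers are invented for illustration; they are not Leica measurements.

FULL_WELL_E = 60000      # full-well capacity in electrons (low conversion gain)
READ_NOISE_LCG_E = 14.0  # read noise in electrons, low conversion gain
READ_NOISE_HCG_E = 4.0   # read noise in electrons, high conversion gain
DCG_SWITCH_ISO = 800     # ISO at which the pixel switches to high gain

def dynamic_range_stops(iso: int) -> float:
    """Engineering DR: usable signal (scaled down with ISO) over read noise."""
    signal_e = FULL_WELL_E * 100 / iso
    noise_e = READ_NOISE_HCG_E if iso >= DCG_SWITCH_ISO else READ_NOISE_LCG_E
    return math.log2(signal_e / noise_e)

for iso in (100, 200, 400, 800, 1600, 3200):
    print(f"ISO {iso:4d}: ~{dynamic_range_stops(iso):.1f} stops")
# DR falls one stop per ISO doubling, then jumps back up at ISO 800 when the
# high-conversion-gain mode kicks in -- that jump is the 'bump' in the chart.
```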
Colour does not depend that much on sensor design, though; it is much more about the colour profile, the IR and UV filtering, and the CFA design.
Most sensor makers use CFA filter compositions from Fujifilm. The IR and UV filter glass may come from Corning or Hoya, just as examples.
Best regards
Erik
If this is true, how do you explain Paratom's observation further above in this thread that a 24x36 mm crop of a Leica S image is still better than that of an SL?

Let's put it simply.
Larger sensors gather more light and thus gather more data. Period.
It's quite simple, really: the sensor configuration is not the same.

The crop on the S sensor received the same amount of light as the full sensor of the SL.
Physics (or math, don't care) tells me a 24x36 mm crop from a larger sensor receives the same amount of light (assuming the same lens and aperture) as a 24x36 mm sensor.
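A quick back-of-the-envelope check of that claim (a minimal sketch in Python; same lens, f-stop, and shutter speed assumed, so the sensor-plane illuminance is identical):

```python
# Total light collected is sensor-plane illuminance x area x time.
# With the same lens, f-stop, and shutter speed, illuminance is identical,
# so only the captured area matters. Dimensions in mm; units are relative.

def relative_light(width_mm: float, height_mm: float) -> float:
    return width_mm * height_mm  # proportional to total light at fixed illuminance

sl_full   = relative_light(36, 24)   # Leica SL, 24x36 mm full frame
s_full    = relative_light(45, 30)   # Leica S, 30x45 mm
s_cropped = relative_light(36, 24)   # 24x36 mm crop out of the S frame

print(s_full / sl_full)      # 1.5625 -> the full S frame sees ~56% more light
print(s_cropped / sl_full)   # 1.0    -> an equal-sized crop sees exactly the same light
```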
All things being equal, larger sensors will always have an advantage.
It is the same reason we build larger telescopes, arrays of radio telescopes, etc.
You cannot circumvent physics.

This is exactly the question I ask myself.
So why is the crop from the large sensor better (according to the experience posted above)?
I'm not trying to argue, I'm trying to understand the connection between theory and practical results.
Are the pixels that receive the same light the same? I think that pixel size is relevant.
Joel

But the pixel size of the S and the SL in the example from Paratom is also the same.
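For what it's worth, the nominal numbers back that up (a rough Python check from the published sensor sizes and pixel counts; exact active-area dimensions vary slightly):

```python
# Approximate pixel pitch = sensor width / horizontal pixel count.
# Published figures: Leica S (Typ 007) ~37.5MP (7500 x 5000) on 30x45 mm,
# Leica SL (Typ 601) 24MP (6000 x 4000) on 24x36 mm. Values are nominal.

def pitch_um(width_mm: float, pixels_across: int) -> float:
    return width_mm * 1000 / pixels_across

print(f"S (Typ 007): {pitch_um(45, 7500):.1f} um")  # ~6.0 um
print(f"SL:          {pitch_um(36, 6000):.1f} um")  # ~6.0 um
```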
People naturally want a simple single-factor explanation for why they like the images coming from one camera system more than another. It's very, very rarely that simple.
:toocool:
The Entire Rest of Thread said: People wanting to decide on a single-factor explanation
Actually, larger scopes are built to balance magnification, resolving power, and exposure. Chip size in scientific imaging is not an issue--smaller sensors can actually have the advantage, as magnifications do not need to be so high, whether with telescope or microscope. "Light gathering" is a term from amateur astronomy; the concept does not actually translate well to imaging.
Of course you can make an equality statement, if we are talking purely about what has more light-gathering ability. If you assume the same sensor but in two versions, one smaller and one larger, the larger will always have the advantage.

The problem with equivalency is that two systems can never be equal and not all the variables can be made equal--there is always at least one that diverges. The only way to make an equivalency argument is by making a normative judgement about which variables need to be fixed. And while physics is certainly an important part of the equation, there is an important cognitive aspect to photography which physics alone cannot explain. For example, something as simple as DoF is a perceptual characteristic of an image, not an optical property of a lens/camera. But perception, as indicated in this thread, is far more complex. That is why cameras are based on photometric units, not radiometric: the human visual system modifies what it sees. Color is purely a biological response to light, not a physical one--color does not exist outside our perception of it.
You can always make an equality statement? How do you have equal magnification and angle of view, or equal exposure and depth of field, or equal pixel resolution and pixel pitch? And that is the point: the hypothesis is just a normative expression, where someone selects the variables they believe are "important", or at least the ones that support their argument. (And the equivalency thing was born on the internet with people trying to "prove" why their particular choices were superior, which is why there are disagreements about which variables should be fixed.)
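As a concrete illustration that at least one variable always diverges (a sketch using the standard crop-factor scaling; the lens and numbers are just an example):

```python
# Standard 'equivalence' scaling between formats: matching angle of view and
# depth of field between a 30x45 mm frame and a 24x36 mm frame forces other
# variables (here the per-area exposure) to diverge.

CROP = 36 / 45  # linear factor from the S frame (45 mm wide) to full frame

focal_s, fstop_s = 70.0, 2.5          # e.g. a 70 mm f/2.5 lens on the Leica S
focal_eq = focal_s * CROP             # focal length for the same angle of view on the SL
fstop_eq = fstop_s * CROP             # f-number for the same DoF and same total light

print(f"SL equivalent: {focal_eq:.0f} mm at f/{fstop_eq:.1f}")
# -> 56 mm at f/2.0. Same framing and same DoF, but now the per-area exposure
#    differs, so ISO or shutter speed must diverge instead. You cannot pin
#    every variable at once.
```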
(Warning, the following is from an engineer in training.)
(Minor idea: IIRC the IQ1 50MP and IQ1 100MP share the same pixel pitch and are from the same generation of Sony CMOS. Both being Phase One backs, it is reasonable to assume both would have been given the highest-quality treatment. Perhaps a way to truly put this to bed would be a comparison of the SOOC files from these two backs? I can see a wonderfully eye-catching title: "Putting the sensor size argument to rest".)
That would be nice, but the 50mp and 100mp backs do not have the same pixel pitch and are not just different crops of the same sensor. If I may quote Doug's / DT's website:
https://www.dtcommercialphoto.com/xf-100mp-camera-system/#specs
50mp = 5.3 microns
100mp = 4.6 microns
Dave
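Those pitches are consistent with the published resolutions and chip dimensions (a quick Python check, assuming the commonly published 8280x6208 on 44x33 mm and 11608x8708 on 53.7x40.4 mm figures):

```python
# Pixel pitch from published sensor width and horizontal resolution.
# Figures are the nominal published specs for the two Phase One backs.

backs = {
    "IQ1 50MP  (44.0 x 33.0 mm)": (44.0, 8280),
    "IQ1 100MP (53.7 x 40.4 mm)": (53.7, 11608),
}
for name, (width_mm, px) in backs.items():
    print(f"{name}: {width_mm * 1000 / px:.1f} um")
# -> ~5.3 um and ~4.6 um, matching the DT spec page: two different pixel
#    designs, not crops of one and the same sensor.
```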
When reading the earlier posts in the thread, a few people made wonderful posts about how different parts of the complete pipeline will affect different parts of the final result... even if the above is true, as Doug and Matt pointed out, the scene, the post-processing style, and the rest of the pipeline will certainly make more of a difference.

Including the quality of light in the scene...
When you have to take two "identical" images and compare them side by side at 100% to see the differences, then just about everything else beyond sensor size alone is going to make a greater impact. And that is the rub: ultimately, photography is a perceptual problem limited by the human visual/cognitive systems.