The GetDPI Photography Forum


Why is (small) MF color/IQ better than FF?

MGrayson

Subscriber and Workshop Member
Hi Matt,

I am pretty sure that the SL and the S (Typ 007) use different sensor designs. To be more precise, I would suggest that the S (Typ 007) uses the sensor CMOSIS designed for Leica, built on STMicroelectronics technology, while I would guess the SL uses technology from TowerJazz, which runs a fab joint venture with Leica's technology partner Panasonic.

It is quite obvious if you check the image below:
[Attachment 139775]


The bump you see in the curves for the SL and the Q indicates that those sensors have 'dual conversion gain', patented by Aptina as DR-Pix.

So the S and the SL use very different pixel designs.
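(To illustrate what that bump means, here is a minimal sketch of a read-noise-vs-ISO curve with dual conversion gain. All numbers, including the switch ISO, are made up for illustration, not Leica's figures:)

```python
# Illustrative model of dual conversion gain (DR-Pix style) read noise vs ISO.
# All numbers are invented for illustration; real sensors differ.

def read_noise_e(iso, dual_gain=True):
    """Input-referred read noise in electrons at a given ISO setting."""
    downstream = 4.0   # ADC/amplifier noise (electrons), referred to input at base gain
    floor = 1.5        # pixel-level noise floor that analog gain cannot remove
    switch_iso = 800   # assumed ISO where the pixel switches to high conversion gain
    gain = iso / 100.0
    noise = floor + downstream / gain
    if dual_gain and iso >= switch_iso:
        noise -= 1.0   # the step down that shows up as the "bump" in the chart
    return max(noise, 0.5)

for iso in (100, 200, 400, 800, 1600, 3200):
    print(iso, round(read_noise_e(iso, True), 2), round(read_noise_e(iso, False), 2))
```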

Color does not depend so much on sensor design, though; it is much more about the color profile, IR and UV filtering, and the CFA design.

Most sensor makers use CFA filter compositions from Fujifilm. IR and UV filtering may come from Corning or Hoya, just as examples.

Best regards
Erik
Erik,

Thank you for that information. The only thing I was certain of was that the pixel size/density is the same.

Matt
 

Boinger

Active member
Let's put it simply.

Larger sensors gather more light and thus more data. Period.

You cannot circumvent physics, and we are already approaching the limits of what is possible with optics. So a larger sensor will always have an advantage in terms of gathering more data.

It will also give an advantage in apparent noise for a given print size.
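(A back-of-envelope sketch of that claim, using an arbitrary photon density: at equal exposure, total photons scale with sensor area and shot-noise SNR with its square root:)

```python
import math

# At equal exposure (same f-stop, shutter, scene), photons per mm^2 are equal,
# so total photons scale with area. The photon density is an arbitrary value.
photons_per_mm2 = 1.0e6

for name, w, h in (("FF 36x24", 36, 24), ("S 45x30", 45, 30)):
    total = photons_per_mm2 * w * h
    print(f"{name}: {total:.2e} photons, shot-noise SNR ~ {math.sqrt(total):,.0f}")

# The advantage is sqrt(area ratio): ~1.25x SNR, about a third of a stop.
print("MF advantage:", round(math.sqrt((45 * 30) / (36 * 24)), 2), "x")
```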

I do feel that MF files give better color because, for an equivalent field of view, they capture more color data over the surface area of any given feature.

You cannot get around physics. Whether these advantages matter for you is for you to decide.
 

pegelli

Well-known member
Let's put it simply.

Larger sensors gather more light and thus more data. Period.
If this is true how do you explain Paratom's observation further above in this thread that a 24x36 mm crop of a Leica S image is still better than that of an SL?
The crop on the S sensor received the same amount of light as the full sensor of the SL :confused:
 

Boinger

Active member
If this is true how do you explain Paratom's observation further above in this thread that a 24x36 mm crop of a Leica S image is still better than that of an SL?
The crop on the S sensor received the same amount of light as the full sensor of the SL :confused:
It's quite simple, really: the sensor configuration is not the same.

All things being equal, larger sensors will always have an advantage.

It is the same reason we build larger telescopes, arrays of radio telescopes, etc.

You cannot circumvent physics.
 

pegelli

Well-known member
It's quite simple, really: the sensor configuration is not the same.

All things being equal, larger sensors will always have an advantage.

It is the same reason we build larger telescopes, arrays of radio telescopes, etc.

You cannot circumvent physics.
Physics (or math, don't care) tells me a 24x36 mm crop from a larger sensor receives the same amount of light (assuming the same lens and aperture) as a 24x36 mm sensor.

So why is the crop from the large sensor better (according to the experience posted above)?

I'm not trying to argue, I'm trying to understand the connection between theory and practical results.
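(Putting numbers on that, a quick sketch with an arbitrary photon density; both cameras use roughly 6 micron pixels, so the per-pixel light is identical too:)

```python
# Sanity check of the crop argument: at the same exposure, a 24x36 mm crop of
# the larger S sensor collects the same light as the SL's whole 24x36 mm sensor.
pitch_mm = 0.006             # ~6 micron pixels on both the S (Typ 007) and the SL
photons_per_mm2 = 1.0e6      # same exposure -> same photon density (illustrative)

crop_photons = photons_per_mm2 * (36 * 24)    # 24x36 crop out of the 45x30 S sensor
sl_photons   = photons_per_mm2 * (36 * 24)    # the SL's entire sensor
per_pixel    = photons_per_mm2 * pitch_mm**2  # identical for both

print(crop_photons == sl_photons, per_pixel)  # True -> light alone can't explain it
```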
 

Paratom

Well-known member
Physics (or math, don't care) tells me a 24x36 mm crop from a larger sensor receives the same amount of light (assuming the same lens and aperture) as a 24x36 mm sensor.

So why is the crop from the large sensor better (according to the experience posted above)?

I'm not trying to argue, I'm trying to understand the connection between theory and practical results.
This is exactly the question I ask myself.
A possible answer (just my guess) would be that MF sensors are not only larger, they are also better in some other regards (dynamic range, for example).
Maybe (just maybe) the cost pressure on FF and smaller sensors is bigger than on medium format sensors, where you can charge the customer a higher price.
 

JoelM

Well-known member
Are the pixels that receive the same light the same? I think that pixel size is relevant.

Joel
 

pegelli

Well-known member
Are the pixels that receive the same light the same? I think that pixel size is relevant.

Joel
But the pixel sizes of the S and the SL in Paratom's example are also the same :confused:

So same pixel size, using the same area of the sensor and the results are different. :lecture:

Usually "size matters" but in this case I don't see which size yet
 

Godfrey

Well-known member
I've given up on trying to get that "ultimate" quality from anything. It's simply not important to me anymore.

I spend my time learning how to get what *I* want out of all the cameras at my disposal, and also learning what I can't get out of specific cameras. I've had a hankering for something like the X1D with 21mm lens for a long time, but I'm not wealthy enough to want to spend that money. So I make do with what I have, which all works fine anyway, and make my photographs, ultimate quality be damned.

Using the standard camera profiles and default settings in Lightroom, or Photos, or On1, or whatever, is always going to lead you down a particular path. If you want something more than that, you have to put the effort in to make it happen yourself. I do that on regular occasion, when I deem it important to what I want to make happen.

It just seems to me that if you're always reaching for the next golden ring, you miss a lot of other stuff in the effort. I want the other stuff, and I'll leave the golden ring to those for whom it is most important. :D

G

"No matter where you go, there you are."
 

dougpeterson

Workshop Member
People naturally want a simple single-factor explanation for why they like the images coming from one camera system more than another. It's very, very rarely that simple.
The Entire Rest of Thread said:
People wanting to decide on a single-factor explanation
:toocool:

After a decade of helping hundreds of photographers evaluate hundreds of combinations of cameras and lenses for their use case, I'm very confident...

Color performance is not explained by a single factor.

Color is complicated. Almost anyone who says they really understand it is a victim of the Dunning-Kruger effect. See post 6 for some of the factors that influence image quality in general, and color as a consequence.
 

Shashin

Well-known member
All things being equal, larger sensors will always have an advantage.

It is the same reason we build larger telescopes, arrays of radio telescopes, etc.

You cannot circumvent physics.
Actually, larger scopes are built to balance magnification, resolving power, and exposure. Chip size in scientific imaging is not an issue--smaller sensors can actually have the advantage, since magnifications do not need to be as high, whether with a telescope or a microscope. "Light gathering" is a term from amateur astronomy; the concept does not translate well to imaging.

The problem with equivalency is that two systems can never be equal and not all the variables can be made equal--there is always at least one that diverges. The only way to make an equivalency argument is by making a normative judgement about which variables need to be fixed. And while physics is certainly an important part of the equation, there is an important cognitive aspect to photography which physics alone cannot explain; for example, something as simple as DoF is a perceptual characteristic of an image, not an optical property of a lens/camera. But perception, as indicated in this thread, is far more complex. This is why cameras are based on photometric units, not radiometric ones: the human visual system modifies what it sees. Color is purely a biological response to light, not a physical one--color does not exist outside our perception of it.
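(A small illustration of the photometric-vs-radiometric point: the same radiant power in watts yields very different luminous flux in lumens depending on wavelength, because photometric units weight light by the eye's sensitivity. The V(lambda) samples below are approximate CIE photopic values:)

```python
# Photometry weights radiant power by the human eye's sensitivity V(lambda);
# the values below are approximate CIE 1931 photopic luminosity samples.
V = {450: 0.038, 555: 1.000, 650: 0.107}
watts = 1.0                        # one watt of monochromatic light

for wavelength_nm, v in V.items():
    lumens = 683.0 * v * watts     # 683 lm/W at 555 nm by definition
    print(f"{wavelength_nm} nm: {watts} W -> {lumens:.0f} lm")
```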
 

Boinger

Active member
Actually, larger scopes are built to balance magnification, resolving power, and exposure. Chip size in scientific imaging is not an issue--smaller sensors can actually have the advantage, since magnifications do not need to be as high, whether with a telescope or a microscope. "Light gathering" is a term from amateur astronomy; the concept does not translate well to imaging.

The problem with equivalency is that two systems can never be equal and not all the variables can be made equal--there is always at least one that diverges. The only way to make an equivalency argument is by making a normative judgement about which variables need to be fixed. And while physics is certainly an important part of the equation, there is an important cognitive aspect to photography which physics alone cannot explain; for example, something as simple as DoF is a perceptual characteristic of an image, not an optical property of a lens/camera. But perception, as indicated in this thread, is far more complex. This is why cameras are based on photometric units, not radiometric ones: the human visual system modifies what it sees. Color is purely a biological response to light, not a physical one--color does not exist outside our perception of it.
Of course you can make an equality statement, if we are talking purely about which has more light-gathering ability. If you assume the same sensor design in two versions, one smaller and one larger, the larger will always have the advantage.

Yes, I am aware there are numerous scientific instruments out there, and there are different applications for each technology. But the fact remains: if you have more surface area for any given instrument, you will gather more data. It is why we use multiple radio telescopes in different locations, to in fact increase the resolution.

While photography may be categorized in photometric units, sensors see in radiometric units: they capture light's wavelength and intensity. Color is a human biological response only in the sense that it is how we perceive the wavelengths of light. If you take away the human visual system, the specific wavelength and intensity of the light still exist. We construct sensors to emulate our perception, yes, but all of this can be quantified and measured.

I also said the differences may be minute, but scientifically they are present (in regards to sensors).
 

Shashin

Well-known member
Of course you can make an equality statement, if we are talking purely about which has more light-gathering ability. If you assume the same sensor design in two versions, one smaller and one larger, the larger will always have the advantage.
You can always make an equality statement? How do you have equal magnification and angle of view, or equal exposure and depth of field, or equal pixel resolution and pixel pitch? And that is the point: the hypothesis is just a normative expression, where someone selects the variables they believe are "important", or at least the ones that support their argument. (And the equivalency thing was born on the internet from people trying to "prove" why their particular choices were superior, which is why there are disagreements about which variables should be fixed.)
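(To make that concrete, a small sketch using the S/SL sensor widths as an example: fix angle of view and depth of field, and the per-area exposure, hence ISO, must diverge; fix exposure instead, and DoF diverges. Illustrative numbers only:)

```python
# Fixing some equivalence variables forces others to diverge.
ff_width, mf_width = 36.0, 45.0      # SL vs S sensor widths, mm
ratio = mf_width / ff_width          # 1.25 linear format ratio

ff_focal, ff_fstop = 50.0, 2.8
mf_focal = ff_focal * ratio          # ~63 mm matches the angle of view
mf_fstop = ff_fstop * ratio          # ~f/3.5 matches the depth of field

# At the same shutter speed, per-area exposure drops by (2.8/3.5)^2 ~ 0.64,
# so ISO is the variable that has to give to keep the same image brightness.
print(f"{mf_focal:.0f} mm at f/{mf_fstop:.1f}; "
      f"per-area exposure x{(ff_fstop / mf_fstop) ** 2:.2f}")
```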

And when you say two sensors of the same version, do you mean each having the same pixel resolution or the same pixel pitch? And if I make a print of each and no one can visually perceive a difference, does the larger sensor actually have an advantage? I had two 44" x 16-foot panorama images in the imaging center I ran, one from an APS-C camera and one from an MFD, and no one spotted the difference. They were simply viewed as two high-quality images. (Equivalency does not deal with output.)

And this is where the physics argument finally breaks down. Ultimately, it is whether the image is pleasing--and that criterion is about the cognitive perception of the totality of the image. The variables that determine that tend to be the subjective ones of contrast, sharpness, and color, rather than technical specifications like pixel resolution and DR.

I agree sensor size is a factor, but not because of "light gathering" (none of the technical texts I have on the photographic process ever refer to "light gathering", and it certainly is not a requirement of sensor size in scientific imaging). Sensor size, together with pixel pitch, limits pixel resolution, and it is affected by the frequency and amplitude of the image projected by the optics--both important variables. But since the photographer can choose how to take a photograph, and no one set of variables is actually better than another (there are no ideal shutter speeds or apertures in and of themselves), the amount of light gathered can be equal regardless of sensor size.

Finally, equivalency does not actually deal with how camera systems are used. Equivalency criteria are abstract. When I choose a system, it is weighed among its performance, functionality, use, and output, as well as the actual choices available (the problem with the "equal technology" criterion). This is why I have more than one format, and one reason I went for an X-Pro2 over a 35mm Leica M: the Fuji sensor performed better.

So here is the Zen Koan: if I make two images and no one can tell the difference, is there an advantage to either one?
 

pegelli

Well-known member
I don't think this thread is about "equivalency"; it's about the perceived colour-quality difference between a cropped image from an MF sensor and the same image taken with a full sensor of that size (actually the Leica S vs. the Leica SL). Paratom says this is the case, and I don't think there is a reason to doubt him.

Dougpeterson said it's a multitude of factors, all having to do with the quality of "every step" of producing the image (from lens to final image on the card).

Some say it's caused by sensor size, some say it's caused by pixel size, but there are examples and reasoning that shed some doubt on those "single (sensor) effect" theories.

I currently believe Doug's theory simply because I haven't found a convincing argument for any of the "single effect" theories postulated in this thread.

Disclaimer: I'm only following and reacting in this thread because I find it interesting, I have no MF system and cannot help with any tests or examples to come to a final conclusion.
 

FelixCLC

New member
(Warning, the following is from an engineer in training)

Reading the earlier posts in the thread, I saw a few people make wonderful posts about how different parts of the complete pipeline affect different parts of the final result.
I want to specifically address one portion of the SL vs S007 discussion. With the S007 being cropped to the same size, you may (depending on the guts of the unit) have more heat dissipation available per pixel/amplifier pair, leading to "cleaner" conversion. With how sensitive these sensors are, lowering the readout noise in this portion of the pipeline may gain up to around half a stop of signal in the lowest stop of DR.
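(A rough sanity check of that half-stop figure, with assumed numbers: engineering DR in stops is log2(full well / read noise), so a read-noise drop of ~1.4x is worth about half a stop:)

```python
import math

# Engineering dynamic range in stops: log2(full-well capacity / read noise).
full_well = 40000.0                  # electrons, assumed for illustration
for read_noise in (3.0, 2.1):        # e.g. a cooler readout chain: 3.0 -> 2.1 e-
    stops = math.log2(full_well / read_noise)
    print(f"{read_noise} e- read noise -> {stops:.2f} stops of DR")
# log2(3.0 / 2.1) ~ 0.51, i.e. about half a stop gained at the bottom.
```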

(back to photographer mode)

Even if the above is true, as Doug and Matt pointed out, the scene, the post-processing style, and the rest of the pipeline will certainly make more of a difference.

In any case, unless someone has a pair of cameras and sensors they would be willing to have torn apart so that we could play with these systems and try to see if a tangible equivalence statement can be made, I'm thinking we're back in the inferno.


(Minor idea: IIRC the IQ1 50MP and IQ1 100MP share the same pixel pitch and are from the same generation of Sony CMOS. Both being Phase One backs, it is reasonable to say that both would have been given the highest-quality treatment. Perhaps a way to truly put this to bed would be a comparison of the SOOC files from these two backs? I can see a wonderfully eye-catching title: "Putting the sensor size argument to rest".)
 

dchew

Well-known member
(Warning, the following is from an engineer in training)


(Minor idea: IIRC the IQ1 50MP and IQ1 100MP share the same pixel pitch and are from the same generation of Sony CMOS. Both being Phase One backs, it is reasonable to say that both would have been given the highest-quality treatment. Perhaps a way to truly put this to bed would be a comparison of the SOOC files from these two backs? I can see a wonderfully eye-catching title: "Putting the sensor size argument to rest".)
That would be nice, but the 50MP and 100MP backs do not have the same pixel pitch and are not just different crops of the same sensor. If I may quote Doug's / DT's website:
https://www.dtcommercialphoto.com/xf-100mp-camera-system/#specs

50MP = 5.3 microns
100MP = 4.6 microns
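(As a quick cross-check of those pitches; the sensor widths and pixel counts below are my assumption of the published specs, not taken from the DT page:)

```python
# (sensor width in mm, horizontal pixel count) -- assumed published specs
backs = {"IQ1 50MP": (43.9, 8280), "IQ1 100MP": (53.4, 11608)}
for name, (width_mm, px) in backs.items():
    # width / pixel count recovers the pitch: ~5.3 and ~4.6 microns
    print(name, round(width_mm / px * 1000, 2), "microns")
```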

Dave
 

FelixCLC

New member
My mistake, I was thinking about the full-frame 150MP and the 100MP 44x33 sensors! Those do share the same pixel pitch, but I don't know of any integrator that is using both of those sensors in their lineup.


Alternatively, it may be easier to do this sort of experiment with the IQ140 and IQ160, which both share the same 6 micron pixel pitch.
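(As a rough check of the first idea, assuming the commonly quoted 3.76 micron pitch for both the 151MP 54x40 and 102MP 44x33 chips; a sketch, not vendor data:)

```python
# If both sensors really share a 3.76 micron pitch (my assumption), a 44x33
# crop of the larger one is essentially the smaller one in pixel terms,
# which is what would make the proposed comparison clean.
pitch_um = 3.76
crop_mp = (44_000 / pitch_um) * (33_000 / pitch_um) / 1e6
print(f"44x33 crop of the 54x40 sensor: ~{crop_mp:.0f} MP")   # ~103 MP
```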




 

Shashin

Well-known member
Reading the earlier posts in the thread, I saw a few people make wonderful posts about how different parts of the complete pipeline affect different parts of the final result...Even if the above is true, as Doug and Matt pointed out, the scene, the post-processing style, and the rest of the pipeline will certainly make more of a difference.
Including the quality of light in the scene...

When you have to take two "identical" images and compare them side by side at 100% to see the differences, then just about everything else beyond sensor size alone is going to make a greater impact. And that is the rub: ultimately, photography is a perceptual problem limited by the human visual/cognitive systems.
 

FelixCLC

New member
Shashin, I'm absolutely in agreement with you.

My reasoning behind wanting to do this is simply to have an example, as free of caveats as possible, to point to when the semi-eternal pixel size vs. sensor size argument resurfaces. Not to mention that it would be the most concrete way of answering the question posed at the beginning of the thread! In this particular case we may actually be able to get as close to an exact test as possible: same generation, same pixel size (presumably), same CFAs, and other internal parts (ADC, low-level amplifiers, etc.).


 