Bearing in mind that the Quattro sensor is still based on a three-layer architecture, I wonder why the green and red layers don't capture luminance information. Is it cancelled out somehow to avoid file-size bloat?
"While the top layer captures both luminance and color information, the middle and bottom layers with their larger surface area capture color information only." (Source:
Technology | dp Quattro | Cameras | SIGMA GLOBAL VISION)
Well, Sigma really does cancel that luminance detail out, to slightly reduce high-ISO noise (about 1 stop) and processing time. From the IR interview:
DE: I see. Because each layer has all the colors, you can correlate. So you can look at the top layer and say "Okay, we know we've got some red here," then you look at the red layer to see how much is there, and you can sort of take that out.
SR: Precisely -- it creates the correlation automatically, and therefore, you can remove some redundant information that you didn't actually need. Part of the advantage that you get from that is that you are then able to increase the signal.
Sigma Q&A Part II: Does Foveon’s Quattro sensor really out-resolve conventional 36-megapixel chips?
That's where the Quattro's failure lies (in my eyes): omitting the luminance detail of the red and green color channels. If the Quattro sensor architecture is indeed as presented, there is a chance for Sigma to adjust the image processing to collect full RGB information, even though the high-ISO noise advantage and processing speed would be sacrificed. If that were implemented, I would consider adding a Quattro-based camera to my arsenal.
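For anyone trying to picture what the Quattro's 1:1:4 layout implies, it is loosely analogous to chroma subsampling in video: full-resolution detail comes only from the top layer, and the lower-resolution middle and bottom layers are upsampled and re-sharpened with that detail. Here is a minimal numpy sketch of that idea; the layer names, nearest-neighbour upsampling, and detail-reinjection step are my own assumptions for illustration, not Sigma's actual pipeline:

```python
import numpy as np

def quattro_style_reconstruct(top, mid, bot):
    """Toy reconstruction in the spirit of a 1:1:4 layered sensor.

    top: full-resolution layer (luminance + top-layer color), shape (H, W)
    mid, bot: quarter-resolution color layers, shape (H/2, W/2)

    NOTE: illustrative assumption only -- not Sigma's real processing.
    """
    h, w = top.shape
    # Upsample the low-res color layers to full resolution (nearest neighbour)
    mid_up = np.repeat(np.repeat(mid, 2, axis=0), 2, axis=1)
    bot_up = np.repeat(np.repeat(bot, 2, axis=0), 2, axis=1)
    # Extract the high-frequency detail that only the top layer recorded:
    # subtract the top layer's 2x2 block means from the top layer itself
    top_coarse = top.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    detail = top - np.repeat(np.repeat(top_coarse, 2, axis=0), 2, axis=1)
    # Reinject that full-resolution detail into each upsampled color layer
    return np.stack([top, mid_up + detail, bot_up + detail], axis=0)

# Tiny demo: 4x4 top layer, 2x2 middle/bottom layers
top = np.arange(16, dtype=float).reshape(4, 4)
mid = np.full((2, 2), 10.0)
bot = np.full((2, 2), 20.0)
rgbish = quattro_style_reconstruct(top, mid, bot)
print(rgbish.shape)  # (3, 4, 4)
```

The key property of this scheme is visible in the demo: the reconstructed middle and bottom layers keep their original 2x2 block averages (the low-resolution color signal) while gaining the top layer's per-pixel detail, which is exactly why per-pixel luminance in those layers is redundant and can be discarded.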