The GetDPI Photography Forum


Leica SL adds Multishot capability 187MP

MGrayson

Subscriber and Workshop Member
Sorry guys, one last experiment. How does the pixel shift output differ from a real half-the-pixel-size image?



The faint blue lines are the pixel-shift response. So you don't quite get the same sharpness you'd get with twice as many pixels in each linear dimension. Frankly, it's better than I expected without further processing.
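
If anyone wants to poke at this kind of toy model themselves, here's a minimal 1-D sketch (in Python rather than Mathematica; the edge position, well width, and choice of cubic routine are illustrative assumptions, not the exact setup behind the plots above):

```python
import numpy as np
from scipy.interpolate import interp1d

# "Reality": a step edge at x = 2.25 (0 to the left, 1 to the right).
edge = 2.25
scene = lambda x: (np.asarray(x) >= edge).astype(float)

def well_response(centers, width, samples=201):
    """Average the scene over a box 'well' of the given width at each center."""
    offsets = np.linspace(-0.5, 0.5, samples) * width
    return np.array([scene(c + offsets).mean() for c in centers])

width = 1.0
centers = np.arange(0.5, 5.0, width)                  # five full-size wells

unshifted = well_response(centers, width)             # single exposure
shifted = well_response(centers + width / 2, width)   # second exposure, half-pixel shift

# Interleave the two exposures: samples every half pixel, but each one is
# still an average over a FULL-width well -- the pixel-shift output.
ps_x = np.concatenate([centers, centers + width / 2])
order = np.argsort(ps_x)
ps_x, ps_y = ps_x[order], np.concatenate([unshifted, shifted])[order]

# A hypothetical sensor with pixels half the size, for comparison.
half_x = np.arange(0.25, 5.0, width / 2)
half_y = well_response(half_x, width / 2)

# Cubic upsampling of the single unshifted exposure onto a half-pixel grid.
cubic = interp1d(centers, unshifted, kind="cubic", fill_value="extrapolate")(half_x)

print("pixel shift   :", np.round(ps_y, 3))
print("half-size px  :", np.round(half_y, 3))
print("cubic upsample:", np.round(cubic, 3))
```

The interleaved pixel-shift samples land on a half-pixel grid but each is still a full-width average, which is where the slight softening relative to true half-size pixels comes from.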

Matt
 

gerald.d

Well-known member
Halfway between theory and pictures are simplified models. Here's a simple light/dark boundary with and without pixel shift.

First, without:



On top is "reality". Then a depiction of the pixel wells, and finally, the light gathered by those wells.
In the graph above where you show the light gathered by the wells (this is prior to shifting remember), why does a pixel that has no light hitting it record light hitting it? You are showing 4.5 pixels here, yes? And the step change in the light value occurs 1/4 of the way into the third pixel?

Why would the output from the pixels shown not be 0, 0, 0.75, 1, 1..? This is of course what your data represents if it were presented as a point plot, or a stepped line, but it is not. It is presented as a line graph - the inference being that the value of the amount of light recorded by each pixel, which you show in your diagram to have dimensionality, is the integral under the line.

Following images removed for ease of reading.

Now we add shifted pixels.

So right out of the box, we see the pixel-shifted plot clearly gives a better sense of the location of the dark/light boundary.

But wait! What happens if we upsample the unshifted image? Here is upsampling using Mathematica's cubic interpolation of the unshifted output (in blue) against the pixel-shifted output (in orange). The blue curve doesn't require an electronic shutter or a motionless subject. Also bear in mind that we're viewing this at about 100x magnification. Our upsampled pixels here are 1/2 inch wide!


Actually, the difference is visible at only 100%, so I'll spare you the computer-generated images. But then, I'm using the lowest-tech upsampling.

Matt
Why are you only working with one-dimensional pixels?
What are your assumptions about the colour of the light?
What are your assumptions about the way the raw data is captured by the pixels behind the Bayer array?
What are your assumptions about the way that raw data from the 8 captured images is interpolated and interpreted?

Now I have no idea how the S1R/SL2 take the 8 captured RAW files and create a single 4x resolution RAW file output, but the fact of the matter is that they do have the original Bayer-arrayed data from each of the 8 files to work with, and I would assume that they actually take advantage of that data when creating the output RAW. How is Mathematica going to do a better job if it is working solely with a single file that has already had Bayer interpolation algorithms applied (and that presumably would be rather tough to reverse engineer)?

I'd really like to try to understand this better, so perhaps that would best be done by using a diagram. I will work on the assumption that you show the light step at 1/4 of the way across your pixel wells to perhaps indicate that the light step occurs at the half-pixel position both horizontally and vertically.

To simplify things in the first instance, let's assume that there is also a step-change back to 0 - we can deal with the scenario where there is a step change from 0 to 1, and it stays at 1, later, if need be.

Here's the diagram showing how the sensor is shifted to take the 8 images (I'm pretty sure this is correct, but it is an assumption on my part: for the first set of four captures the sensor shifts one pixel horizontally, one pixel vertically, and one pixel diagonally; then it shifts half a pixel diagonally, and repeats the pattern).



"X" indicates the quarter of the upper left pixel (red on the Bayer array) that has a light value of 1 (i.e. ignoring interpolation the value for the light hitting this pixel would be 0.25), all other parts of the sensor can be assumed to have a light value of 0.

(It's simpler to draw this with a static sensor and the pixel in question moving around, but of course what is actually going on is that "X" remains in the same position as the sensor shifts underneath it.)
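
Spelled out as a list, the assumed shift sequence would look like this (a sketch of the pattern described above - an assumption on my part, not a confirmed S1R/SL2 spec):

```python
# Four whole-pixel positions, then the same four again after a half-pixel
# diagonal shift: the assumed 8-capture pattern (not a confirmed spec).
base = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
offsets = base + [(dx + 0.5, dy + 0.5) for (dx, dy) in base]

for i, (dx, dy) in enumerate(offsets, start=1):
    print(f"capture {i}: sensor offset ({dx:.1f}, {dy:.1f}) px")
```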

My first (genuine) question is this - based on the calculations behind the graphs you share, what are the RGB light intensity values for the four pixels that result when the 8 shifted images are combined into a 4x resolution file (separately calculated for incident red, green, blue and white light; both for actual shifting - taking into account Bayer interpolation algorithms and the algorithms used by the S1R/SL2 when creating the 4x resolution file, and for Mathematica upsampling of just the top left plot)?

My second (facetious) question is - if Mathematica's upsampling is so good, what is the cut-off resolution where we no longer need to worry about capturing actual resolution on the sensor?

I do of course recognise that possibly the more realistic model presented and questions raised are too complex to answer, because there are too many unknowns. But if that is the case, I would perhaps suggest that if we simplify a scenario too far, the results of that over-simplification may well turn out to have little to no application in a real world scenario.

Kind regards,


Gerald.
 

MGrayson

Subscriber and Workshop Member
In the graph above where you show the light gathered by the wells (this is prior to shifting remember), why does a pixel that has no light hitting it record light hitting it? You are showing 4.5 pixels here, yes? And the step change in the light value occurs 1/4 of the way into the third pixel?

Why would the output from the pixels shown not be 0, 0, 0.75, 1, 1..? This is of course what your data represents if it were presented as a point plot, or a stepped line, but it is not. It is presented as a line graph - the inference being that the value of the amount of light recorded by each pixel, which you show in your diagram to have dimensionality, is the integral under the line.

It's harder to compare bar charts or point plots than it is to compare line plots. No inference intended.

Following images removed for ease of reading.



Why are you only working with one-dimensional pixels?
It's a simplified model.
What are your assumptions about the colour of the light?
Monochromatic.
What are your assumptions about the way the raw data is captured by the pixels behind the Bayer array?
No Bayer array.
What are your assumptions about the way that raw data from the 8 captured images is interpolated and interpreted?
I don't even know how unshifted Bayer demosaicing works. The standard convolution has been surpassed, and I do not presume to speak for current engineering. Hence the simplified model.

Now I have no idea how the S1R/SL2 take the 8 captured RAW files and create a single 4x resolution RAW file output, but the fact of the matter is that they do have the original Bayer-arrayed data from each of the 8 files to work with, and I would assume that they actually take advantage of that data when creating the output RAW. How is Mathematica going to do a better job if it is working solely with a single file that has already had Bayer interpolation algorithms applied (and that presumably would be rather tough to reverse engineer)?
It isn't.

I'd really like to try to understand this better, so perhaps that would best be done by using a diagram. I will work on the assumption that you show the light step at 1/4 of the way across your pixel wells to perhaps indicate that the light step occurs at the half-pixel position both horizontally and vertically.

What you show below is great for color fidelity (I say that somewhere many posts ago). What I was investigating was the claim - which seems obvious, but is theoretically false - that pixel shift increases resolution.

To simplify things in the first instance, let's assume that there is also a step-change back to 0 - we can deal with the scenario where there is a step change from 0 to 1, and it stays at 1, later, if need be.

Here's the diagram showing how the sensor is shifted to take the 8 images (I'm pretty sure this is correct, but it is an assumption on my part: for the first set of four captures the sensor shifts one pixel horizontally, one pixel vertically, and one pixel diagonally; then it shifts half a pixel diagonally, and repeats the pattern).



"X" indicates the quarter of the upper left pixel (red on the Bayer array) that has a light value of 1 (i.e. ignoring interpolation the value for the light hitting this pixel would be 0.25), all other parts of the sensor can be assumed to have a light value of 0.

(It's simpler to draw this with a static sensor and the pixel in question moving around, but of course what is actually going on is that "X" remains in the same position as the sensor shifts underneath it.)

Covering each color is great. I don't know how the other shifts that overlap several different color pixels are used in the image reconstruction. It is undoubtedly important to increasing resolution rather than "merely" improving color.

My first (genuine) question is this - based on the calculations behind the graphs you share, what are the RGB light intensity values for the four pixels that result when the 8 shifted images are combined into a 4x resolution file (separately calculated for incident red, green, blue and white light; both for actual shifting - taking into account Bayer interpolation algorithms and the algorithms used by the S1R/SL2 when creating the 4x resolution file, and for Mathematica upsampling of just the top left plot)?

Again, I am not dealing with color. Only linear resolution. I wish I knew more about the question you ask, as it is the REAL question. I doubt we'll find anyone who actually knows the answer who is allowed to tell us. There are probably papers out there giving some algorithm. It may be very complex. It may be simple due to some fortunate trick. Maybe it's as simple as inverting the matrix of pixel values from different colored images. I have the bad habit of solving these problems from first principles rather than looking at the literature. Sometimes I find new things that way. Sometimes I miss things everyone else knows. On balance, it has served me well.

My second (facetious) question is - if Mathematica's upsampling is so good, what is the cut-off resolution where we no longer need to worry about capturing actual resolution on the sensor?

I know too much about bad interpolation techniques and the pitfalls of relying on them. Of course, Finance and Photography are different, and what is good for one may be bad for the other. This is another reason I stick to simplified models and simple interpolation to gain a better feel for what is possible.

I do of course recognise that possibly the more realistic model presented and questions raised are too complex to answer, because there are too many unknowns. But if that is the case, I would perhaps suggest that if we simplify a scenario too far, the results of that over-simplification may well turn out to have little to no application in a real world scenario.

On the contrary, I believe that simplified models get you 90% of the way there. It's getting that extra 10% that takes years of hard work by talented engineers. We're drifting into theology here, and I don't insist that anyone share my view. I did the above for my own curiosity. If someone else finds it interesting, well and good.

Kind regards,


Gerald.
Gerald,

I hope I have answered some of your questions above.

Best,

Matt
 

Shashin

Well-known member
I do of course recognize that possibly the more realistic model presented and questions raised are too complex to answer, because there are too many unknowns. But if that is the case, I would perhaps suggest that if we simplify a scenario too far, the results of that over-simplification may well turn out to have little to no application in a real world scenario.
Generally speaking, models are a case of diminishing returns. As models become more complex, the increase in accuracy falls off. Depending on the question, simple models may be sufficient. You only need Newton at the pool table, but you need Einstein to get you to Mars. But those who are good at pool may well have figured out Newton intuitively. And the simpler model is never really invalidated; it just needs context.

The great thing with photography is that seeing is believing. You could actually test this yourself, from file to print, and see what works. The advantage of knowing some of the underlying theory (which means a hypothesis supported by evidence) is that you can better evaluate the issue under consideration. For example, in the case of the OP, evaluating at 100% monitor view is essentially meaningless. Most "real world testing" I see is not very useful or is simply biased, and has very little value.

But I also understand the people who distrust theory in favor of "real world testing" when the theory does not support their experience. The problem with theory, particularly theory presented in fora such as this, is that the significance is never addressed. For example, diffraction is more perceptible with high-resolution sensors, especially in comparisons of two images at 100% monitor view. But what does that mean for an actual viewer? I routinely use f/16 and print to 40" or larger, and diffraction does not impact the perception of detail or sharpness. It is certainly there, but in terms of the image that is being perceived, it is insignificant.

I think Matt's analysis is very interesting. Like you and Matt, I understand the model may have limits. But it raises a good question, which you articulated: what is the limit to uprezzing? It is complicated in a number of ways, as you pointed out. There are other variables as well: contrast is a more important factor than resolution in the viewer perceiving detail - a lower resolution image with greater contrast will appear more detailed than a higher resolution one with less contrast (contrast and resolution tend to trade off against each other, where an increase in one shows a decrease in the other). So uprezzing need not improve detail to still make a better image; it may simply hide digital artifacts.

There is a long list of things in photography that seem to negatively impact image quality, from optical aberrations to limits in color reproduction, yet we see great images all the time and from every point in history. Photography is ultimately a visual illusion. If it looks good, it actually does look good. Increases in technical qualities are only a small part of that equation.
 

gerald.d

Well-known member
Gerald,

I hope I have answered some of your questions above.

Best,

Matt
Thanks Matt - very helpful, but honestly I do struggle somewhat with the point of modeling this so simplistically, because I don't accept that you get 90% of the way there when discarding so many essential elements.

Kind regards,


Gerald.
 

gerald.d

Well-known member
Generally speaking, models are a case of diminishing returns. As models become more complex, the increase in accuracy falls off. Depending on the question, simple models may be sufficient. You only need Newton at the pool table, but you need Einstein to get you to Mars. But those who are good at pool may well have figured out Newton intuitively. And the simpler model is never really invalidated; it just needs context.
I have a little experience in this field - my degree was in Physics (although I will readily admit to having forgotten 95% of what I was taught), and in the real world I have spent a decade modeling and predicting retail sales figures, and another decade doing financial modeling for businesses ranging in size from $10M to >$100B turnover.

Funnily enough, it was when modeling one of the financial processes of a $50B turnover company for forecast purposes that I discovered a massive error in the model that had been implemented to reallocate close to $20B in actual costs. The reason was that the people who wrote the "simple" model used to allocate actual costs had chosen not to include a level of complexity in their model that had been specified by the end users, because they believed it wouldn't be material. Well, it was material - to the tune of around $3B in costs being allocated incorrectly to the receiving department heads.

So no, as models become more complex, the increase in accuracy doesn't necessarily fall off. The reason many models fail has nothing to do with their complexity per se; it's that they don't model the right complexity correctly. There are of course - in some instances, not all - levels of complexity that are impossible to model, but we don't have to worry about those examples for the purposes of this discussion.

To take up your pool analogy, you could take the world's most accurate computer simulation of pool, and a correctly programmed computer playing that simulation will always come out better than a human being over time, simply because the computer is capable of modeling the reality of the simulation (and I'll get onto the subject of realities of simulations later) perfectly, whereas the human would never be able to.

And no, you don't need general relativity to play or even model a game of pool - but that is because you can prove that building it into your model won't make the slightest bit of difference to the potential for your model to accurately represent reality.

The simple model presented isn't being invalidated as a model - it models "its reality" perfectly - but it does not get anywhere close to correctly modeling the process by which the files under discussion are created, and therefore I would suggest it should be discarded as irrelevant. All the model actually shows (and apologies if I am recalling the terminology incorrectly here) is that a first-order linear interpolation quite closely matches a third-order cubic interpolation for a step change.

To model something well, you need to invest a lot of time and effort into understanding exactly what aspects of the reality in question should be modeled, and with how much effort/precision.

The great thing with photography is that seeing is believing. You could actually test this yourself, from file to print, and see what works. The advantage of knowing some of the underlying theory (which means a hypothesis supported by evidence) is that you can better evaluate the issue under consideration. For example, in the case of the OP, evaluating at 100% monitor view is essentially meaningless. Most "real world testing" I see is not very useful or is simply biased, and has very little value.
That's because most "real world" testing that you look at is irrelevant to your "real world" requirements, and your proposed test is not a valid one for the subject at hand.

But I also understand the people who distrust theory in favor of "real world testing" when the theory does not support their experience. The problem with theory, particularly theory presented in fora such as this, is that the significance is never addressed. For example, diffraction is more perceptible with high-resolution sensors, especially in comparisons of two images at 100% monitor view. But what does that mean for an actual viewer? I routinely use f/16 and print to 40" or larger, and diffraction does not impact the perception of detail or sharpness. It is certainly there, but in terms of the image that is being perceived, it is insignificant.
Then let's address that significance directly.

I would venture to suggest that 99.99% of photographers worldwide do not print. Not only that, I would also suggest that 99.999% of viewers don't look at 40" prints, and that of the 0.001% of viewers who do, those 40" prints represent less than 0.01% of the images they do look at.

That is not to denigrate your art in any way, shape or form. What I am saying is that if, for example, someone were unable to distinguish between a 50MP native resolution file; a 200MP uprezzed example of that file; a 200MP file from a native 200MP sensor; and a 200MP file from a multi-shot 50MP sensor, on a 40 inch print, then all that goes to demonstrate is that viewing those files on a 40 inch print is not the correct way to assess the merits, or otherwise, of those files.

Your 40 inch printed "models" of the files are not of sufficient precision to represent the "reality" inherent in those files.

I completely accept that for you, diffraction at f/16 is irrelevant. For me, I can see the diffraction caused by going from f/5.6 to f/8.0 as plain as the difference between night and day, and in my "reality", such a difference is significant. I also readily accept that for 99.99% of the people who see my images, they wouldn't notice the difference themselves. But I don't shoot for them. The difference is there, and it can be readily demonstrated.

I think Matt's analysis is very interesting. Like you and Matt, I understand the model may have limits. But it raises a good question, which you articulated: what is the limit to uprezzing? It is complicated in a number of ways, as you pointed out. There are other variables as well: contrast is a more important factor than resolution in the viewer perceiving detail - a lower resolution image with greater contrast will appear more detailed than a higher resolution one with less contrast (contrast and resolution tend to trade off against each other, where an increase in one shows a decrease in the other). So uprezzing need not improve detail to still make a better image; it may simply hide digital artifacts.
It is very interesting, and I am thankful for him sharing it because it encouraged me to think about this in some depth - something that I've not really done before. Clearly there is quite some way to go before any of us can genuinely understand what is going on in the creation of these files, but this is very much a technical discussion that has little to no relevance to how something looks on a large print.

There is a long list of things in photography that seem to negatively impact image quality, from optical aberrations to limits in color reproduction, yet we see great images all the time and from every point in history. Photography is ultimately a visual illusion. If it looks good, it actually does look good. Increases in technical qualities are only a small part of that equation.
Indeed.

Kind regards,


Gerald.
 

MGrayson

Subscriber and Workshop Member
I just did a bit of research into modern Bayer demosaicing algorithms. They are all a form of uprezzing each channel. The trick is making the particular uprezzing algorithms match up, to prevent Christmas-light noise and colored halos around edges. This is hard, and a variety of more and less sophisticated algorithms try to enforce consistency of the interpolations. Without pixel shift, the different color channels are interpolated between different sets of points - the particular color pixels in the array. For example, Red values are known at the corners of a square made by four Red pixels. Inside that square, Red is an interpolated function. A Green pixel inside that square knows its Green value perfectly, but has to guess at the Red value by using this Red function. So every pixel knows one value perfectly and guesses at the other two.
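
As a concrete (and deliberately naive) illustration of that per-channel uprezzing, here is a bare-bones bilinear demosaic sketch in Python, assuming an RGGB layout; real converters are far more sophisticated than this:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Naive per-channel bilinear demosaic of a single RGGB Bayer mosaic.

    Each output pixel keeps the one channel it actually measured and
    interpolates the other two from neighboring pixels of that color.
    """
    h, w = raw.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True        # R sites
    masks[0::2, 1::2, 1] = True        # G sites on red rows
    masks[1::2, 0::2, 1] = True        # G sites on blue rows
    masks[1::2, 1::2, 2] = True        # B sites

    # Bilinear interpolation of each sparse channel, written as a convolution.
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0

    rgb = np.zeros((h, w, 3))
    for c, k in zip(range(3), (k_rb, k_g, k_rb)):
        rgb[..., c] = convolve(np.where(masks[..., c], raw, 0.0), k, mode="mirror")
    return rgb

# Sanity check: a flat grey mosaic should come back flat grey in all channels.
print(bilinear_demosaic(np.full((8, 8), 0.5))[2:4, 2:4])
```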

That much is available online. Here are my conclusions:

WITH pixel shift, accurate values of all three channels are known at the same nodes (each pixel). This gives you the same resolution with no interpolations, and so "perfect" color. Why uprez from there? The measurements that overlap several filter colors provide linear equations that the colors must satisfy at points BETWEEN the pixels (If my shifted pixel covers quarters of a Red, a Blue, and two Greens, then I know that (R+B+2G)/4 = Measured Value.) So uprezzing using those between-pixel points is more accurate than simply uprezzing Foveon-like values at each pixel. In other words, it's worth doing, and requires less of the magic needed to make good Bayer conversions. Less, but not none.
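
As a toy version of that between-pixel constraint (the quarter coverages are the idealized ones described above, and the scene values are made up):

```python
def shifted_well_measurement(r, g1, g2, b):
    """Idealized reading of a half-shifted well covering one quarter each of
    an R site, two G sites, and a B site: one linear equation in four unknowns."""
    return (r + g1 + g2 + b) / 4.0

# With made-up values R=0.8, G=0.5 and 0.6, B=0.2, the well reads 0.525,
# i.e. a constraint of the form (R + G1 + G2 + B)/4 = measured value.
print(shifted_well_measurement(0.8, 0.5, 0.6, 0.2))   # 0.525
```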

Disclaimer: Actual pixel-shift conversion algorithms are available, but require downloading articles. And I like to guess. But I'm coming around to the "realness" of pixel-shifted high resolution. Mind you, the motion problem is still a killer.

--Matt

PS. There's another wrinkle if you want to get further into the weeds. The measured value at a pixel is an average over the pixel. So when a shifted pixel covers parts of a Red and a Blue, it's not getting the same Red value that makes up the entire Red pixel measurement. I wonder if the real algorithms have to reconstruct the color functions inside each pixel. I suppose they do, as the final resolution is four small pixels per original large one. In other words, each measurement is a linear constraint, e.g., R = (R11 + R12 + R21 + R22)/4. Sounds fun, actually...
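
Here's a toy, single-channel, 1-D version of that constraint picture - each original pixel split into two unknown subpixels, with the unshifted and half-shifted wells each contributing one averaging equation. The least-squares solve is just my guess at the flavor of the thing, not any manufacturer's algorithm:

```python
import numpy as np

# 4 original pixels -> 8 unknown half-size subpixels.
n_pix, n_sub = 4, 8
rows = []

# Unshifted wells: pixel i averages subpixels 2i and 2i+1.
for i in range(n_pix):
    row = np.zeros(n_sub)
    row[2 * i: 2 * i + 2] = 0.5
    rows.append(row)

# Half-pixel-shifted wells: they average subpixels 2i+1 and 2i+2.
for i in range(n_pix - 1):
    row = np.zeros(n_sub)
    row[2 * i + 1: 2 * i + 3] = 0.5
    rows.append(row)

A = np.array(rows)
truth = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)   # a step edge
m = A @ truth                                              # the measurements

recon, *_ = np.linalg.lstsq(A, m, rcond=None)
print("rank:", np.linalg.matrix_rank(A), "of", n_sub, "unknowns")
print("least-squares reconstruction:", np.round(recon, 2))
```

Even in this toy case the constraint matrix has rank 7 for 8 unknowns, so one degree of freedom is still unconstrained - the "less magic, but not none" part.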
 

glenerrolrd

Workshop Member
When the S1R came out, at the beginning of 2020, Diglloyd did extensive testing of the pixel shift technology. I don't know if you agree, but I can see the difference between the native 47MP conversions and those utilizing pixel shift. The examples he provided are absolutely clear that pixel shift improves resolution, color fidelity, tone separation and noise. I didn't need a 30x40 print to see the results.

The issue he confirmed was that the algorithms used by the manufacturers were unique. Pixel shift as implemented by Sony provided far different results than the Panasonic's. The Panasonic used AI-based algorithms to address small movement between captures. Even with the best techniques, DL was unable to achieve satisfactory results with the Sony approach. The S1R was found to be outstanding and highly recommended, with the caution that it has a heavy tripod requirement.

While Leica may have benefitted from the Panasonic partnership... the way they implemented multishot appears to be different? Their composite file is 187MP and the Panasonic's is 192MP. I don't know this for a fact... and I have no insight into whether Panasonic allowed Leica access to their approach.

I do know that DL swears, after 1000s of tests, that a larger file (in MP) downsized to match a smaller file (think 47MP down to 24MP) is almost always superior... it's not about looking at 100% views of the pixels (that's a test of pixel quality, which is of course also important).

What we can't determine by looking at the background theory is the effectiveness of the AI applications in the firmware. Take a brief look at the Topaz AI plug-ins for an example of AI application.
 

MGrayson

Subscriber and Workshop Member
I'm a fan of the Topaz plugins despite a general dislike of AI, or, more properly Machine Learning. It is effective, but for inscrutable reasons. Nevertheless, Gigapixel AI does an astonishingly good job with pixellated color art (my daughter's anime frame grab).

It's interesting that different implementations give different results under different circumstances. That means the technology has a long path of improvement ahead of it. We all win.

As for downsampling, much as I hate to agree with DL as a matter of principle, I approve wholeheartedly. I have never printed much over a meter on the long axis, and while I can tell the difference between 24 and 37.5 MP at that size, I can't see improvement thereafter. I have an 8,000 x 25,000 3 shot vertical pano of a tall, thin building in NYC. Printing it 40" high is over 600 dpi!
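(Quick check: 25,000 px on a 40" edge works out to 25,000 / 40 = 625 ppi.)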

Matt

PS. I just heard something of how Leica manages movement, and it seems more robust than any other method I've heard of. On the other hand, it wants a REALLY stable tripod. A Gitzo 2 (old carbon model, FWIW) needs 10 seconds to stabilize. Forecast: Dual layer IBIS. One to keep the sensor stable and a second one to perform the shifts. Probably (years) too late to patent the idea.
 

iiiNelson

Well-known member
When the S1R came out, at the beginning of 2020, Diglloyd did extensive testing of the pixel shift technology. I don't know if you agree, but I can see the difference between the native 47MP conversions and those utilizing pixel shift. The examples he provided are absolutely clear that pixel shift improves resolution, color fidelity, tone separation and noise. I didn't need a 30x40 print to see the results.

The issue he confirmed was that the algorithms used by the manufacturers were unique. Pixel shift as implemented by Sony provided far different results than the Panasonic's. The Panasonic used AI-based algorithms to address small movement between captures. Even with the best techniques, DL was unable to achieve satisfactory results with the Sony approach. The S1R was found to be outstanding and highly recommended, with the caution that it has a heavy tripod requirement.

While Leica may have benefitted from the Panasonic partnership... the way they implemented multishot appears to be different? Their composite file is 187MP and the Panasonic's is 192MP. I don't know this for a fact... and I have no insight into whether Panasonic allowed Leica access to their approach.

I do know that DL swears, after 1000s of tests, that a larger file (in MP) downsized to match a smaller file (think 47MP down to 24MP) is almost always superior... it's not about looking at 100% views of the pixels (that's a test of pixel quality, which is of course also important).

What we can't determine by looking at the background theory is the effectiveness of the AI applications in the firmware. Take a brief look at the Topaz AI plug-ins for an example of AI application.
They both produce the exact same 187 megapixel file.

As far as Sony goes, they produce two different types of stitched files. One is similar to the Pentax in that it produces a file that captures all color data without increasing the megapixel count. Then there's the newer stitching ability that captures up to 16 images and increases file size 4x. Both the newer and older versions were implemented on the A7RIV... only the older one was on the A7RIII.
 

SrMphoto

Well-known member
They both produce the exact same 187 megapixel file.
Tests published on LUF indicate that the Multishot images from the SL2 are (slightly?) better than the ones produced by the S1R. I would not be surprised if Leica uses different code to assemble the files.

As far as Sony goes, they produce two different types of stitched files. One is similar to the Pentax in that it produces a file that captures all color data without increasing the megapixel count. Then there's the newer stitching ability that captures up to 16 images and increases file size 4x. Both the newer and older versions were implemented on the A7RIV... only the older one was on the A7RIII.
In the a7RIV, pixel shift increases the resolution by 4x. The generated files total 16x the data, as the 16 captures are saved on the SD card and transferred to the computer for assembly. I assume that one would keep those 16 files in case a better assembly program appears.
 

SrMphoto

Well-known member
I'm a fan of the Topaz plugins despite a general dislike of AI, or, more properly Machine Learning. It is effective, but for inscrutable reasons. Nevertheless, Gigapixel AI does an astonishingly good job with pixellated color art (my daughter's anime frame grab).

It's interesting that different implementations give different results under different circumstances. That means the technology has a long path of improvement ahead of it. We all win.

As for downsampling, much as I hate to agree with DL as a matter of principle, I approve wholeheartedly. I have never printed much over a meter on the long axis, and while I can tell the difference between 24 and 37.5 MP at that size, I can't see improvement thereafter. I have an 8,000 x 25,000 3 shot vertical pano of a tall, thin building in NYC. Printing it 40" high is over 600 dpi!

Matt

PS. I just heard something of how Leica manages movement, and it seems more robust than any other method I've heard of. On the other hand, it wants a REALLY stable tripod. A Gitzo 2 (old carbon model, FWIW) needs 10 seconds to stabilize. Forecast: Dual layer IBIS. One to keep the sensor stable and a second one to perform the shifts. Probably (years) too late to patent the idea.
The resolution is only one of the possible benefits of Multishot. It is probably the least important one but is certainly the most spectacular one :).

I had very good results with the Gitzo GT2532.

The biggest disadvantage of Multishot is slow processing in post (PS, NIK).
 

MGrayson

Subscriber and Workshop Member
The resolution is only one of the possible benefits of Multishot. It is probably the least important one but is certainly the most spectacular one :).

I had very good results with the Gitzo GT2532.

The biggest disadvantage of Multishot is slow processing in post (PS, NIK).
I haven't touched an SL2, but is there a toggle for "wait until the camera is still before shooting"?

--Matt, who should RTFM.... :facesmack:
 

SrMphoto

Well-known member
I haven't touched an SL2, but is there a toggle for "wait until the camera is still before shooting"?

--Matt, who should RTFM.... :facesmack:
A warning dialog appears in Multishot mode if the camera detects considerable movement, the goal being to remind you to keep the camera still. There is no "wait until the camera is still" mode.
 

glenerrolrd

Workshop Member
They both produce the exact same 187 megapixel file.

As far as Sony goes, they produce two different types of stitched files. One is similar to the Pentax in that it produces a file that captures all color data without increasing the megapixel count. Then there's the newer stitching ability that captures up to 16 images and increases file size 4x. Both the newer and older versions were implemented on the A7RIV... only the older one was on the A7RIII.
Thanks for the info about the file sizes... it makes sense that the Leica approach would be based on the Panasonic sensor that both cameras share. Seems like a nice feature when you have a subject that will hold still for you. :ROTFL:
 

iiiNelson

Well-known member
Tests published on LUF indicate that the Multishot images from the SL2 are (slightly?) better than the ones produced by the S1R. I would not be surprised if Leica uses different code to assemble the files.
This likely has more to do with a different sensor stack (Leica has fewer glass elements and a slightly thinner stack over the sensor in the SL2 when compared to the S1R), which would increase perceived detail at the pixel level. The SL2 shows a slight increase in detail with many lenses even without pixel shift, according to Reid Reviews, though in reality one would need to pixel peep to see the differences. At regular viewing distances, or without using a loupe, I would bet that it would be hard for people to tell them apart when unlabeled, if using the same lenses on each camera. I would be extremely surprised if they were drastically different at a software level, given how similar they are internally.

It seems that there is a lot of parts sharing between L-Mount Alliance members: for instance, the Sigma fp uses the Leica Q battery that's manufactured by Panasonic, the microprocessor in the Sigma fp is the same one as in the Panasonic S1 (but flashed at the factory with Sigma firmware), and there are many unofficially confirmed rumors that Panasonic manufactured much of the internal circuitry for the Leica SL 601 back in the 2014-15 timeframe... which is why the forming of the L-Mount Alliance should be no huge surprise. Leica was using Panasonic's technology (DFD) even back then. A huge part of what makes these cameras possible is the shared development costs between the alliance members to bring diversified products to the market.

There’s absolutely no shame in any of this (IMO anyway) in a shrinking camera market. If technology sharing is what needs to happen for some of my favorite camera companies to financially survive, then I’m all for it personally.
 

iiiNelson

Well-known member
Thanks for the info about the file sizes... it makes sense that the Leica approach would be based on the Panasonic sensor that both cameras share. Seems like a nice feature when you have a subject that will hold still for you. :ROTFL:
No worries. I know some people may get touchy about comparing Leica L-mount cameras to the partner versions (not saying you), but there is more in common internally between the cameras than not. No shame in it, as all of them are absolutely amazing cameras. I chose to go with the Panasonic for a specific reason - they have the three-way tilting screen. The SL2 has some improved video features like LOG profiles, 5K video, 10-bit color for video, DCI 4K, etc. that I'd like to have... and perhaps Panasonic will add them in firmware down the line.

In any case, excellent cameras all around, and if the SL2 (or even the SL or the FF Panasonic cameras) had been around in 2012-13, I probably would never have moved from Leica to Sony, in all honesty.
 

SrMphoto

Well-known member
This likely has more to do with a different sensor stack (Leica has fewer glass elements and a slightly thinner stack over the sensor in the SL2 when compared to the S1R), which would increase perceived detail at the pixel level. The SL2 shows a slight increase in detail with many lenses even without pixel shift, according to Reid Reviews, though in reality one would need to pixel peep to see the differences. At regular viewing distances, or without using a loupe, I would bet that it would be hard for people to tell them apart when unlabeled, if using the same lenses on each camera. I would be extremely surprised if they were drastically different at a software level, given how similar they are internally.

It seems that there is a lot of parts sharing between L-Mount Alliance members: for instance, the Sigma fp uses the Leica Q battery that's manufactured by Panasonic, the microprocessor in the Sigma fp is the same one as in the Panasonic S1 (but flashed at the factory with Sigma firmware), and there are many unofficially confirmed rumors that Panasonic manufactured much of the internal circuitry for the Leica SL 601 back in the 2014-15 timeframe... which is why the forming of the L-Mount Alliance should be no huge surprise. Leica was using Panasonic's technology (DFD) even back then. A huge part of what makes these cameras possible is the shared development costs between the alliance members to bring diversified products to the market.

There’s absolutely no shame in any of this (IMO anyway) in a shrinking camera market. If technology sharing is what needs to happen for some of my favorite camera companies to financially survive, then I’m all for it personally.
This is the test that describes the difference in Multishot. It does not look like it is caused by a difference in the sensor stack, but rather by a difference in the software used to assemble it:

https://www.l-camera-forum.com/topic/310699-leica-sl2-firmware-20-187-mp-multishot-mode/?do=findComment&comment=3997115


I need to do my own tests with my S1R and SL2 before I accept such a significant difference in Multishot quality.
 

SrMphoto

Well-known member
No worries. I know some people may get touchy about comparing Leica L-mount cameras to the partner versions (not saying you), but there is more in common internally between the cameras than not. No shame in it, as all of them are absolutely amazing cameras. I chose to go with the Panasonic for a specific reason - they have the three-way tilting screen. The SL2 has some improved video features like LOG profiles, 5K video, 10-bit color for video, DCI 4K, etc. that I'd like to have... and perhaps Panasonic will add them in firmware down the line.

In any case, excellent cameras all around, and if the SL2 (or even the SL or the FF Panasonic cameras) had been around in 2012-13, I probably would never have moved from Leica to Sony, in all honesty.
I hope you don't mean me by "touchy people" :).

I own both the S1R and the SL2. I admit that my preferred camera is the SL2, but I would not say that the SL2 is generally better.
In some areas the S1R is better, and in others it is the SL2. No shame in admitting that :).

Luckily for us, they are different and can fulfill the needs of a wide range of users.
 

D&A

Well-known member
Hi Joe,

I hate to assess comparative detail on an image viewed at screen resolution (for obvious reasons), but when I examine the two images you posted, the upper one (the interpolated one that's uprezzed using Topaz Gigapixel AI)... appears to have significantly more detail all around (on individual leaves, etc.) when compared to the bottom image, which represents the multishot image. Maybe selective crops of each file might change what I see on my screen. In any case, thanks ever so much for providing these samples.

Dave (D&A)
 