The GetDPI Photography Forum


Leica SL adds Multishot capability 187MP

glenerrolrd

Workshop Member
OK, sounds very impressive...especially if you already have an SL2! But I just don't get it. Maybe we have different objectives...when most people think of huge 187MP files, they jump to large prints (not requiring any upsizing). But my print sizes are fixed by my printer and how I handle them...I like 13x19 on 17x22 paper. This is clearly within the range of most files produced by 47MP cameras.

When I see comparisons, I want to see the same output dimensions for both the native 47MP file and the 187MP file. Showing a 1:1 enlargement tells me only that the 187MP file can easily be printed big.

Will I see any improvement in resolution, tone separation, or noise when comparing a 47MP file printed at 13x19 with a 187MP file printed at the same size?
 
For static subjects the pixel-shifted files are superior in every way. Four of the captures provide full RGB data at each pixel, so no Bayer interpolation, and four increase the resolution of the file. The system works well...but again, for static subjects. It's great for still life. However, even landscape can cause problems because of moving elements. There is a mode that algorithmically cleans up the movement, but it's imperfect.

I have the S1R, so an SL2 minus the Leica branding. I use the pixel-shift mode for scanning film. It creates an amazing file.
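The full-RGB part of that description can be illustrated with a toy simulation (hypothetical code, not the actual Panasonic/Leica pipeline): shifting the sensor one full pixel around a square means every scene position is eventually sampled through an R, a B, and two G filters of an RGGB array, so a static scene is recovered with no demosaicing.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((8, 8, 3))  # toy "ground truth" scene, values in [0, 1)

def cfa_channel(y, x):
    # RGGB Bayer pattern: R at (even, even), G at the two mixed
    # parities, B at (odd, odd); 0=R, 1=G, 2=B
    return [[0, 1], [1, 2]][y % 2][x % 2]

# Four exposures with the sensor moved one full pixel in a square,
# so the filter array moves relative to the scene.
shifts = [(0, 0), (0, 1), (1, 1), (1, 0)]

H, W, _ = scene.shape
acc = np.zeros((H, W, 3))   # accumulated samples per scene position/channel
cnt = np.zeros((H, W, 3))   # how many times each channel was measured
for dy, dx in shifts:
    for y in range(H):
        for x in range(W):
            sy, sx = (y + dy) % H, (x + dx) % W  # scene point under this pixel (edges wrap in the toy)
            c = cfa_channel(y, x)                # filter color is fixed to the sensor pixel
            acc[sy, sx, c] += scene[sy, sx, c]
            cnt[sy, sx, c] += 1

full_rgb = acc / cnt  # every channel measured directly -- no Bayer interpolation
assert np.allclose(full_rgb, scene)  # exact recovery, but only for a static scene
```

Each scene position ends up with one R, two G, and one B measurement, which is also why motion between the exposures (the recurring caveat in this thread) breaks the reconstruction.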
 

glenerrolrd

Workshop Member
Has anyone seen any information about how Leica implemented the pixel shift, and specifically how it might be similar to or different from the S1R?
 

Shashin

Well-known member
There is very little advantage to pixel shift over Bayer interpolation. I ran tests on that and the difference in resolution was not really perceptible.

At your print size, there is no advantage to pixel shift.
 

MGrayson

Subscriber and Workshop Member
There is very little advantage to pixel shift over Bayer interpolation. I ran tests on that and the difference in resolution was not really perceptible.

At your print size, there is no advantage to pixel shift.
I thought the advantage was same number of pixels, but better color and no Moiré.

¯\_(ツ)_/¯

Matt
 

Shashin

Well-known member
I thought the advantage was same number of pixels, but better color and no Moiré.

¯\_(ツ)_/¯

Matt
Theoretically or visually? I ran a comparison of images and the difference was not perceptible. When I trained students in this technology, they could not perceive a difference in the images at 100% and simply did not use the feature.

I am not sure that moiré is the issue; it is an effect of object/image and sensor spatial frequencies. I have not heard that the Bayer array contributes to it, but it might. Either way, pixel shift will not eliminate moiré.
 

MGrayson

Subscriber and Workshop Member
Theoretically or visually? I ran a comparison of images and the difference was not perceptible. When I trained students in this technology, they could not perceive a difference in the images at 100% and simply did not use the feature.

I am not sure that moiré is the issue; it is an effect of object/image and sensor spatial frequencies. I have not heard that the Bayer array contributes to it, but it might. Either way, pixel shift will not eliminate moiré.
IF, and that's a big if, the pixel shift is to get each pixel covered by a different color in the filter array, then no Bayer de-mosaicing need be done, and the effect is a real 47MP Foveon. I thought that's what product photographers used their Hassy MultiShot backs for back in the day. You can still get spatial frequency aliasing, but you won't get color aliasing.

The headline number, as always, is MP, but it is truly not helpful there. Even theoretically, all you're doing is interpolating a 47MP image. You, me, Jim Kasson - we've all done the math. You can't get higher than Nyquist, no matter how much you jiggle the pixels. Come to think of it, it's like doing a rolling window regression. The serial correlation makes for a fabulous (and false) R^2.

M
 

iiiNelson

Well-known member
IF, and that's a big if, the pixel shift is to get each pixel covered by a different color in the filter array, then no Bayer de-mosaicing need be done, and the effect is a real 47MP Foveon. I thought that's what product photographers used their Hassy MultiShot backs for back in the day. You can still get spatial frequency aliasing, but you won't get color aliasing.

The headline number, as always, is MP, but it is truly not helpful there. Even theoretically, all you're doing is interpolating a 47MP image. You, me, Jim Kasson - we've all done the math. You can't get higher than Nyquist, no matter how much you jiggle the pixels. Come to think of it, it's like doing a rolling window regression. The serial correlation makes for a fabulous (and false) R^2.

M
From all I've read, that's the exact benefit of pixel shifting. I have never used it on my S1R, but it's there. It's one of those features I forget exists, but maybe it'll be useful to have one day.
 

iiiNelson

Well-known member
Has anyone seen any information about how Leica implemented the pixel shift, and specifically how it might be similar to or different from the S1R?
Given the specs and the similarity between both cameras, I wouldn’t be surprised if the SL2 worked the exact same way as it does in the S1R.
 

glenerrolrd

Workshop Member
To keep this discussion relevant, we need to avoid pure theory and generalizing from other pixel-shift technologies. It's hard enough to follow as it is.

The firmware used to process the pixel-shifted captures (in camera) has a major impact on the resulting file. If you go back to the diglloyd tests of the S1R, he praises the Panasonic implementation and compares it to the Sony approach. The difference seems to be AI that helps with small movements in subjects.

Having looked at the many tests out there now, it's obvious to me that pixel shift as implemented on the SL2 improves resolution and reduces noise. The image quality is clearly superior. The disadvantage is that you can't use it with any subject movement, and every test recommends locking down the camera on a heavy tripod. You also have to work with massive capture files...that can't be fun.
 

D&A

Well-known member
Joe and others...a few questions regarding both Leica's and other manufacturers' implementations of pixel shift in general.

1. Does pixel shift occur predominantly in one direction (i.e., on one axis)? If so, does the apparent blurring of a moving object (i.e., the amount observed) depend on whether it's moving in a horizontal, vertical, or random direction?

2. I assume (and maybe incorrectly) that the subject movement seen when pixel shift is used depends on camera-to-subject distance. By this I mean that a subject at infinity, and thus small in the frame (like distant trees blowing in the wind on a mountaintop in a landscape shot), would exhibit less movement/blur than trees at much closer range (which fill a larger percentage of the frame).

3. Can one assume the subject movement/blurring in a pixel-shifted image is akin in some respects to shooting a moving subject at too slow a shutter speed (when pixel shift is not used)?

One reason (among others) for asking is the shooting of fireworks. Fireworks are generally shot with the shutter open for a few seconds; most fall in the vertical direction, and blurring, by the nature of the subject matter, is generally not noticeable. I wonder whether pixel shift in such circumstances would be a benefit or a hindrance? I realize that theoretical considerations may differ greatly from what's actually experienced.

Dave (D&A)
 

Shashin

Well-known member
Joe and others...a few questions regarding both Leica's and other manufacturers' implementations of pixel shift in general.

1. Does pixel shift occur predominantly in one direction (i.e., on one axis)? If so, does the apparent blurring of a moving object (i.e., the amount observed) depend on whether it's moving in a horizontal, vertical, or random direction?
It is on two axes. For example, the camera creates additional resolution by moving the sensor half a pixel left, then down, then right. When shifting for color, the sensor is shifted in the same pattern but a full pixel distance, so it can make an R, G, and B exposure at each position through the Bayer array.

2. I assume (and maybe incorrectly) that the subject movement seen when pixel shift is used depends on camera-to-subject distance. By this I mean that a subject at infinity, and thus small in the frame (like distant trees blowing in the wind on a mountaintop in a landscape shot), would exhibit less movement/blur than trees at much closer range (which fill a larger percentage of the frame).
Since there are multiple exposures as the pixels are shifted, movement can be captured if the image moves. And as you pointed out, this is relative movement, just like camera-shake motion. It really depends on the degree of motion of the image during the exposure. The appearance of artifacts can vary with this motion, with whether you are shifting for resolution or color, and with the nature of the object and its motion.

3. Can one assume the subject movement/blurring in a pixel-shifted image is akin in some respects to shooting a moving subject at too slow a shutter speed (when pixel shift is not used)?
Perhaps. The sequence of short exposures interacting with the actual motion can create unnatural-looking artifacts that don't look like a simple blur. I have had artifacts that look like a pixel pattern with color artifacts.

One reason (among others) for asking is the shooting of fireworks. Fireworks are generally shot with the shutter open for a few seconds; most fall in the vertical direction, and blurring, by the nature of the subject matter, is generally not noticeable. I wonder whether pixel shift in such circumstances would be a benefit or a hindrance? I realize that theoretical considerations may differ greatly from what's actually experienced.

Dave (D&A)
I am kind of a try-it-and-see person. It is hard to predict the results of pixel shift, as there are so many variables contributing to the final image. Obviously, pixel shift has a greater chance of introducing artifacts, but at the same time, they need to be apparent and impose a greater cost than the benefit. In the case of fireworks, I have not known anyone to use pixel shift, so you might be pioneering that use (or others have tried it and the results were so poor they were not worth sharing).

So in conclusion, this is a definite maybe...whatever "this" is.
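The half-pixel resolution shift described above can be sketched in one dimension (illustrative code only; the real cameras shift on two axes and combine eight frames): two exposures offset by half a pixel pitch interleave into a series sampled every half pixel.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1000, endpoint=False)
scene = np.sin(2 * np.pi * 3 * x)  # smooth stand-in for the projected image
pitch = 100                        # scene samples per pixel well

def expose(offset):
    # Each pixel well averages the scene over its pitch, starting at `offset`.
    n = (len(scene) - offset) // pitch
    return np.array([scene[offset + i * pitch: offset + (i + 1) * pitch].mean()
                     for i in range(n)])

normal = expose(0)           # 10 pixels at full pitch
half = expose(pitch // 2)    # 9 pixels, sensor shifted half a pixel

# Interleaving the two exposures yields one sample every half pitch.
samples = np.empty(len(normal) + len(half))
samples[0::2] = normal
samples[1::2] = half
```

The catch, as discussed throughout the thread, is that the two exposures happen at different times, so anything that moves between them lands in the wrong half-pixel slot.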
 

D&A

Well-known member
Thanks Will for your thoughtful and informed response. What I neglected to take into account when posing my questions was the possibility of artifacts created instead of, or in conjunction with, the motion blur produced with pixel shift.

As an aside, what makes fireworks a bit different from most subjects when shooting at shutter speeds of a few seconds is that, although the subject matter is moving, bursts at various times seem to create the illusion of sharpness rather than simply a blurry image. On top of this, the artifacts (especially color artifacts) created when pixel shift is employed might not even be noticed, by the very nature of colorful fireworks.

I would love to test out these theories as suggested, but alas I don't own a camera with pixel-shift technology (although that's a good excuse to obtain one). Yes, renting is of course an option. Thanks.

Dave (D&A)
 

glenerrolrd

Workshop Member
Thanks Joe for taking the time to do this. I have two questions:

1. Is 100% a relevant comparison? For any given output size (screen or print), the multi-shot file will have more pixels and should look sharper. Don't you have to match output physical dimensions, not pixels? Doing a 100% pixel view compares pixel quality but not overall file quality. Am I confused about this?

2. I understood that the SL2 has a multi-shot option that is used to minimize subject movement. Did you use it?

I took a few test shots from our deck, testing the SL2 multi-shot feature on trees with leaf movement. Here's the scene (using the standard resolution file):



I focused on the crape myrtle tree in the center of the frame. There was some leaf movement, especially in the trees just behind the crape myrtle.

Now a 100% crop from the standard rez file:



And a 100% crop from the multi-shot file:



I imported the photos into Lightroom, used Auto in the Develop module (making sure to use the same settings for both), then used Photoshop to produce the TIFFs and crops. No sharpening except the default LR sharpening.

My conclusion from this brief test is that multi-shot is usable for landscape scenes with some leaf movement. I could see artifacts in areas where there was moderate leaf movement. You'd have to pixel peep to see it, but it was there. Since the camera produces both a multi-shot file and a standard rez file with a single shutter click, you don't really have to choose between the two until you process the photos using a desktop display.

Joe
 

Shashin

Well-known member
Thanks Will for your thoughtful and informed response. What I neglected to take into account when posing my questions was the possibility of artifacts created instead of, or in conjunction with, the motion blur produced with pixel shift.

As an aside, what makes fireworks a bit different from most subjects when shooting at shutter speeds of a few seconds is that, although the subject matter is moving, bursts at various times seem to create the illusion of sharpness rather than simply a blurry image. On top of this, the artifacts (especially color artifacts) created when pixel shift is employed might not even be noticed, by the very nature of colorful fireworks.

I would love to test out these theories as suggested, but alas I don't own a camera with pixel-shift technology (although that's a good excuse to obtain one). Yes, renting is of course an option. Thanks.

Dave (D&A)
Sorry, Dave. Didn't get the thing about motion blur. I would imagine that purposely blurring motion would not be a problem with pixel shift--the intermediate shifted images would have the same exposure time and so should blend. My experience of artifacts has more to do with short exposure times and objects in relatively fast motion, where the shifted frames are captured at a slower rate than the exposure time. The only thing I can think of that might be a problem with fireworks is if they contain sparkling-type effects, but since those would be randomly dispersed, I imagine any artifacts would not be apparent--if that makes sense.
 

gerald.d

Well-known member
I've shot a lot of images with pixel shift on the S1R.

The biggest challenge in shooting is focusing - there is no way to tell, before you see the final file, whether or not you have nailed focus (I typically shoot in the macro ballpark with depth of field of a millimeter or less - of course your mileage may vary depending on what you're shooting).

My biggest concern with the outputs however is digital combing effects on areas with high contrast. I'll try to dig up a file and post here later, but it makes the files unusable for me.

Kind regards,


Gerald.
 

MGrayson

Subscriber and Workshop Member
Halfway between theory and pictures are simplified models. Here's a simple light/dark boundary with and without pixel shift.

First, without:



On top is "reality". Then a depiction of the pixel wells, and finally, the light gathered by those wells.

Now we add shifted pixels.



So right out of the box, we see the pixel-shifted plot clearly gives a better sense of the location of the dark/light boundary.

But wait! What happens if we upsample the unshifted image? Here is upsampling using Mathematica's cubic interpolation of the unshifted output (in blue) against the pixel-shifted output (in orange). The blue curve doesn't require an electronic shutter or a motionless subject. Also bear in mind that we're viewing this at about 100x magnification. Our upsampled pixels here are 1/2 inch wide!



Actually, the difference is visible only at 100%, so I'll spare you the computer-generated images. But then, I'm using the lowest-tech upsampling.

Matt
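Matt's simplified edge model can be reproduced in a few lines (illustrative code, not the original Mathematica; plain linear interpolation stands in for the cubic upsampling): pixel wells sample a hard light/dark boundary with and without a half-pixel shift, and the unshifted data is then upsampled to the same positions for comparison.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1000, endpoint=False)
reality = (x > 0.5).astype(float)  # hard light/dark boundary at x = 0.5
pitch = 100                        # scene samples per pixel well

# 10 unshifted wells, and 9 wells with the sensor moved half a pixel
wells = reality.reshape(-1, pitch).mean(axis=1)
shifted = reality[pitch // 2:-(pitch // 2)].reshape(-1, pitch).mean(axis=1)

centers = (np.arange(10) + 0.5) / 10       # unshifted well centers
half_centers = (np.arange(9) + 1.0) / 10   # shifted well centers

# Interleave: 19 samples at half-pixel spacing
pos = np.empty(19)
combined = np.empty(19)
pos[0::2], pos[1::2] = centers, half_centers
combined[0::2], combined[1::2] = wells, shifted

# Lowest-tech upsampling of the unshifted data to the same positions
upsampled = np.interp(pos, centers, wells)
```

For this model the interpolated curve lands within a few hundredths of the pixel-shifted samples near the edge, consistent with the post's point that simple upsampling gets you most of the way there.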
 

Shashin

Well-known member
So, let's say you are using a 44" printer and the largest print you can make is about 60x40". Let's also say that photo quality means that half the standard viewing distance would still give you an image you cannot out-resolve (150 dpi in print terms; if you have ever printed at 150 dpi, you know the image holds up visually, so it is a pretty conservative measure). At that print size with the unshifted image, you could have a viewing distance of about 20" and perceive the image as high quality. With the shifted image, that would be a viewing distance of 10". An uprezed image would also be 10". I don't know about you, but a 10" viewing distance would require some kind of optical aid for me.

I regularly printed 30x40" exhibition prints from my 40MP files. I never needed to uprez them, and you could view them from any distance without loss of photo quality. In fact, the printer/paper combination would not allow all the detail in the file to be rendered.

Pixel shift is not going to give you any meaningful resolution for printing. And if you need more, I would just uprez that image and no one will perceive the difference in a print.
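The arithmetic behind those viewing distances can be sketched as follows. The pixel dimensions (8368 and 16736 on the long edge) and the one-arc-minute acuity constant are assumptions for illustration, not figures from the post, so the numbers come out a little different from the 150-dpi estimate above.

```python
# Print resolution for a given long edge, in pixels per inch.
def ppi(pixels_long_edge, print_long_edge_inches):
    return pixels_long_edge / print_long_edge_inches

single = ppi(8368, 60)    # ~139 ppi on a 60x40" print from the single shot
multi = ppi(16736, 60)    # ~279 ppi from the pixel-shifted file

# One-arc-minute acuity rule of thumb: the eye resolves roughly
# 3438 / distance_inches ppi, so the closest distance at which a print
# still looks continuous is roughly 3438 / ppi.
def closest_distance_inches(p):
    return 3438 / p

for label, value in [("single", single), ("multi", multi)]:
    print(label, round(value), "ppi,", round(closest_distance_inches(value)), "in")
```

Either way, the extra resolution only matters within about a foot of a 60x40" print, which is the post's point.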
 

SrMphoto

Well-known member
Multishot/pixel shift is not only about resolution. It has better DR, better colors, and less noise than a single shot. And it is a joy to look at well-executed 187MP files on screen :).

But processing those images (1GB TIFFs) is a PITA :(. Maybe it makes sense to reduce the size of 187MP files before processing them (100MP, 60MP?).
 

SrMphoto

Well-known member
I've shot a lot of images with pixel shift on the S1R.

The biggest challenge in shooting is focusing - there is no way to tell, before you see the final file, whether or not you have nailed focus (I typically shoot in the macro ballpark with depth of field of a millimeter or less - of course your mileage may vary depending on what you're shooting).

My biggest concern with the outputs however is digital combing effects on areas with high contrast. I'll try to dig up a file and post here later, but it makes the files unusable for me.

Kind regards,


Gerald.
Some posters on LUF who own both the S1R and the SL2 claim that the SL2 gives better high-resolution results than the S1R.
 