The GetDPI Photography Forum


Fuji sub-µm technology will bring pixel-shift and 400Mpx to the GFX100

mristuccia

Well-known member
This is much easier said before you have used the Automated in-camera Frame Averaging of the IQ4.

Once you've done it in-camera, with a single click, resulting in a "normal" raw file you can treat like any other (that just happens to be unbelievably clean in the shadows and allows long exposure without an ND filter), it's hard to view the capture-a-bunch-of-raws-convert-to-tiffs-manually-keep-track-of-which-ones-belong-to-which-stack-then-load-into-huge-photoshop-document-and-average method as "easily achievable".

It's like learning to do calculus or standard deviations by long-hand vs pushing a button and having the answer. Maybe some will enjoy the extra work, but most people just want the end result, and in-camera Automated Frame Averaging is just a way faster, easier, less distracting way of getting there.
Doug,

don't put it so badly. :p
I understand that the automated FA experience is night and day compared to doing it manually. But there are people like me who aren't bothered at all; the pace is slow in any case when I use digital backs and technical cameras. And I have time and patience. A simple macro automation would do the job, and I'm a software developer, so I could automate the FA post-processing further if I really needed to. But I don't need it.

keep-track-of-which-ones-belong-to-which-stack?
I just photograph my hand between each stack group. The rest is done by the image counter. Easy-peasy. :)

But what I wanted to say in my quoted post is that trading a thing that cannot be done manually (pixel shift) for one that can be (FA) is not my way of thinking.
Of course, both our biased mileages may vary...
 

mristuccia

Well-known member
And introduce new artifacts unique to pixel shifting :).

(Bias alert: My company (Digital Transitions) chooses to sell cameras (Phase One) that do not do pixel shifting and chooses not to sell cameras (e.g. Sinar, Hasselblad, Fuji) that do. So I'm obviously biased. But I also have quite a lot of experience working with clients to evaluate against these options. I've posted more of what I've learned from that experience here.)
It could be, but it doesn't have to be. Let's wait and then test... :rolleyes:
 

dougpeterson

Workshop Member
I understand that the automated FA experience is night and day compared to doing it manually. But there are people like me who aren't bothered at all; the pace is slow in any case when I use digital backs and technical cameras. And I have time and patience.
As I said, some people will enjoy calculating standard deviation long hand rather than using a calculator. Not a thing in the world wrong with that.

I personally enjoy doing dishes by hand more than using a dishwasher.

Different strokes...
 

Audii-Dudii

Active member
This is much easier said before you have used the Automated in-camera Frame Averaging of the IQ4.

Once you've done it in-camera, with a single click, resulting in a "normal" raw file you can treat like any other (that just happens to be unbelievably clean in the shadows and allows long exposure without an ND filter), it's hard to view the capture-a-bunch-of-raws-convert-to-tiffs-manually-keep-track-of-which-ones-belong-to-which-stack-then-load-into-huge-photoshop-document-and-average method as "easily achievable".
In my experience, using Photoshop's "Load files into Stack" utility makes it quite easy to load files for further processing. And a big advantage of going this route instead of averaging frames in-camera is that one isn't limited solely to averaging the frames, but can also process them in several other ways, some of which -- such as median blending -- can produce even better results for some purposes than just simple frame averaging. And because the stacked files are a Smart Object, it's easy to experiment with various blending modes to see if better results can be achieved using a blending method other than averaging, even if one ultimately decides that averaging is, in fact, the best choice.
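(To make the mean-vs-median distinction above concrete, here is a minimal numpy sketch; the random stack is only a stand-in for a set of aligned exposures, and all names are hypothetical.)

Code:
import numpy as np

# Stand-in for 10 aligned exposures of the same scene.
stack = np.random.rand(10, 512, 512)

mean_blend = np.mean(stack, axis=0)      # plain frame averaging
median_blend = np.median(stack, axis=0)  # median blend

The median ignores values that show up in only a few frames, which is why it can remove transient content (a passer-by, a headlight trail) that a plain average would leave behind as a faint ghost.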

It's like learning to do calculus or standard deviations by long-hand vs pushing a button and having the answer. Maybe some will enjoy the extra work, but most people just want the end result, and in-camera Automated Frame Averaging is just a way faster, easier, less distracting way of getting there.
Another point in favor of using the long-hand method is that it costs nothing but time, as there's no need to buy a fancy calculator to do the extra work for you. Now, for some photographers, time is money -- I get that -- but for many other photographers -- *raises hand!* -- money is more valuable than time, so the long-hand method is the only option realistically available to them. And for yet other photographers -- *raises hand again!* -- the quality of results matters more than the time or money required to achieve them, hence the flexibility of being able to blend files using different modes during post-processing is essential.

<rant>Phase One is certainly to be applauded for offering this feature, but personally, I find the zeal with which it is being promoted by some (and not necessarily you, Doug, because you do acknowledge similar results can be achieved via other methods) bothers me somewhat, because many are implying this is a breakthrough of sorts by Phase One, whereas in reality, all they did was bring in-camera a processing technique that has been commonly used in another field of photography for decades and limit its flexibility in the process. In-camera frame averaging is convenient, sure; but it's hardly the breakthrough some are trying to sell it as... </rant>
 

mristuccia

Well-known member
As I said, some people will enjoy calculating standard deviation long hand rather than using a calculator. Not a thing in the world wrong with that.

I personally enjoy doing dishes by hand more than using a dishwasher.

Different strokes...
It is more like calculating the standard deviation on a programmable pocket calculator in the field versus doing it on a desktop PC at home. No paperwork in either case. ;)

But I get your point, and by the way I like to hand wash dishes as well. :)
 

MGrayson

Subscriber and Workshop Member
A camera tethered to a laptop would be able to do FA in real time with fairly simple programming. You’d want software that could average RAW pixel data and not apply demosaicing until done. Dcraw?

Matt
 

mristuccia

Well-known member
A camera tethered to a laptop would be able to do FA in real time with fairly simple programming. You’d want software that could average RAW pixel data and not apply demosaicing until done. Dcraw?

Matt
I think that LibRaw and a little bit of coding could easily perform FA directly on the RAW data.

Not sure whether averaging pairwise like this gives exactly the right result, but I assume we could also do something like this:

Code:
load first image and make it the current image
while (there is a next image)
{
    load next image
    average it with the current image and make the result the current image
}
save the current image
By doing this no more than two images must be kept in memory, and I suspect that this is more or less the method used by P1 inside their IQ4 150 back.
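(For the curious, a minimal sketch of that LibRaw idea, assuming the rawpy Python bindings for LibRaw plus numpy; the file names are hypothetical, and the result is kept as a bare mosaic array, summed and divided once at the end, rather than written back into a raw container.)

Code:
import numpy as np
import rawpy  # Python bindings for LibRaw

paths = ["frame_001.dng", "frame_002.dng", "frame_003.dng"]  # hypothetical

total = None
for path in paths:
    with rawpy.imread(path) as raw:
        # raw_image is the undemosaiced sensor mosaic (uint16); copy it out
        mosaic = raw.raw_image.astype(np.float64)
    total = mosaic if total is None else total + mosaic

average = total / len(paths)            # averaged mosaic, still undemosaiced
np.save("averaged_mosaic.npy", average)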
 

MGrayson

Subscriber and Workshop Member
I think that LibRaw and a little bit of coding could easily perform FA directly on the RAW data.

Not sure whether averaging pairwise like this gives exactly the right result, but I assume we could also do something like this:

Code:
load first image and make it the current image
while (there is a next image)
{
    load next image
    average it with the current image and make the result the current image
}
save the current image
By doing this no more than two images must be kept in memory, and I suspect that this is more or less the method used by P1 inside their IQ4 150 back.
The correct formula has to keep a count of how many images have already been absorbed: the k-th new_average = (1/k) * new_image + ((k-1)/k) * old_average

Still simple enough.
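(A sketch of that update rule, assuming frames arrive as numpy arrays; the rearranged form avg + (frame - avg)/k is algebraically identical to (1/k)*new_image + ((k-1)/k)*old_average but a bit friendlier numerically.)

Code:
import numpy as np

def running_average(frames):
    avg = None
    for k, frame in enumerate(frames, start=1):
        frame = frame.astype(np.float64)
        if avg is None:
            avg = frame               # k = 1: the average is the first frame
        else:
            avg += (frame - avg) / k  # = (1/k)*new + ((k-1)/k)*old
    return avg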
 

mristuccia

Well-known member
The correct formula has to keep a count of how many images have already been absorbed: the k-th new_average = (1/k) * new_image + ((k-1)/k) * old_average

Still simple enough.
Yes, you're right! But I fear that 1/k could be too small if the number of images is big; we could lose precision by applying your "right" formula. I don't remember whether a pixel in the RAW file is represented by an 8- or 16-bit value; I suppose the latter. However, I'm curious. When I find a little bit of time I will try it and let you know.
 

Greg Haag

Well-known member
Greg - are you sure your S1R does a good job?

I get bad "combing" (not sure what else to call it) anywhere there is high contrast - for example, black text on a white background.

I've actually given up using the pixel-shift on the S1R because of it.

Kind regards,


Gerald.
Gerald,
Sorry for the delay on the combing issue. This is not how I shoot, so I pieced together as best I could something to test the combing on high-contrast text. This was shot with the S1R hi-res mode and the Canon adaptor with the Canon 100mm macro; I have included a BTS shot for clarity. I shot it in both mode 1 and mode 2, and in this example I felt mode 2 performed better. I am not sure whether this actually addresses what you were referring to; if not, let me know and I can try again.
Thanks,
Greg

[Attached: S1R w Canon 100macro-1001980.jpg, S1R w Canon 100macro bts-1358.jpg]
 

Abstraction

Well-known member
I find that I don't need to do standard deviation calculations. I can usually just take one look at a guy and tell you right away the amount of standard deviation. Sometimes, it's substandard deviation.
 

MGrayson

Subscriber and Workshop Member
Yes, you're right! But I fear that 1/k could be too small if the number of images is big; we could lose precision by applying your "right" formula. I don't remember whether a pixel in the RAW file is represented by an 8- or 16-bit value; I suppose the latter. However, I'm curious. When I find a little bit of time I will try it and let you know.
I suppose from a numerical analysis point of view, the frames should all just be added, and then the sum divided by N at the end. That avoids a lot of multiplications by 995/996, etc. Some buffer has to hold floating-point values. I wonder how much of a camera's chip handles floating-point arrays. The output is integers, and the A/D converters provide integers. You'd really want either floating point, or much larger integers than the bit depth of a pixel normally produces.

Matt
 

mristuccia

Well-known member
I suppose from a numerical analysis point of view, the frames should all just be added, and then the sum divided by N at the end. That avoids a lot of multiplications by 995/996, etc. Some buffer has to hold floating-point values. I wonder how much of a camera's chip handles floating-point arrays. The output is integers, and the A/D converters provide integers. You'd really want either floating point, or much larger integers than the bit depth of a pixel normally produces.

Matt
That is a good point.
Yes, everything is integer inside the camera.
Assuming that a pixel is represented by an unsigned 16-bit value, if we accumulate into an unsigned 32-bit variable we can sum up to 65536 images before risking an overflow. That should be plenty. :rolleyes:
At the end we divide by N.
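(The same idea as a sketch in integers: 16-bit pixels summed into an unsigned 32-bit accumulator, which indeed leaves room for 2^16 = 65536 frames before overflow; the random frames are stand-in data.)

Code:
import numpy as np

N = 16
frames = [np.random.randint(0, 2**16, size=(64, 64), dtype=np.uint16)
          for _ in range(N)]  # stand-in mosaics

acc = np.zeros_like(frames[0], dtype=np.uint32)
for f in frames:
    acc += f  # uint16 values are promoted into the uint32 accumulator

avg = (acc // N).astype(np.uint16)  # divide once at the end, back to 16 bits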
 

ErikKaffehr

Well-known member
Some reflections....

Hi,

We can pretty much see the potential benefits of multishot by looking at the Sony A7rIV, which has a sensor design that is probably very close to the GFX 100's: same-generation Sony BSI with the same pixel size.

DPReview has some comparable test shots.
[Attached: Moire.jpg]
These images are not sharpened, as far as I recall, but they show that multishot reduces color moiré.

[Attached: CDetail2.jpg]
Looking at real world detail, the benefits may be small.

[Attached: Capture.jpg]
Looking at the USAF targets in the image may indicate that the 16-image multishot resolves more detail.

But folks more knowledgeable than me essentially say that is (mostly) not the case.

Anyway, I calculated some MTF data from the two samples:
[Attached: MS.jpg]

The top chart is MTF and the bottom is the edge profile. My take is (or may be):
  • The MTF curves are pretty close.
  • There is a small resolution advantage to the 16X multishot image at 20% MTF.
  • The single-shot image has significant MTF at Nyquist, which would cause aliasing.
  • The ~16% resolution advantage of the 16X multishot at 20% MTF may correspond to the simpler demosaicing, as we have RGBG values for each pixel.

Finding out more may require some study at the pixel level.

Using 16 exposures means that we have 16x more data, which would double the signal-to-noise ratio.

Best regards
Erik
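(A back-of-envelope check on that last claim, as a sketch: it works out if the 16-shot file has 4x the pixel count of a single frame, as with the A7rIV's 240MP multishot output, since noise averages down with the square root of the number of samples per output pixel.)

Code:
frames = 16
resolution_gain = 4                               # 16-shot output has 4x the pixels
samples_per_pixel = frames / resolution_gain      # = 4 samples per output pixel
snr_gain = samples_per_pixel ** 0.5               # = 2.0, i.e. doubled SNR
print(snr_gain)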
 

mristuccia

Well-known member
Just reading some tests made by Jim Kasson on this topic, especially the following two:

https://blog.kasson.com/a7riii/sony-a7riii-pixel-shift-real-world-false-colors-and-dynamic-range/

https://blog.kasson.com/a7riv/pixel-shift-in-the-sony-a7riv/

By going through the second one I happened to read about the ability of LR's "Enhance Details" to remove color artefacts, an effect similar to what we could obtain by means of pixel shift.

I was curious and tried on a sample image.

Here is the full test image:



And here is a comparison of a 100% detail crop without (left) and with LR enhanced details (right):



Look at the fence and at the water. Fewer color artefacts. That's quite interesting...
Unfortunately this is a Hasselblad image (Cambo WDS, CFV-50c and Planar 2.8/80, 100 ISO, 3mm lens rise) and I can't compare results with Capture One...
 

Shashin

Well-known member
I am not sure a 100% monitor view is actually significant. I would make two large prints and hang them side by side and see if there is a real perceptual difference. I suspect not.
 

dougpeterson

Workshop Member
Just reading some tests made by Jim Kasson on this topic, especially the following two:

https://blog.kasson.com/a7riii/sony-a7riii-pixel-shift-real-world-false-colors-and-dynamic-range/

https://blog.kasson.com/a7riv/pixel-shift-in-the-sony-a7riv/

By going through the second one I happened to read about the ability of LR's "Enhance Details" to remove color artefacts, an effect similar to what we could obtain by means of pixel shift.
[...]
Look at the fence and at the water. Fewer color artefacts. That's quite interesting...
Unfortunately this is a Hasselblad image (Cambo WDS, CFV-50c and Planar 2.8/80, 100 ISO, 3mm lens rise) and I can't compare results with Capture One...
I was very excited by "Enhance Detail" when it first came out.

But my testing showed it basically brought LR on par-ish with Capture One's native raw processing, at the added cost of workflow speed, workflow complexity, larger storage requirements, and the requirement to use Lightroom.

Really much better (in my highly biased opinion) to just use Capture One.
 

dougpeterson

Workshop Member
I am not sure a 100% monitor view is actually significant. I would make two large prints and hang them side by side and see if there is a real perceptual difference. I suspect not.
There definitely is. Color noise artifacts (aliasing) show up to my eye very quickly, especially from LR, which is very prone to them.
 

mristuccia

Well-known member
I was very excited by "Enhance Detail" when it first came out.

But my testing showed it basically brought LR on par-ish with Capture One's native raw processing, at the added cost of workflow speed, workflow complexity, larger storage requirements, and the requirement to use Lightroom.

Really much better (in my highly biased opinion) to just use Capture One.
Hi Doug,

hope you don't mind if I just point out that there are people out there who use Hasselblad and unfortunately cannot benefit from that marvellous C1 magic... ;)
 

mristuccia

Well-known member
I was very excited by "Enhance Detail" when it first came out.

But my testing showed it basically brought LR on par-ish with Capture One's native raw processing, at the added cost of workflow speed, workflow complexity, larger storage requirements, and the requirement to use Lightroom.

Really much better (in my highly biased opinion) to just use Capture One.
By the way, I tried with Iridient RAW Developer and with RawTherapee, and the color artefacts on the fence are still there.
I'd be curious to try it out with C1; unfortunately I can't.
 