The GetDPI Photography Forum


IQ4150 Frame Averaging Comparisons

Craig Stocks

Well-known member
There are a number of approaches for astrophotographers.

For deep sky images from a telescope most use either Deep Sky Stacker or PixInsight. Both can process RAW files along with dark, flat and bias frames. They can also automatically align the frames as there is frequently some drift over time, even with a good tracking mount.

For widefield astro-landscape photos Starry Landscape Stacker (Mac only) or Sequator (Windows only) are common. Again they can read RAW files, align based on star patterns and handle additional files to help manage noise and vignetting. They can also separate the sky portion from the foreground and process each separately (the stars move and need to be aligned, the foreground doesn’t).

The various dedicated astro-stacking tools offer the advantage of stacking RAW files rather than demosaiced photos. They generally offer quite a few processing and averaging options that I’m not even smart enough to list, much less describe.

And of course you can use Photoshop, but it only works on files after they’re processed from RAW. The old school way is to adjust layer opacity once the frames are loaded into the layer stack: the bottom layer is at 100%, layer 2 at 50%, layer 3 at 33%, layer 4 at 25% and so on. The result is the average of all layers. The modern approach is to load the layers into a smart object and then choose a smart object stacking mode of Mean or Median. Mathematically the Mean stack mode is identical to adjusting layer opacity.
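For anyone who wants to check that the opacity trick really is a plain average, here is a minimal NumPy sketch of the arithmetic Photoshop performs (just the math, obviously not Photoshop's actual code):

[CODE=python]
# Compositing layer k at opacity 1/k, from the bottom of the stack up,
# reproduces the arithmetic mean of all layers.
import numpy as np

rng = np.random.default_rng(0)
layers = rng.random((4, 8, 8))       # four fake "frames", 8x8 pixels each

result = layers[0].copy()            # bottom layer at 100%
for k, layer in enumerate(layers[1:], start=2):
    opacity = 1.0 / k                # layer 2 -> 50%, layer 3 -> 33%, ...
    result = opacity * layer + (1 - opacity) * result

assert np.allclose(result, layers.mean(axis=0))   # same as Mean stack mode
[/CODE]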

For noise reduction I’ve found comparable results from frame averaging in camera versus processing in C1 and then stacking the “finished” frames in PS. Stacking in PS is more hassle but does offer some flexibility advantages. If you’re using averaging in lieu of ND filters then doing it in camera provides the advantage of continuous capture, whereas individual frames will always have a time gap.

For star trails I simply load the frames into the layer stack and change all of their blending modes to Lighten, no smart object needed.
 

Shashin

Well-known member
Frame averaging is very common in scientific imaging. I ran a number of microscopes that could use it. Most commonly you used it when exposures were really marginal, usually some variation of fluorescence, either epi-fluorescence or confocal. It was one method of separating detail from noise, particularly with low-resolution chips. As Craig notes, there are really great reasons to use it for astrophotography and I have seen stunning results. For general photography with relatively high light levels and high resolution sensors, the benefits for prints are less clear; folks are posting some extreme examples at 100% in this thread just to see it. Having printed a lot of images on a 42" printer and knowing how viewers parse images, I am not sure how much return the technique will have practically speaking. Still, if you have it, use it. There may also be creative applications...
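The statistics behind all of this: averaging N frames of independent noise reduces the noise standard deviation by the square root of N. A tiny simulation (made-up numbers, purely to illustrate the scaling):

[CODE=python]
# Averaging N frames with independent noise cuts the noise std by sqrt(N).
import numpy as np

rng = np.random.default_rng(1)
signal = 0.5                          # constant "scene" value
for n in (1, 4, 16, 64):
    frames = signal + rng.normal(0, 0.1, size=(n, 100_000))
    avg = frames.mean(axis=0)
    print(n, round(avg.std(), 4))     # ~0.1, 0.05, 0.025, 0.0125
[/CODE]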
 

dougpeterson

Workshop Member
Movement of any kind is totally blurred in P1 Frame Averaging, even a slight breeze can and will disrupt the shot, and the effect is not the same as with an ND over the same time frame. The blur is most times chopped, even with the frame averaging set to no gapping. Frame Averaging in P1 can't handle any subject movement, at least from what I have seen so far. You need 100% no wind.
Frame averaging is a form of long exposure. It handles motion the same way as a long exposure if either:
- you use a shutter speed that results in gapless capture
- you use a shutter speed that does not result in gapless capture but you use enough frames that the gaps average out

It's expected that, in order to get to (or near enough to) gapless capture, you might need a 2-3 stop ND filter or a polarizer. If you use no filters it's likely you'll need a couple dozen frames at minimum to achieve that gapless (non-staccato) look.
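Here is a toy model of why more frames tame the staccato look (the 50% duty cycle and constant-speed motion are assumptions for illustration, not Phase One's actual timing): a point crosses the frame during the capture, and we measure how much of its streak actually gets painted.

[CODE=python]
# A point moves at constant speed across a 1-D sensor during the capture.
# A gapless exposure paints a continuous streak; gapped frames paint dashes.
# More (shorter) frames in the same total time -> finer dashes -> the streak
# looks continuous again once the dashes drop below pixel scale.
import numpy as np

t = np.linspace(0, 1, 100_000)           # normalized capture time
position = (t * 100).astype(int)         # point crosses 100 pixel columns

def streak_coverage(n_frames, duty=0.5):
    frame_len = len(t) // n_frames
    open_shutter = (np.arange(len(t)) % frame_len) < int(frame_len * duty)
    painted = np.bincount(position[open_shutter], minlength=101)
    return (painted > 0).mean()          # fraction of the streak painted

for n in (4, 24, 200):
    print(n, round(streak_coverage(n), 2))   # rises toward 1.0 (gapless)
[/CODE]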

In your case, since you dislike the aesthetic of subject movement in the breeze (this is not a judgement on my end; just clarifying your preference for others) it would only be useful in highly static scenes (e.g. desert rock formations). For others who were already purposefully doing long exposures (and in some cases going to some lengths to get them, such as testing multiple brands of 20 stop ND filters and gaffer taping their bodies) this represents a huge improvement in that workflow.
 

dougpeterson

Workshop Member
Shashin said: For general photography with relatively high light levels and high resolution sensors, the benefits for prints are less clear …
The single-shot raw fidelity of a Phase One IQ4 150mp was already best-in-class, so it stands to reason that the decreased noise level from frame averaging will be most noticeable on very large prints, very high res digital use (e.g. a 4k monitor with pinchable zoom) or heavy crops, and even then only in scenes with high/challenging dynamic range.

Of course I just described a good chunk of the use cases for our typical Phase One user :).
 

Landscapelover

Senior Subscriber Member
Extreme long exposure with an ND filter usually is fine in windy conditions. If this Frame Averaging only works in "no wind" conditions, it's not useful for landscape photography.
Just a short question, as a plain Phase One user. I really need straight answers.
Can I replace 10-15-stop ND with this Frame Averaging for landscape photography?
Is it real or just hype?
I've paid almost $50,000 for this camera. Sony A7R IV costs ~$3,500. Fuji GFX 100 costs $10,000. The MINT Fuji GF 32-64 + 110-200 + 23mm cost only $4,000.
I am irritated and confused about this issue now.
I expect the best from the IQ4 150, not over-promising and under-delivering.
Thank you for your help. These questions are for anyone.

Best regards,

Pramote
 

dougpeterson

Workshop Member
Craig Stocks said: There are a number of approaches for astrophotographers. …
A great summary of the pros/cons/limitations!
 

Paul2660

Well-known member
Landscapelover said: Can I replace 10-15-stop ND with this Frame Averaging for landscape photography? Is it real or just hype? …
I may be confused, but with an ND and long exposure, won't wind movement cause the same issue? You are attempting a single long exposure, say 2 minutes, and anything moving will be a blur, especially leaves and branches of trees. In a cityscape, car lights will be streaked, etc. Clouds will be turned into a flow of movement, again very dependent on the wind/cloud movement/time etc.

You should be able to get a very similar effect with Frame Averaging. The key, as has been pointed out, is to get as close to gapless as possible. And to figure that out, the only way I know is to pick a total exposure time and then play with the other settings (shutter speed/ISO/single exposure time) and see if you can get to gapless. Right now the longest single exposure is 2.0 seconds and the shortest is 0.25 (1/4) of a second (you have to have the camera set to 14 bit; in 16 bit the shortest is 0.9).
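A trivial planning helper makes that dance concrete (the helper itself is hypothetical; the per-frame limits are the ones quoted above, 0.25-2.0 s at 14 bit and 0.9-2.0 s at 16 bit):

[CODE=python]
# Hypothetical helper: how many frames cover a target total exposure time,
# given the per-frame shutter limits quoted in this thread.
def plan_frame_average(total_s: float, shutter_s: float, bits: int = 14) -> int:
    lo = 0.25 if bits == 14 else 0.9
    if not (lo <= shutter_s <= 2.0):
        raise ValueError(f"{shutter_s}s is outside the range for {bits}-bit")
    return round(total_s / shutter_s)

print(plan_frame_average(120, 1.6))   # 75 frames for a 2-minute equivalent
[/CODE]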

From my images, the difference in noise/detail gain is pretty significant, and is better than a single long exposure even at base ISO with an ND 10x etc. As Doug pointed out, the key is to get as gapless as possible so you don't get the staccato look in leaves (which would not happen with a single long exposure). Shadow details are improved significantly enough that it makes sense (at least to me) to always fire off a few frames.

The only thing I have noticed over and over is that frame averaged images can take on a creamy look, very smooth in areas where you might still want texture. Sometimes this is worse than others, and it's very easy to get the grain back in C1.

There are many places outside of my state where I would start with Frame Averaging immediately, due to the overall static nature of the subject. It will be interesting to see what FA does for a sunrise/sunset, say a 2 or 5 minute exposure with the movement of the sun.

If any type of alignment process/step can be implemented in the future, P1 will have created the ultimate camera solution, at least for my type of landscape work. I doubt that can be done, as all the combining is happening in the back and this would require some pretty heavy lifting, but after all the back is running Linux, so maybe the next generation 200MP back will have this.

Paul C
 

dougpeterson

Workshop Member
Paul2660 said: The only thing I have noticed over and over is that frame averaged images can take on a creamy look, very smooth in areas where you might still want texture. …
The extremely smooth, very-very-very low noise look of frame averaging really threw me at first. Until you look very carefully at the subject matter to compare what actual real-world detail is recorded, it can almost make the image look soft or out of focus. This isn't that unexpected; grain/noise naturally makes an image look sharper, so reducing grain/noise can make an image look less sharp.

I wouldn't be surprised if most people find they sharpen frame-averaged raws a bit more than single-capture raws, not because they contain less detail, but because they contain (virtually) no noise that would otherwise enhance the sense of detail.
 

dougpeterson

Workshop Member
Landscapelover said: Can I replace 10-15-stop ND with this Frame Averaging for landscape photography? Is it real or just hype?
Of course the best way to answer this is for you to do your own testing. But for most people I think the answer is yes, frame averaging will replace your 10-15-stop ND for landscape photography, at least a lot of the time. You may still want a 2-3 stop filter or polarizer in some cases to help get the shutter speed longer and either gapless or close to gapless.

Note that Paul's comments are based on and specific to his desire to have imagery that has no motion blur. Either high-ND-filter long exposure or frame-averaging long exposures will render motion with blur.

If you have something that is moving (e.g. swaying tree branches) and want to render it sharply, neither a high-ND-filter exposure nor frame averaging is an acceptable approach, at least by themselves. In some cases a single fast exposure could be blended in Photoshop with a longer exposure (whether via high-ND-filter or frame averaging).
 

Craig Stocks

Well-known member
I really hope Phase One can extend the slowest shutter speed. Right now if you want gapless capture you have to live in a very narrow range from around 1/2 second to less than 2 seconds (2 seconds is the max but tends to go into an infinite loop so it isn't really usable). Sensor read speeds probably won't allow faster shutter speeds but I don't understand why it needs to be limited to 2 seconds. Sunset, blue hour and nightscapes frequently go well beyond 2 second shutter speeds and would be common use cases for frame averaging.
 

dougpeterson

Workshop Member
Craig Stocks said: I really hope Phase One can extend the slowest shutter speed. …
Think about a scene that needs 8 seconds at ISO 50 and you can reasonably spend 2 minutes capturing (reasonably meaning the stability you think your tripod can provide, or before you get bored, or before the cost-benefit of moving on to other scenes kicks in, or before the light changes, or before car headlights are likely to come into the scene etc).

Compare the following:
A) 15 frames at 8 seconds each at ISO 50 (not possible with current firmware)
B) 60 frames at 2 seconds each at ISO 200 (is possible with current firmware)

Each frame of (A) will be lower in noise due to the lower ISO but there are more frames of (B) to average.

I suspect P1 R&D tested both and found the end result of (B) ends up either ahead or roughly the same. The math would say (A) has a one stop advantage, but there may be other factors like sensor temperature, sensor long-exposure behavior and darkframe behavior that offset that advantage.

That said, if for no other reason than simplicity (i.e. not having to change exposure settings between single captures and frame averaging), I also hope this range is extended. You're right that I don't think it can be made shorter (for gapless captures) due to sensor-hardware constraints.
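One hedged way to make the (A) vs (B) comparison concrete is a toy shot-noise plus read-noise model. The electron counts and read-noise figures below are placeholders, not measured IQ4 values; in a purely shot-noise-limited scene the two scenarios come out nearly identical, and the real difference hinges on per-frame read noise, dark current and so on:

[CODE=python]
# Toy SNR model: shot noise scales with sqrt(photons); read noise combines
# in quadrature per frame; averaging N frames multiplies SNR by sqrt(N).
import math

def stacked_snr(photons_per_frame, read_noise_e, n_frames):
    shot_noise = math.sqrt(photons_per_frame)
    per_frame_snr = photons_per_frame / math.hypot(shot_noise, read_noise_e)
    return per_frame_snr * math.sqrt(n_frames)

# (A) gets 4x the photons per frame of (B): 8 s vs 2 s, same aperture/scene.
print("A:", round(stacked_snr(4000, 3.0, 15), 1))   # ~244.7
print("B:", round(stacked_snr(1000, 2.5, 60), 1))   # ~244.2
[/CODE]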
 

Craig Stocks

Well-known member
dougpeterson said: Think about a scene that needs 8 seconds at ISO 50 and you can reasonably spend 2 minutes capturing … I suspect P1 R&D tested both and found the end result of (B) ends up either ahead or roughly the same. …
I tried it with my test scene. Of course, with current firmware and the 2 second limit I had to increase ISO to keep the shutter speed below 2 seconds (since the camera goes into an infinite loop at 2 seconds). I shot:

- 8 seconds, ISO 50, single frame
- 1.6 seconds, ISO 250, single frame
- 1.6 seconds, ISO 250, average of 8 frames
- 1.6 seconds, ISO 250, average of 20 frames

For normal processing all but the single ISO 250 frame were usable. When pushed aggressively to highlight shadow noise the average of 8 was a little worse than the single ISO 50 frame and the average of 20 frames was a little cleaner.

In terms of camera workflow though I'd still much rather keep the same 8 second shutter speed and simply activate the FA tool. Otherwise there is a short dance as you adjust ISO up and shutter speed down to get below 2 seconds. That also makes it rather impractical to go back and forth between single frames and FA.
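As a rough cross-check of those results against the square-root rule (shot noise only; it deliberately ignores read noise, which turns out to be the interesting part):

[CODE=python]
# SNR ratio of each stack vs the single 8 s frame, in a purely
# shot-noise-limited world: sqrt(ratio of total integration times).
import math

single_s = 8.0                        # the 8 s, ISO 50 single frame
for n in (8, 20):
    total_s = n * 1.6                 # total integration time of the stack
    print(n, "frames:", round(math.sqrt(total_s / single_s), 2))
# -> 1.26x and 2.0x. The 8-frame stack measuring slightly *worse* suggests
#    per-frame read noise at ISO 250 isn't negligible in practice.
[/CODE]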
 

RLB

Member
dougpeterson said: Compare the following: A) 15 frames at 8 seconds each at ISO 50; B) 60 frames at 2 seconds each at ISO 200 … The math would say (A) has a one stop advantage, but there may be other factors … that offset that advantage. …

Seems like another scenario where the lowest ISO may not achieve the best result, and more rethinking of how we would have previously approached the scene using single capture mode. With FA, ISO is (to an extent) less of a noise-quality issue, as the FA process itself suppresses noise. Much in the same way everyone is concerned that you can't set FA to 16 bit only, this is no longer the same process; once you enter the FA Twilight Zone realm there is a different set of rules. This is going to take some unlearning and relearning on photographers' part. If P1 could do an entire tech piece on this it would be most helpful for IQ4 owners, rather than have us all guess our way through it.

R
 

dougpeterson

Workshop Member
RLB said: Seems like another scenario where the lowest ISO may not achieve the best result … If P1 could do an entire tech piece on this it would be most helpful for IQ4 owners, rather than have us all guess our way through it.
Agreed. In our first IQ4 150mp Frame Averaging article we tried to get at this by saying that the Photographic Triangle is now more of a Photographic Pyramid, since this is a new dimension. But building on your Twilight Zone theme, maybe the analogy is more the "Upside Down" dimension from Stranger Things :).

I agree we could use more information from Phase One, but this is the role that good dealers and forums have played in the past. I'm honestly a bit less interested in how Phase One thinks this tool should be used and more interested in how clients use it and what our own testing at DT shows. That said, more points of data/reference are always welcome, especially from the engineers who code all this stuff into existence!
 

dougpeterson

Workshop Member
Craig Stocks said: I tried it with my test scene. … When pushed aggressively to highlight shadow noise the average of 8 was a little worse than the single ISO 50 frame and the average of 20 frames was a little cleaner.
That's great information. Though it may not hold once you get to 8 seconds; hard to know since they lock us out of testing it :).

Craig Stocks said: In terms of camera workflow though I'd still much rather keep the same 8 second shutter speed and simply activate the FA tool. …
Agree 100%.
 

Craig Stocks

Well-known member
dougpeterson said: That's great information. Though it may not hold once you get to 8 seconds; hard to know since they lock us out of testing it :).
Actually we can simulate a test by taking multiple frames and averaging in Photoshop, which I determined earlier gives nearly identical results. In this case, stacking nine 8-second frames yields a much cleaner image, though to be fair, it was already pretty darn good. I really have to artificially push the exposure and contrast to bring out the grain.
 


RLB

Member
dougpeterson said: I agree we could use more information from Phase One, but this is the role that good dealers and forums have played in the past. …

While I agree with this to an extent, P1, who designed and built the IQ4, absolutely has thoughtful intent behind how FA should function. It should not be a cat and mouse game for the dealer or end user to figure out how FA was intended to work. Sure, as artists we will come up with uses P1 may not have considered, but what I'm asking for is the basis of operation.

R
 

Boinger

Active member
I think there is a lot of fundamental misunderstanding going on as to what frame averaging does.

It does nothing more or less than taking, say, 10 pictures and averaging them in Photoshop, but does so at the raw capture level via data from the sensor.


Anytime you shoot a long exposure that captures movement, whether through use of an ND or not, you can use frame averaging for that.

The "windy" scenario will only matter when you are trying to capture a scene with a relatively short ish exposure. So that is assuming you have plenty of light and the Noise advantage will really be minimal in terms of simply capturing the shot as intended.

Movement in the scene is limited by shutter speed, so if you need something like 1/250 to stop wind movement, you need to shoot at that speed, and in that scenario you will not be able to use frame averaging, as there will be movement between frames. But in my opinion you really don't need frame averaging in such light conditions anyway, as there is plenty of light. If you really needed noise performance for the shadows you could take 2 shots yourself, average them manually, and probably still gain a lot of noise benefit, and you could average only selected areas.

Fundamentally, frame averaging in the scenarios where it is useful (low light / boosting shadows) is no different from using a long shutter speed: if there is movement in the scene it will be recorded, just as with a long shutter.

Edit: one final note. Noise adds to perceived detail; that is why people sometimes add noise to make an image look sharper. When an FA image looks too smooth, it is because it lacks the perceived detail added by noise. A simple experiment is to add grain to an FA shot and you will see it look sharper.

An interesting thing is you could probably sharpen an FA shot a lot more; with no noise you would get far less artifacting.
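Boinger's grain experiment in code form, for anyone who wants to try the effect synthetically (illustrative only; C1's grain tool is far more sophisticated than plain Gaussian noise):

[CODE=python]
# Layer synthetic Gaussian grain onto a smooth image. Side by side, the
# grainy version reads as "sharper" even though it carries no extra detail.
import numpy as np

rng = np.random.default_rng(2)
smooth = np.full((512, 512), 0.5, dtype=np.float32)  # stand-in for an FA file

grain = rng.normal(0.0, 0.02, smooth.shape).astype(np.float32)
grainy = np.clip(smooth + grain, 0.0, 1.0)           # the texture is pure noise
[/CODE]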
 

RLB

Member
Boinger said: I think there is a lot of fundamental misunderstanding going on as to what frame averaging does. It does nothing more or less than taking, say, 10 pictures and averaging them in Photoshop, but does so at the raw capture level via data from the sensor. …

While I agree with most of your post I disagree with a few points.

I believe FA is doing much more than "taking 10 pictures and averaging them in PS". I think the point is that it's a workflow we are familiar with, but I beg to differ that that's exactly what's happening behind the curtain in the IQ4 now, or more importantly what is planned for the future of this significant feature. I don't think we should oversimplify what FA is doing; I feel P1 needs to give us all, users and dealers, far more insight into this feature so we can exploit it properly, or as it may be, suggest changes to future versions of it.

I think it's great that folks are willing to spend so much effort testing this... but P1, where are you? Help us understand this feature!

Noise: the addition of noise in post can do two things. It can hide digital noise (the oatmeal squishy kind) with smaller, sharper, fine-grain noise, or it can, as you suggest, add to the "perceived sharpness" of the image. If the grain pattern looks sharp, our brains assume the image is as well. We often add specific kinds of noise when making large format output for these reasons, although the type, size and grain structure vary based on many factors. It's easy to overdo this, but as with spices, the right amount makes it perfect.

R
 

Boinger

Active member
RLB said: I believe FA is doing much more than "taking 10 pictures and averaging them in PS". …
That is exactly what it is doing, but via sensor sampling instead of post-capture processing.

My theory is that for a given pixel it samples that pixel once per frame for the chosen number of frames, then averages the data and writes that one averaged value into the raw file for that specific pixel.

This is done across the whole sensor, as it is easier to do it this way than to store 10 individual 150MP captures and then average them in camera. The back doesn't have the processing power for that.
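If that theory is right, the obvious implementation is a streaming (incremental) mean: each new frame updates a single accumulator, so the back never holds more than one frame plus the running result. To be clear, this is speculation about the IQ4 internals, not documented Phase One behavior; the sketch below just shows the arithmetic works:

[CODE=python]
# Incremental per-pixel mean: after k frames the buffer holds the exact
# average of frames 1..k, with only one frame of extra memory ever needed.
import numpy as np

rng = np.random.default_rng(3)
height, width, n_frames = 4, 4, 10     # tiny stand-in for 150MP frames

running_mean = np.zeros((height, width))
check_sum = np.zeros((height, width))
for k in range(1, n_frames + 1):
    frame = rng.random((height, width))          # one sensor readout
    running_mean += (frame - running_mean) / k   # incremental mean update
    check_sum += frame                           # reference for the check

assert np.allclose(running_mean, check_sum / n_frames)
[/CODE]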
 