The GetDPI Photography Forum


IQ4150 Frame Averaging Comparisons

RLB

Member
That is exactly what it is doing, but via the sensor sampling instead of processing it post capture.

My theory is that, for each pixel, it samples that pixel once per frame for the chosen number of frames, averages those samples, and writes that single averaged value into the raw file.

This is done across the whole sensor because it is much easier than taking, say, ten individual 150MP captures and then averaging them in camera. The back doesn't have the processing power for that.
I mostly agree with your theory, but it would be nice if P1 would just tell us, and then explain the reasoning behind the settings options. I don't think it's unreasonable to expect this at the price point.

Doing this at the pixel level, as you suggest, makes far more sense considering the monster size of the individual files.
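Just to make the idea concrete (this is only a sketch of the theory above, not anything Phase One has confirmed, and all the numbers are made up), per-pixel averaging needs nothing more than a running sum the size of one frame:

```python
import numpy as np

def frame_average(frames):
    """Average raw frames pixel by pixel, keeping only a running sum
    (one frame-sized buffer) no matter how many frames are captured."""
    running_sum = None
    count = 0
    for frame in frames:
        f = frame.astype(np.float64)
        running_sum = f if running_sum is None else running_sum + f
        count += 1
    return (running_sum / count).astype(np.uint16)

# Ten simulated 16-bit captures of the same static scene, each with noise added
rng = np.random.default_rng(0)
scene = rng.integers(500, 4000, size=(100, 150)).astype(np.float64)
frames = [np.clip(scene + rng.normal(0, 50, scene.shape), 0, 65535) for _ in range(10)]
result = frame_average(frames)   # noise standard deviation drops by roughly sqrt(10)
```

The appeal of doing it this way is exactly what you describe: the back never has to hold ten full 150MP raws in memory at once.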

I appreciate, as I suspect others do as well, all of the testing done and information shared.

R
 

Craig Stocks

Well-known member
We know wind-blown leaves and tree branches will show motion blur when using a long exposure, and it doesn't matter how you achieve the long exposure: either an ND filter or frame averaging will show motion blur. We also know that frame averaging with relatively short shutter speeds can create a stroboscopic effect.

So, how bad is it? It's fairly windy here today so I did a quick test. I shot a single frame at ISO 50, 1/200th @ f/11 as a baseline. I then kept doubling the number of frames, 2, 4, 8, 16, and so on up to 2 minutes. To my eye, even the two-minute frame average shows some stroboscopic effects in the blurred leaves and branches, but the clouds render just fine.
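To see why the averaged leaves look stroboscopic rather than smoothly blurred, here's a rough simulation (the frame count, the motion and the timing are invented, not taken from my test): averaging many short samples of a swaying point leaves discrete ghosts, while a continuous exposure sweeps out a smooth streak.

```python
import numpy as np

def position(t):
    """Pixel position of a leaf tip swaying sinusoidally (period ~7 s)."""
    return int(100 + 60 * np.sin(2 * np.pi * t / 7.0))

width, n_frames = 200, 120           # 120 short frames averaged, hypothetical

# Frame averaging: each frame is a short (~1/200 s) sample taken once per second.
averaged = np.zeros(width)
for i in range(n_frames):
    frame = np.zeros(width)
    frame[position(float(i))] = 1.0
    averaged += frame
averaged /= n_frames                 # isolated ghosts -> stroboscopic look

# A true long exposure integrates the motion continuously.
long_exposure = np.zeros(width)
for t in np.arange(0.0, n_frames, 0.01):
    long_exposure[position(t)] += 0.01
long_exposure /= n_frames            # smooth streak across the swept range

print("pixels lit by averaging:    ", np.count_nonzero(averaged))
print("pixels lit by long exposure:", np.count_nonzero(long_exposure))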

Depending on the distance to the trees, the size of the print and your expectations they may work fine in print - or they may not. I would expect features like automotive light trails that don't wave back and forth to render much worse. (But, if it's dark enough to photograph light trails from cars you probably don't need an ND filter or frame averaging to create a long exposure.)

I don't have my 10-stop ND filter so I can't do a direct comparison.

The samples here show single frame, 2 frames, 16 frames and 2 minutes (around 150 frames as I recall).
 

Attachments

etrump

Well-known member
While Topaz DeNoise AI does a great job, I would hardly call it a replacement for a noiseless capture. Even with low-ISO images, areas of fine detail that contain noise can get muddy when you try to clean up noise from shadow recovery. I love it on skies and large elements though - incredible.

Dave

Do you use the Topaz Denoise AI software?

Just curious: if you run the denoise on the non-frame-averaged shot, will the end result be similar? Topaz has really nailed noise reduction without loss of detail.

Paul C
 

etrump

Well-known member
I agree this is a curious limitation. Of course you can adjust the ISO to get the correct exposure. Perhaps the effective SNR from averaging four 2-second exposures at ISO 200 is the same as, or better than, from one 8-second exposure at ISO 50.

I really hope Phase One can extend the slowest shutter speed. Right now if you want gapless capture you have to live in a very narrow range from around 1/2 second to less than 2 seconds (2 seconds is the maximum, but it tends to go into an infinite loop, so it isn't really usable). Sensor read speeds probably won't allow faster shutter speeds, but I don't understand why it needs to be limited to 2 seconds. Sunset, blue hour and nightscapes frequently go well beyond 2-second shutter speeds and would be common use cases for frame averaging.
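Here's a back-of-the-envelope noise model for that comparison (the electron counts and read-noise figures below are assumptions, not measured IQ4 values): the total light collected is the same either way, so the only real difference is how many doses of read noise get stacked up, and read noise is often lower at higher gain.

```python
import math

# Hypothetical numbers: same aperture, 8 s of light total in both cases.
signal_e = 20000.0        # total photoelectrons for the full exposure (assumed)
read_noise_iso50 = 4.0    # e- per read at ISO 50 (assumed)
read_noise_iso200 = 2.5   # e- per read at ISO 200 (assumed, often lower at higher gain)

# One 8 s frame at ISO 50: shot noise plus a single read.
snr_single = signal_e / math.sqrt(signal_e + read_noise_iso50**2)

# Average of four 2 s frames at ISO 200: same total signal, four reads.
n = 4
snr_avg = signal_e / math.sqrt(signal_e + n * read_noise_iso200**2)

print(f"single 8 s @ ISO 50 : SNR ~ {snr_single:.1f}")
print(f"4 x 2 s  @ ISO 200  : SNR ~ {snr_avg:.1f}")
```

With numbers like these both cases are essentially shot-noise limited and come out nearly identical, which is consistent with the hunch that the averaged version should be no worse.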
 

etrump

Well-known member
This statement confuses me. In a static scene, frame averaging doesn't smooth out texture that is in the subject. Why would you want noise when you can have true detail in the subject?
The only thing I have noticed over and over is that the frame-averaged images can take on a creamy look, very smooth in areas where you might still want texture. It isn't always equally bad, and it's very easy to get the grain back in C1.

Paul C
 

Paul2660

Well-known member
This statement confuses me. In a static scene, frame averaging doesn't smooth out texture that is in the subject. Why would you want noise when you can have true detail in the subject?
Hi Ed.

To my eyes it does. It might be the lack of noise, and thus less perceived detail. It's very easy for me to see in leaves.

Edit: I just went back over some images. It's the lack of noise that is throwing me. Files are amazingly cleaner with just a 2-frame average.

It's a great step forward and something I will always shoot no matter what the scene, as many things can be blended back.

Paul C
 
Last edited:

Landscapelover

Senior Subscriber Member
Thank you for your insights!

Is P1 "Frame averaging" any different from Sony "Smooth reflection" for Sony A7R II (discontinued for A7R III)?

Pramote
 

etrump

Well-known member
I agree, it's something totally new to have a noise-free image, and it requires a new mindset.

The thing that blew me away is how cleanly the files upsize. Even ISO 50 images get noisy with upsizing, which is something I almost always have to do.

It's an easy addition to my normal workflow to finish up with a couple of frame-average exposures before recomposing.

The other thing that is very useful is the ability to stop a long exposure midstream. I don't know how much time I've wasted because someone walked through a scene only to stop in the middle and take a phone snap of my scene.

Totally useful in almost every situation. It would be nice to have an ISO 25, even if it was no cleaner than ISO 50.

Hi Ed.

To my eyes it does. It might be the lack of noise, and thus less perceived detail. It's very easy for me to see in leaves.

Edit: I just went back over some images. It's the lack of noise that is throwing me. Files are amazingly cleaner with just a 2-frame average.

It's a great step forward and something I will always shoot no matter what the scene, as many things can be blended back.

Paul C
 

Craig Stocks

Well-known member
Thank you for your insights!

Is P1 "Frame averaging" any different from Sony "Smooth reflection" for Sony A7R II (discontinued for A7R III)?

Pramote
The two seem fundamentally the same with some differences in implementation. Both seem to create an image that is the average of numerous frames calculated directly from sensor data. It's useful either for simulating an ND filter or for reducing noise in shadows.

The Sony Smooth Reflection (SR) app produces both a RAW and JPEG result.

The SR interface provides some settings for specific scenarios as well as a custom setting where you can choose the number of frames, but the choices are in rather large increments.

SR doesn't use an electronic shutter so can't do gapless captures.

SR doesn't seem to have a shutter speed limitation other than the 30 second limit of the Sony camera.

The SR app costs $4.99 whereas Phase One's Frame Averaging tool is free (well, free with the purchase of a $50,000 camera).

Note that the Play Memories apps (like Smooth Reflections) are not supported by newer Sony A7 and A9 cameras. It is still supported on the a6xxx series cameras.

I toyed with it on my Sony cameras a few years ago and promptly forgot all about it. The current discussion has reminded me that it's available, and I envision using it more frequently now. I have it installed on both my A7R2 and an older a6000. In round numbers, the A7R2 approaches IQ4150 noise levels with double the frames. The a6000 improves dramatically with averaging but never gets very close.
 

vjbelle

Well-known member
Hi Ed.

To my eyes it does. Might be lack of noise thus less perceived details. Very easy for me to see in leaves.

Edit. Just went back over some images. It’s the lack of noise that is throwing me. Files are amazing cleaner with just a 2 frame average.

It’s a great step forwards and something I will alway shoot no matter what the scene as many things can be blended back.

Paul C
I agree that the Frame Sampling images look slightly softer than a long exposure of the same scene. I also agree that it is the lack of 'any' noise that fools my eyes a little.

I also wanted to know how a Frame Sampling image would cope with some slight camera movement. I took a long exposure of 40 seconds and a frame sampling image (2.0s, 40 seconds, 20 images) of the same scene. The frame sampling image was continuous.

I was fairly certain that slight camera movement would have no effect on the long exposure and I was right. But the results for the frame sampling image were also the same. If there is any difference it is so subtle that I can't detect it.

What this means to me is that for a static scene I could possibly have a little wind vibration without loss of image quality.

By movement I mean that halfway through each test I physically tapped the side of the camera a couple of times - I could see the camera move. I find this very encouraging.

Victor
 

dougpeterson

Workshop Member
I agree that the Frame Sampling images look slightly softer than a long exposure of the same scene. I also agree that it is the lack of 'any' noise that fools my eyes a little.

I also wanted to know how a Frame Sampling image would cope with some slight camera movement. I took a long exposure of 40 seconds and a frame sampling image (2.0s, 40 seconds, 20 images) of the same scene. The frame sampling image was continuous.

I was fairly certain that slight camera movement would have no effect on the long exposure and I was right. But the results for the frame sampling image were also the same. If there is any difference it is so subtle that I can't detect it.

What this means to me is that for a static scene I could possibly have a little wind vibration without loss of image quality.

By movement I mean that halfway through each test I physically tapped the side of the camera a couple of times - I could see the camera move. I find this very encouraging.

Victor
Camera shake would have identical impact as on a long exposure, with one exception...

Imagine you're doing a single long exposure (e.g. 90 seconds) from a tripod near a road, and 45 seconds into that exposure you notice a truck coming your way (which will cause light and vibration). If you choose to end the exposure at that point, the image will be one stop underexposed.

Using frame averaging in the same scenario, ending the sequence early will NOT affect the histogram/exposure of the scene. It will simply average fewer frames.

Even if you only notice the vibration and stop it AFTER it has begun, you might well get away with it if there are no specular highlights (e.g. street lamps). For example, if you are 6 minutes into a frame averaging exposure, bump the tripod, and push the cancel/stop button within two seconds of that bump, you'll have only 2 seconds of "bad" image content blended with 360 seconds of "good" image content, so you shouldn't see much, if any, ghosting or loss of sharpness. And again, with frame averaging the final exposure will be correct (and more robust to post processing than a single long capture) no matter what the originally intended exposure length was or how prematurely you ended it. Of course, if you were going for a motion-smoothing effect (e.g. turning a waterfall into silk) that requires X minutes and you stop the averaging early, you won't get quite the intended motion smoothing, but in many cases the difference could be minor (e.g. stopping 2 minutes into a 3-minute frame average won't affect the look of the motion smoothing that much).
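To illustrate (with invented numbers, and a plain running mean standing in for whatever the back actually does internally): stopping early just means the mean is computed over fewer frames, and a single bumped 2-second frame in a roughly 6-minute sequence carries a weight of only about 1/181 of the final pixel values.

```python
import numpy as np

def running_mean(frames):
    """Incrementally average frames; stopping early still yields a correctly
    exposed mean, just computed over fewer frames."""
    mean, n = None, 0
    for frame in frames:
        n += 1
        mean = frame.copy() if mean is None else mean + (frame - mean) / n
    return mean

# Hypothetical sequence: 180 good 2 s frames, then one frame blurred by a bump,
# then the sequence is stopped (~362 s total).
rng = np.random.default_rng(1)
scene = rng.normal(1000, 10, size=(50, 50))                    # static scene
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(180)]
frames.append(np.roll(scene, 3, axis=1))                       # the "bumped" frame

result = running_mean(frames)
print("weight of the bumped frame:", 1 / len(frames))           # ~0.0055
```

The bumped frame's ghost is diluted to well under one percent of the signal, while the overall exposure (the mean level) is unchanged no matter when you stop.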

We mention this in my colleague Arnab's frame averaging write up.
 

Landscapelover

Senior Subscriber Member
The two seem fundamentally the same with some differences in implementation. Both seem to create an image that is the average of numerous frames calculated directly from sensor data. It's useful either for simulating an ND filter or for reducing noise in shadows.

The Sony Smooth Reflection (SR) app produces both a RAW and JPEG result.

The SR interface provides some settings for specific scenarios as well as a custom setting where you can choose the number of frames, but the choices are in rather large increments.

SR doesn't use an electronic shutter so can't do gapless captures.

SR doesn't seem to have a shutter speed limitation other than the 30 second limit of the Sony camera.

The SR app costs $4.99 whereas Phase One's Frame Averaging tool is free (well, free with the purchase of a $50,000 camera).

Note that the Play Memories apps (like Smooth Reflections) are not supported by newer Sony A7 and A9 cameras. It is still supported on the a6xxx series cameras.

I toyed with it on my Sony cameras a few years ago and promptly forgot all about it. The current discussion has reminded me that it's available, and I envision using it more frequently now. I have it installed on both my A7R2 and an older a6000. In round numbers, the A7R2 approaches IQ4150 noise levels with double the frames. The a6000 improves dramatically with averaging but never gets very close.
Thank you very much Craig! I really appreciate your explanation.

I understand much more now that I've studied the SR, as there is a lot of information and there are many examples on the web.

I wonder why Sony dropped the SR and other Play Memories camera applications for the A7R III. In-camera processing seems to be very convenient.

Yeah! You're right, Craig! Phase One's FA is a bargain compared to Sony's ($4.99). Very well thought out :)


Best

Pramote
 

algrove

Well-known member
I agree that the Frame Sampling images look slightly softer than a long exposure of the same scene. I also agree that it is the lack of 'any' noise that fools my eyes a little. Victor
Victor

I guess my question is: ARE the FA images in fact softer?
 

Craig Stocks

Well-known member
One point to address some misinformation I've seen on frame averaging - it is NOT the appropriate tool for star trails. Stacking star trails and stacking images for noise reduction are two very different processes and yield different results. Both involve "stacking" but the methods of combining individual frames are different.

For star trails: Shoot continuous frames with the camera stationary on a tripod. Load the frames into Photoshop as layers and set the blending mode to Lighten. The foreground remains aligned and the stars "paint" their way across the sky.

For noise reduction (frame averaging): Shoot continuous frames with the camera stationary or on a tracking mount. Load the frames into Photoshop as layers and align the layers so that the stars are aligned. (If foreground is visible it will no longer be aligned and will appear blurry when the frames are averaged.) Convert the layers to a smart object and set the stacking mode to Mean. This procedure yields a result identical to in-camera frame averaging.

The key difference is the method of combining the frames or layers. Averaging takes the average color and brightness of each pixel, while Lighten blend mode uses the brightest value for each pixel. Though they can be similar, one long exposure is not the same as many short exposures, especially when there are moving specular highlights like stars or car lights.

Stacking in Lighten blend mode (such as for star trails) does provide some noise reduction, but not as much as averaging. It also makes single-pixel noise worse, since it combines all of the single-pixel noise into the final image rather than averaging it out.

I haven't experimented (yet) but I suspect that light trails from cars will behave like star trails and require either a true long exposure or stacking in Photoshop in Lighten blend mode.
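To make the difference concrete, here's a toy numpy comparison (invented pixel values, with one "star" moving one pixel per frame) of Mean stacking versus a Lighten/Max blend:

```python
import numpy as np

rng = np.random.default_rng(2)
# 30 hypothetical frames of a dark sky; the star lands on a different pixel
# in each frame, i.e. it trails across the sensor between frames.
frames = []
for i in range(30):
    frame = rng.normal(20, 5, size=(1, 40))   # dark background plus noise
    frame[0, i] = 250                         # star at a new position each frame
    frames.append(frame)

stack = np.stack(frames)
mean_stack = stack.mean(axis=0)   # frame averaging: star pixel ~ (250 + 29*20)/30
max_stack = stack.max(axis=0)     # Lighten blend: star pixel stays near 250

print("mean along the trail:", mean_stack[0, :5].round(1))
print("max  along the trail:", max_stack[0, :5].round(1))
```

With Mean, the trail pixels end up only slightly above the background (roughly 28 versus 20 here), so the trail nearly disappears; with Max the trail stays at full brightness, but the background also picks up the brightest noise sample at every pixel.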

These two examples show the same 30 frames stacked in Lighten blending mode (clear star trails) and averaged (nearly invisible star trails).
 

Attachments

dougpeterson

Workshop Member
One point to address some misinformation I've seen on frame averaging - it is NOT the appropriate tool for star trails. Stacking star trails and stacking images for noise reduction are two very different processes and yield different results. Both involve "stacking" but the methods of combining individual frames are different.

For star trails: Shoot continuous frames with the camera stationary on a tripod. Load the frames into Photoshop as layers and set the blending mode to Lighten. The foreground remains aligned and the stars "paint" their way across the sky.

For noise reduction (frame averaging): Shoot continuous frames with the camera stationary or on a tracking mount. Load the frames into Photoshop as layers and align the layers so that the stars are aligned. (If foreground is visible it will no longer be aligned and will appear blurry when the frames are averaged.) Convert the layers to a smart object and set the stacking mode to Mean. This procedure yields a result identical to in-camera frame averaging.

The key difference is the method of combining the frames or layers. Averaging takes the average color and brightness of each pixel, while Lighten blend mode uses the brightest value for each pixel. Though they can be similar, one long exposure is not the same as many short exposures, especially when there are moving specular highlights like stars or car lights.

Stacking in Lighten blend mode (such as for star trails) does provide some noise reduction, but not as much as averaging. It also makes single-pixel noise worse, since it combines all of the single-pixel noise into the final image rather than averaging it out.
I guess it's somewhat a semantic-definition thing, but I'd consider Max-hold ("Lighten" in Photoshop terms) to still be frame-averaging (frame "stacking" is probably more accurate but too generic IMO). It's just frame averaging with Max rather than Mean.

Mean basically acts exactly as a single long exposure would. Max is decidedly different from any single-capture method, as are Median and Mode, which would allow for the removal of crowds from street scenes and other interesting visual effects.

My senior thesis was on star trails. In Winter. In Ohio. BRRRRRRR.

I'd LOVE to see Max and Median/Mode frame averaging added to the IQ4's frame averaging tool.


I haven't experimented (yet) but I suspect that light trails from cars will behave like star trails and require either a true long exposure or stacking in Photoshop in Lighten blend mode.
Most car light trail imagery I've seen or done has been with a single long exposure, so would be well served by P1's current implementation of frame averaging.

Edit: Of course, Phase One is the first company I'm aware of to offer totally gapless frame averaging, so maybe that's why I haven't previously seen frame-averaging car light trail imagery.
 
Last edited:

Craig Stocks

Well-known member
Mean basically acts exactly as a single long exposure would. Max is decidedly different from any single-capture method, as are Median and Mode, which would allow for the removal of crowds from street scenes and other interesting visual effects.
I respectfully disagree, particularly when moving highlights like stars or car lights are involved. Consider the example of a single 30-second exposure of a dark scene where, during that time, a strobe fully illuminates a subject in the scene. That subject will be fully exposed at the end of the strobe flash and will still be rendered as fully exposed at the end of 30 seconds. Now consider a frame average of thirty 1-second exposures where that same strobe illuminates the same subject during one of those frames. In that case the fully illuminated pixels will be averaged with 29 other samples of dark pixels. The result is that the subject will not be rendered as fully illuminated; it will be 1/30th illuminated and 29/30ths dark. A single exposure is the summation of all illumination during the exposure, which is different from the average of multiple samples taken during the same period.

I believe my test illustrates the difference for star trails.

I'd LOVE to see Max and Median/Mode frame averaging added to the IQ4's frame averaging tool.
I agree, and we might as well throw in Minimum and other methods too. Maximum would be useful for light painting, star trails and automotive streaks. Mode is best for removing moving elements in a scene, such as people or cars. In terms of programming, though, Mode is probably much more memory intensive, since you can't (as far as I know) calculate a rolling mode in the same way you can calculate a rolling mean, minimum or maximum.
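For what it's worth, here's a sketch of why the rolling versions are cheap (this is just my own illustration, not how the IQ4 firmware actually works): mean, minimum and maximum each need only one frame-sized buffer that's updated as each new frame arrives, while an exact per-pixel mode or median needs all the frames, or a per-pixel histogram of possible values.

```python
import numpy as np

def update(state, frame, n):
    """Update rolling mean/min/max with the n-th frame using one buffer each."""
    state["mean"] += (frame - state["mean"]) / n       # incremental mean
    state["min"] = np.minimum(state["min"], frame)
    state["max"] = np.maximum(state["max"], frame)
    return state

# A per-pixel mode (or exact median) has no such single-buffer update: you
# either keep every frame, or keep a histogram per pixel, which for a 16-bit
# sensor is a far larger memory footprint.
frames = [np.random.default_rng(i).integers(0, 65536, (50, 50)) for i in range(8)]
state = {"mean": frames[0].astype(float), "min": frames[0].copy(), "max": frames[0].copy()}
for n, frame in enumerate(frames[1:], start=2):
    state = update(state, frame, n)
```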
 

Paul2660

Well-known member
We are getting way off course here, but I would say that with star trails I find the best solution is running a Max, then a Mean, on the same Smart Object, then layering the two back together. Max will pull much more noise, Mean almost none, and a layered blend seems to get the best results for me. Working with a partial moon (1/2 to 3/4) gives the photographer almost daylight-looking foreground areas, which I greatly prefer over dark foregrounds or pushed-ISO noise. Photographers unfamiliar with the technique often assume it's faked, but sadly it's not - it's actually a ton of work. Not to say there is only one way to do this, as there are many; this just works best for me.

You will always have gaps, even in a single long exposure. They are very faint with a single long exposure but still there, whereas with a long series of 1.5-minute exposures you will have a slight gap where each frame ends. Obviously the camera cannot be moved at all, or the trails will show a drop or jerk where the camera moved; even hitting the playback button can be enough to ruin a long 1.5-hour series.

The only way I know to smooth them out is the additional step of running Startracer, which shifts the image to fill the gaps and of course blurs any landscape portion, which then has to be blended back. Startracer works with any wide-angle lens, but it is harder to get a solution with the MF wides.

Paul C
 

Craig Stocks

Well-known member
This pair of samples compares one 8-second exposure where I walked through the scene holding a flashlight (pointed at the camera) and a frame average of 8 1-second exposures with ISO adjusted to achieve the same overall exposure (ISO 50 and ISO 400).

There are two things of note:

1 - In the single frame, the light streak is fully exposed at 100% luminosity (RGB 255,255,255) and is completely opaque. In the stacked frame, the light streak is not fully exposed (around 85% luminosity) and you can see some detail behind the light. That's due to the averaging nature of having one frame with the light and 7 frames without the light.

2 - Notice that in the averaged version the light streak is pink/magenta rather than white or gray. I've been working with my dealer to create a support case for what appears to be a problem in the calculations. Anytime a bright highlight is only present in a very small percentage of the frames it's rendered pink/magenta rather than gray. A similar stack in Photoshop would render it as gray. My guess is that it has to do with how the two green channels are handled in the calculation. It doesn't seem to matter if the highlight is from an LED, tungsten or flash source. The problem is very repeatable on my back.
 

Attachments

Paul2660

Well-known member
Craig.

Good catch. Wayne Fox noticed this effect in some shots he took at the ocean. Same type of thing. Small areas of bright light on the water.

Makes me wonder how this will work on a stream flowing in sunlight.

Paul C
 

Craig Stocks

Well-known member
The only way I know to smooth them out is the additional step of running Startracer, which shifts the image to fill the gaps and of course blurs any landscape portion, which then has to be blended back. Startracer works with any wide-angle lens, but it is harder to get a solution with the MF wides.

Paul C
I've had good luck simply stacking with Lighten blend mode and haven't noticed gaps when using the IQ3100, but maybe I've just been lucky. I use the time lapse tool with zero gap and enable the electronic shutter. StarStax has a method to fill in gaps in star trails and it seems to work pretty well but it can't work with P1 RAW files so you need to create a set of JPEGs or TIFFs first. I've used it with some Sony files that are more prone to showing gaps.
 