The GetDPI Photography Forum


New IQ4 Feature - Frame Averaging

dougpeterson

Workshop Member
The Phase One IQ4 may not yet be shipping, but it's already received its first new feature: Frame Averaging.

You can read about it here: Phase One IQ4 Frame Averaging.

It's likely this feature will be enabled in the pre-production unit that DT will have at Photo Plus Expo in NYC in a couple weeks.
 

med

Active member
Sounds very cool and promising, thanks for sharing. Another feature that will help make the IQ4-150 the ultimate tech camera back.

I was thinking of buying some new NDs (both 10 stop and grads) but perhaps I will hold off and add that money to my IQ4 piggy bank. Will be about 1/64th of the way there :)

Would it be erroneous to think of this as a form of HDR, or are the multi exposures all the same EV?

Would be interesting to combine something like this with the XF focus stacking tool to do in camera automated focus stacking that produces a single RAW file.
 
Very cool, looking forward to trying it out. Reading the article, a couple of interesting things jumped out: the averaging is done before A/D conversion, and with the electronic shutter you get zero gaps between frames. These things should make this feature more useful than just doing a burst and averaging in post later.
 

dougpeterson

Workshop Member
Would it be erroneous to think of this as a form of HDR, or are the multi exposures all the same EV?
The feature discussed in the article is for multiple reads at the same exposure/EV. The tonal range of the image remains the same, but the shadows withstand far more recovery without showing noise.

I think there are many additional ways the underlying technology can be applied on the IQ4 platform in the future. Exciting time.
 

dougpeterson

Workshop Member
This sounds like an in-camera implementation of Guillermo Luijk's Zero Noise and RawFusion (2007/8).
Yes, absolutely.

Quoting from the middle of our article...
At its heart, this tool works by averaging two or more (often many more) sequential captures together, before the raw file is generated. This has the effect of evening out noise in the shadows. With two shots the noise should be roughly half as much as a single capture (which is already extraordinarily low), with four captures it should be roughly half as much again, and yet again half at eight captures. In theory this technique can be used by anyone with any camera by capturing more than one image of the same scene and layering them with a low opacity in Photoshop or via specialized software. However, in practice, the specifics of how the IQ4 150mp does frame averaging make this tool far more useful than doing it manually with another camera. Let’s take a look at the technical components that make the IQ4 150mp frame averaging unique.
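
If anyone wants to experiment with the manual approach described above before the back ships, a minimal sketch along those lines (NumPy, with placeholder file names and assuming aligned, linear 16-bit TIFFs) would look something like this:

```python
# Minimal sketch: manual frame averaging of N aligned, equally exposed captures.
# Assumes linear 16-bit TIFFs that are already registered; file names are placeholders.
import numpy as np
import imageio.v3 as iio

paths = [f"frame_{i:02d}.tif" for i in range(8)]               # eight captures of the same scene
stack = np.stack([iio.imread(p).astype(np.float64) for p in paths])

averaged = stack.mean(axis=0)                                  # random noise falls by roughly sqrt(N)

iio.imwrite("averaged.tif", np.clip(averaged, 0, 65535).astype(np.uint16))
```

In the general case, averaging N frames knocks random noise down by roughly a factor of the square root of N; the point of the in-camera implementation is that this happens before the raw file is written, with no gaps between frames.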
 

Wayne Fox

Workshop Member
This sounds pretty sweet.

I can understand how this can help lift shadows, so replacing 2 or 3 stops of ND or ND grads sounds very plausible, but to duplicate a 10-stop ND filter in bright light, wouldn't this involve averaging hundreds of captures? It seems like the only way to duplicate the "blur" of an ocean (getting it totally flat and smooth, as mentioned in the article) would be to capture 15-20 frames a second over the same amount of actual time (minutes). (Some rough numbers on that below.)

Does it average “on the fly”, basically averaging each capture into the stack while it’s capturing the next image?
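
Doing a quick back-of-envelope on my own question, assuming the zero-gap electronic shutter mentioned above and each frame metered without the ND (the 1/15 s figure is just an example):

```python
# Back-of-envelope: frames needed to emulate an ND filter with zero-gap frame averaging,
# assuming each frame is shot at the exposure you'd meter *without* the ND and the frames
# are captured back to back, so total capture time matches the equivalent long exposure.
def frames_for_nd(nd_stops: int) -> int:
    return 2 ** nd_stops                       # each stop of ND doubles the frame count

base_shutter_s = 1 / 15                        # hypothetical metered shutter speed in bright light
for stops in (3, 6, 10):
    n = frames_for_nd(stops)
    print(f"{stops:>2}-stop ND ~ {n:>4} frames ~ {n * base_shutter_s:.1f} s of capture")
```

So a 10-stop equivalent really is on the order of a thousand frames; how long that takes in wall-clock time just depends on the base shutter speed.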
 

Wayne Fox

Workshop Member
Only offered with the 150 megapixel models. What a sales incentive.
I guess you can look at it that way, but it's simply what has been happening with technology advances in all electronics. None of their other backs have even close to enough CPU horsepower to pull this off. I'm sure that over time, as chips for mobile devices get more powerful and cheaper, other cameras will duplicate this (current smartphones already have this).

As one who purchased a 45" "big" screen TV that was 4 feet tall and two feet deep for $5,000, a "flat" panel 60" TV that was $15,000, and a 10MB (yes, megabytes) hard drive for over $1,000 when they were the new thing, and who was excited because I could play Brickles on my new Nokia cell phone that would actually almost fit in my pocket, nothing surprises me anymore and I'm just glad I'm still around to enjoy some of this stuff.
 

Boinger

Active member
I guess you can look at it that way, but it's simply what has been happening with technology advances in all electronics. None of their other backs have even close to enough CPU horsepower to pull this off...
The Sony a7R II had this exact same feature in its "app store", so I don't really think it has anything to do with processing power.

https://www.slrlounge.com/top-5-apps-for-your-sony-camera/
 

Christopher

Active member
That's not correct. All IQ4 backs have the same CPU power; however, it's clear to anyone who understands the technical side a little better that the older sensor just can't do it...
 

hcubell

Well-known member
Thanks, Doug. Very glad to see Phase persevering in the face of the Fuji tsunami. Now, if Phase would only cut the weight and size of the XF and lenses by 60% or so.
Can you elaborate on how this feature eliminates the need for graduated NDs or the use of HDR techniques in post?
 

Wayne Fox

Workshop Member
The Sony a7R II had this exact same feature in its "app store", so I don't really think it has anything to do with processing power.

https://www.slrlounge.com/top-5-apps-for-your-sony-camera/
I tried that app, and it was pretty good but had some problems. I'm not sure, but it seemed to be capturing the images and then doing the stack in the camera, so basically the same thing you would do on a computer. From the description of this feature, it functions differently in that the stacking is done before the A/D converter. Not sure. Images I captured with my Sony were nice, but they were visually a little different from the same image shot with an ND filter. That might very well be true of the IQ4 150 as well. All I know is the day my new back arrives, I'm headed either to Oregon for waterfalls or somewhere on a coast (good excuse to go to Hawaii :eek: )

And of course a 42MP Sony would require substantially less computing power to handle the data compared to a 100 or 150MP sensor.

This concept has been around for a while; I don't think Phase waited to implement it just so they could help drive sales of a future back. Something in the tech of the current back makes it possible. I still think part of it is the computing power, and the article seems to indicate this as well. But Christopher makes a good point: obviously there's something in the 150MP sensor tech that makes it possible as well.
 

f8orbust

Active member
This sounds pretty sweet.

I can understand how this can help lift shadows, so replacing 2 or 3 stops of ND or ND grads sounds very plausible, but to duplicate a 10-stop ND filter in bright light, wouldn't this involve averaging hundreds of captures...
In Luijk's original explanation of the technique (which he refers to as overexposure blending), only two exposures are needed: the first exposed correctly according to your usual workflow, and the second 4 stops overexposed (all tripod-mounted, and no moving subjects).

He explains why it works so well here:

In overexposure blending, SNR (signal-to-noise ratio) improves by 2^M, with M being the number of f-stops apart the second shot was taken. So if M=4 is chosen, with just two shots we would reduce the noise to 1/16 of the noise found in the shadows of the original image.

It's a really useful technique on any sensor, but particularly CCD ones which tend to be noisier in the shadows (for a given exposure) compared to CMOS.
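
For anyone who wants to try it, here is a minimal sketch of the two-shot blend, assuming linear data already normalised to [0, 1] and aligned; the function name and the 0.98 clipping threshold are just illustrative choices, not Luijk's exact code:

```python
# Minimal sketch of Luijk-style overexposure blending: wherever the +M-stop shot is not
# clipped, use it (scaled back down by 2^M); otherwise fall back to the normal exposure.
import numpy as np

def zero_noise_blend(base: np.ndarray, over: np.ndarray,
                     m_stops: int = 4, clip: float = 0.98) -> np.ndarray:
    """base and over are linear data normalised to [0, 1]; over is m_stops overexposed."""
    scaled = over / (2 ** m_stops)              # bring the overexposed frame back to base brightness
    return np.where(over < clip, scaled, base)  # use overexposed data only where it isn't blown out
```

The shadow improvement comes from the +M-stop frame putting those tones far above the noise floor before they are scaled back down.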
 

f8orbust

Active member
Obviously it's nice to have this in-camera, but it'd be waaaaaaaaaaaaaaaaaaaaaaaaaaaaay more useful to have the functionality added to C1 - i.e. just select a suitable pair of images and hit the 'zero noise' button.
 

Boinger

Active member
I tried that app, and it was pretty good but had some problems. I'm not sure, but it seemed to be capturing the images and then doing the stack in the camera, so basically the same thing you would do on a computer...
Frame averaging is exactly that, though: you take multiple exposures and average them together to get the result. I don't think it has anything to do with the sensor, especially when they use a term like frame averaging.

Averaging raw files in post-processing or averaging in camera shouldn't matter, as the data is still raw. You cannot average exposures before the A/D process, as the data would be analog, which a computer cannot average... It just doesn't make sense.

Don't get me wrong, the feature is nice and convenient; it beats having to average multiple exposures manually, sometimes 30 or 40 of them. But I don't think there is anything special happening here.

There have been lots of cameras with neat features; I think on old Panasonics you could see a long exposure building up as it was being exposed. That would be cool.
 

Wayne Fox

Workshop Member
You cannot average exposures before the A/D process, as the data would be analog, which a computer cannot average... It just doesn't make sense.
Just referencing what was stated in the article. I'm not sure why it doesn't make sense; it seems you could average a series of analog signals and then send the result of that average to the A/D converter, the same as you could average the digital results of each frame.

From the article: "In technical terms the averaging is occurring before the A/D conversion which leads to lower noise."

I agree (and stated earlier) that the concept isn't new, but it does sound like the implementation is a little unique, and pulling it off on a back with that many megapixels certainly had to be challenging.
 

gerald.d

Well-known member
Intriguing.

Can you set the “averaging” method?

Modal would be nice for removing cars and pedestrians completely from a street scene.
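
For what it's worth, the usual post-processing equivalent is a per-pixel median across an aligned stack, which drops anything that isn't present in most of the frames; a minimal sketch (alignment assumed, names illustrative):

```python
# Minimal sketch: per-pixel median across an aligned stack drops transient objects
# (cars, pedestrians) as long as each pixel shows the static scene in most frames.
import numpy as np

def median_stack(frames: list[np.ndarray]) -> np.ndarray:
    return np.median(np.stack(frames), axis=0)
```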
 