The GetDPI Photography Forum


Phase One IQ4 - Feature Update 1

Boinger

Active member
Thank you for your reply!

I am using my IQ3 100T with an XF body most of the time, and I'm not moving to a tech camera in the short term. May I ask how much the image quality improves compared to the IQ3 100T? Is it a big improvement if I only use it on an XF body? It seems to be a very good improvement on a tech cam!

Thanks again!
Unfortunately I don't have an XF, so I can't answer your question there. I would guess it is marginally better in terms of DR/noise, but if it is not going on a tech cam I would think twice.
 

f8orbust

Active member
To quote our article on IQ4 Automatic Frame Averaging:

"At its heart, this tool works by averaging two or more (often many more) sequential captures together, generating a single raw file. This has the effect of evening out noise in the shadows. With four samples the noise should be roughly half as much as a single capture (which is already extraordinarily low), with sixteen samples it should be roughly half as much again. In theory this technique can be used by anyone with any camera by capturing more than one image of the same scene and layering them with a low opacity in Photoshop or via specialized software. However, manual frame averaging requires capturing many gigs worth of raws, processing even more gigs worth of TIFFs, and minutes (or even hours) worth of computer time; just to generate a single output image. The IQ4 does exposure stacking internally, on the fly, and generates a single raw file ready for immediate use. Moreover, the IQ4 can do it entirely free of temporal gaps and entirely free of vibration."
I remember trying Luijk's 'Zero Noise' when it was released back in 2008 (sadly he hasn't updated it in a while). It worked (and still works) really well, but he did mention i) that the fewer images you capture the better, as it reduces the potential to lose sharpness between captures due to micro camera movements, and ii) that there was little point in taking more than four images, so he settled on three (-2EV, 0EV, +2EV) as an optimal strategy. Given the improvement in sensor design over the past decade, I would imagine that in some situations just two images should suffice (0, +2EV).
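For anyone who wants to try the manual route described in the quote above before the firmware ships, here is a minimal sketch of frame averaging in post. It assumes the captures have already been converted to aligned, linear 16-bit TIFFs; the file names and the tifffile dependency are illustrative assumptions, not anyone's actual workflow.

```python
# Rough sketch of "roll your own" frame averaging in post, assuming the
# captures are already aligned, linear 16-bit TIFFs. File names and the
# tifffile dependency are illustrative only.
import numpy as np
import tifffile

paths = [f"capture_{i:02d}.tif" for i in range(16)]  # e.g. 16 frames

# Accumulate in float so the sum of 16-bit values cannot clip.
stack_sum = None
for p in paths:
    frame = tifffile.imread(p).astype(np.float64)
    stack_sum = frame if stack_sum is None else stack_sum + frame

mean_frame = stack_sum / len(paths)

# Random noise falls roughly as 1/sqrt(N): 4 frames give about half the
# noise of a single capture, 16 frames about a quarter.
tifffile.imwrite("averaged.tif", np.round(mean_frame).astype(np.uint16))
```

Doing it this way also makes the article's point concrete: sixteen 100MP raws plus the intermediate TIFFs add up to many gigabytes and minutes of processing for one output image, which is exactly the overhead the in-camera implementation removes.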
 

dougpeterson

Workshop Member
I would stress that the ability to have the user determine the method of averaging would open up even more possibilities.

Doug - you explain how with longer “total times”, moving cars would become a “sea of smoke”. This of course makes total sense. But with modal averaging, the cars would disappear completely and you’d have an empty road.
Currently it is “Mean” averaging only, but other forms of multi-frame combination would also interest both you and me. For example, median*, max, and min all have specific use cases, as does more complex math for combining exposure brackets. As you say, once the foundational UI and technical work are done on a tool that opens up an entirely new way of shooting, a bunch of new ideas immediately start flowing. It's really very exciting.

A lot depends on user feedback once the tool is available. So make sure your dealer knows how interested you are in alternative math options for this tool. The user base for Phase One isn’t millions of people; every person counts for a lot.

*I think mode wouldn't work well in a 16-bit environment (how likely is it for exactly the same value to occur several times?), but I believe median would have the same effect you describe.
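Since the alternative combination modes come up here, a small sketch may help show why they behave so differently. The helper below is hypothetical and says nothing about how the IQ4 implements "Mean" internally; it just collapses a stack of aligned frames with different reductions.

```python
# Illustrative comparison of combination methods on a stack of aligned
# frames loaded as a (N, H, W) or (N, H, W, 3) numpy array. Hypothetical
# helper, not the in-camera implementation.
import numpy as np

def combine(stack: np.ndarray, method: str = "mean") -> np.ndarray:
    """Collapse a stack of aligned frames along the first axis."""
    if method == "mean":     # smooths noise; moving objects blur into streaks
        return stack.mean(axis=0)
    if method == "median":   # rejects values present in only a few frames,
        return np.median(stack, axis=0)  # so transient objects largely vanish
    if method == "max":      # brightest value per pixel (e.g. light trails)
        return stack.max(axis=0)
    if method == "min":      # darkest value per pixel
        return stack.min(axis=0)
    raise ValueError(f"unknown method: {method}")
```

With "mean", a car that crosses the scene during only a few of, say, 32 captures survives as a faint streak; with "median" its values are outvoted at every pixel and it effectively disappears, which is the empty-road behaviour described above.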
 

Peegeenyc

New member
'Frame averaging' is of course what every better cellphone has been doing for the past 3-4 years.

Google's 'Night Sight' mode is the most spectacular implementation of it (handheld, at that!). Plus there's 'computational RAW' on their Pixel phones, with free file storage in the cloud. Google seem the class of the field in such developments at the moment.

Interesting interview with Google's computational imaging guru here,
and a deep-dive white paper here.

Admittedly, it's a lot easier to implement this in a tiny, fast-readout sensor than in a full MF one, so congrats to Phase.
That said, I expect a firmware update by the big names (Sony, Fuji) will bring this into their offerings soon enough.
 

dougpeterson

Workshop Member
'Frame averaging' is of course what every better cellphone has been doing for the past 3-4 years.
Perhaps as a foundational technology that is true, but not as a tool that the photographer has direct/powerful control over in the same way they can control aperture, shutter, and ISO.

It's very rare that any company invents something 100% new from scratch. In fact, technically you could do "frame averaging" on film by shooting many pin-registered frames and physically stacking them for projection/viewing. I wouldn't be surprised if this was a technique employed in film-based aerial surveillance (I don't know either way).

But I think it's fair to say that because Phase One added this as a native, well-implemented, simple-to-use feature, the number of serious landscape and architecture photographers doing frame averaging on a routine basis will go from almost-nobody to most IQ4 users. That's a real achievement and shows really strong commitment to serious photographic tools.

But I'm biased, and until the feature update is ready for public download (expected next week) and users start to give it a try in the real world, it's all just (incredibly interesting and motivating) talk.
 

dougpeterson

Workshop Member
That said, I expect a firmware update by the big names (Sony, Fuji) will bring this into their offerings soon enough.
Maybe. Only time will tell. But it's been three years since Phase One added fully automated focus stacking to the Phase One XF and nobody else has done it.

Other companies have added focus bracketing, or a focus stacking mode where you have to manually guess-and-check at the relevant values (aperture/step size) and manually keep track of the start/end of the sequence during post. With Phase One's implementation you just indicate the front and back of the subject and the step size is auto calculated and the sequence is tracked as a set in Capture One.

I think most users far underestimate how important a company's mission and target audience are in determining how a camera is developed. As a camera maker, if your target audience is nostalgia-driven, or branding-based, or fashion/sex-appeal oriented, that will drive a lot of where you put time and money. If your target audience is driven entirely by image quality, feature set in a professional/serious setting, and overall experience as a tool, that will drive you to a very different set of decisions.

Anyway, I hope I'm wrong. I own a Fuji X-H1 and would love to be able to use this kind of feature on that camera for when I don't have a P1 kit on me.
 

dougpeterson

Workshop Member
We've updated our article with new ISO50 comparisons at a high zoom level so you can easily examine the detail/noise change online. We will also have the raws available for direct download in about an hour so you can dive deep into them within Capture One.
 

JimKasson

Well-known member
Gaps an issue?

The gapless technique is implemented with the "best-in-class sensor-based Electronic Shutter system." Do other cameras have a noticeable gap between exposures when using ES?
When I've done the averaging in post-production, I've never noticed an issue with gaps between exposures as long as the individual captures are on the order of a second or longer. If you do this with a camera other than the IQ4, it's best to use ES, which minimizes vibration. It's pretty easy to average the image set in Ps, or you can use astro software or roll your own. I've done all three.

https://blog.kasson.com/the-last-word/an-mf-camera-in-your-jacket-pocket/

Jim
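A quick back-of-the-envelope calculation supports the point about gaps; the 50 ms of dead time below is purely an assumed, illustrative figure, not a measured value for any particular camera.

```python
# Why inter-frame gaps stop mattering once individual exposures get long.
# The gap value is an assumption for illustration only.
exposure_s = 1.0      # length of each individual capture
gap_s = 0.05          # assumed dead time between captures (readout, etc.)
frames = 16

recorded = frames * exposure_s
total = recorded + (frames - 1) * gap_s
print(f"fraction of wall-clock time not recorded: {1 - recorded / total:.1%}")
# ~4.5% with these numbers; with 4 s exposures it drops to ~1.2%.
```

At those fractions, slowly moving subjects such as water or clouds show no visible discontinuity, which matches the experience described above.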
 

SrMphoto

Well-known member
'Frame averaging' is of course what every better cellphone has been doing for the past 3-4 years.

Google's 'Night Sight' mode is the most spectacular implementation of it (handheld, at that!). Plus there's 'computational RAW' on their Pixel phones, with free file storage in the cloud. Google seem the class of the field in such developments at the moment.

Interesting interview with Google's computational imaging guru here,
and a deep-dive white paper here.

Admittedly, it's a lot easier to implement this in a tiny, fast-readout sensor than in a full MF one, so congrats to Phase.
That said, I expect a firmware update by the big names (Sony, Fuji) will bring this into their offerings soon enough.
Would love to see that feature in Sony, Fuji and Nikon cameras. However, I doubt that Sony would implement it as they have not even implemented multiple exposure or focus bracketing.
 

drunkenspyder

Well-known member
Maybe. Only time will tell. But it's been three years since Phase One added fully automated focus stacking to the Phase One XF and nobody else has done it.

Other companies have added focus bracketing, or a focus stacking mode where you have to manually guess-and-check at the relevant values (aperture/step size) and manually keep track of the start/end of the sequence during post. With Phase One's implementation you just indicate the front and back of the subject and the step size is auto calculated and the sequence is tracked as a set in Capture One.

I think most users far underestimate how important a company's mission and target audience are in determining how a camera is developed. As a camera maker, if your target audience is nostalgia-driven, or branding-based, or fashion/sex-appeal oriented, that will drive a lot of where you put time and money. If your target audience is driven entirely by image quality, feature set in a professional/serious setting, and overall experience as a tool, that will drive you to a very different set of decisions.

Anyway, I hope I'm wrong. I own a Fuji X-H1 and would love to be able to use this kind of feature on that camera for when I don't have a P1 kit on me.
Spot on. Nikon has focus stacking, and though implemented and updated after Phase, it still, well, sucks. While one might expect that AFA will appear soon on other platforms—I think it is more of a game-changer than focus stacking, and so should be more desirable—that will be proven or not in time.
 

JimKasson

Well-known member
Spot on. Nikon has focus stacking, and though implemented and updated after Phase, it still, well, sucks. While one might expect that AFA will appear soon on other platforms—I think it is more of a game-changer than focus stacking, and so should be more desirable—that will be proven or not in time.
I agree about the Nikon FSS implementation. The Fuji GFX one is better. There's an issue with the "pick near, pick far, pick number of captures" process in that there is no control of the CoC of the steps. I prefer "pick near, pick far, pick step size, let number of captures fall where it may", which Fuji doesn't have either. With the GFX it's "pick near, pick step size, pick number of steps that will be adequate, throw away extra images in post."

Jim
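For readers unfamiliar with the trade-off being discussed, here is a minimal sketch of the "pick step size, let the number of captures fall where it may" approach, using the standard close-focus approximation DoF ≈ 2·N·c·(1+m)/m², where c is the circle of confusion (CoC) and m the magnification. The numbers are illustrative, and this is not how Phase One, Fuji, or Nikon actually compute their steps.

```python
# Minimal sketch: choose a target CoC, derive the depth of field covered
# by one frame, and divide the subject depth by it. All values illustrative.
import math

def frames_needed(subject_depth_mm: float, f_number: float,
                  coc_mm: float, magnification: float,
                  overlap: float = 0.2) -> int:
    """Number of captures needed to cover subject_depth_mm at the given CoC."""
    dof_per_frame = 2 * f_number * coc_mm * (1 + magnification) / magnification**2
    step = dof_per_frame * (1 - overlap)   # overlap adjacent slices a little
    return max(1, math.ceil(subject_depth_mm / step))

# Example: a 60 mm deep subject at 0.5x magnification, 0.03 mm CoC.
for N in (8, 16):
    print(f"f/{N}: {frames_needed(60, N, 0.03, 0.5)} frames")
# Stopping down from f/8 to f/16 roughly halves the frame count, which is
# the sharpness-versus-speed trade-off described further down the thread.
```

The appeal of this strategy is that the photographer controls the CoC directly and the frame count simply follows from it, rather than guessing a frame count and hoping the resulting steps are fine enough.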
 

dougpeterson

Workshop Member
I agree about the Nikon FSS implementation. The Fuji GFX one is better. There's an issue with the "pick near, pick far, pick number of captures" process in that there is no control of the CoC of the steps. I prefer "pick near, pick far, pick step size, let number of captures fall where it may", which Fuji doesn't have either. With the GFX it's "pick near, pick step size, pick number of steps that will be adequate, throw away extra images in post."
"Pick near, pick far, pick step size, let number of captures fall where it may" still doesn't account for changing apertures or changing magnification range on a given lens.

The P1 solution for this set of problems was time-intensive on their part (doing all the math and practical studies to verify it) but automatically accounts for all variables; just set the front and back (which are subjective user decisions that can't be automated) and push "go". This updates in real time as you change the aperture, so you can make an intelligent trade-off between absolute sharpness (e.g. using f/8) and workflow speed (e.g. using f/16, even though it will be a bit diffraction-limited, because it greatly reduces the number of frames). Plus the XF adds metadata tracking for intelligent batch post-processing (easy enough for them to do since they make both the hardware and software).

But we have many clients using it on a daily basis, including one that is exclusively shooting 2000px wide eCommerce images with the XF, specifically because of the speed and reliability of its focus stacking workflow. So it would seem to have been worth it.

If you haven't used the XF method before, I'd love to show it to you sometime. Even if you're never going to use an XF, it's kind of fun as a demonstration of technology and UI choices that I think you, as an incredibly smart guy, would really enjoy seeing.
 

gerald.d

Well-known member
Maybe. Only time will tell. But it's been three years since Phase One added fully automated focus stacking to the Phase One XF and nobody else has done it.
CAPCam did it. Taking into account necessary tilt/swing adjustments through the stack. Now that is something that no other platform on the planet can claim.

On a more pedestrian level, ignoring tilt/swing, so have Panasonic.

https://www.youtube.com/watch?v=BJll9V6BS8o

In camera.
 

dougpeterson

Workshop Member
Re: Gaps an issue?

When I've done the averaging in post-production, I've never noticed an issue with gaps between exposures as long as the individual captures are on the order of a second or longer. If you do this with a camera other than the IQ4, it's best to use ES, which minimizes vibration. It's pretty easy to average the image set in Ps, or you can use astro software or roll your own. I've done all three.

https://blog.kasson.com/the-last-word/an-mf-camera-in-your-jacket-pocket/

Jim
As usual, your brilliance shines through in this article!

But the fact that you wrote a MATLAB routine to automate the combination of files in post perfectly illustrates why a simple/fast/powerful/native implementation of frame averaging is such a game changer.
 

Boinger

Active member
That said, I expect a firmware update by the big names (Sony, Fuji) will bring this into their offerings soon enough.
Would love to see that feature in Sony, Fuji and Nikon cameras. However, I doubt that Sony would implement it as they have not even implemented multiple exposure or focus bracketing.

This feature was actually present in Sony cameras a long time ago, but I don't think you could store the result as a raw file.

https://www.playmemoriescameraapps.com/portal/usbdetail.php?eid=IS9104-NPIA09014_00-000011

https://www.playmemoriescameraapps.com/


Sadly they have discontinued the apps in their newer cameras.
 

dougpeterson

Workshop Member
CAPCam did it. Taking into account necessary tilt/swing adjustments through the stack. Now that is something that no other platform on the planet can claim.
I enthusiastically accept this correction. CAPCam absolutely did do this and deserve an equal mention alongside the XF.

It reinforces my thesis that "who you are targeting determines what you do" in camera development. Obviously CAPCam is not targeting nostalgia, fashion, brand-building, or casual users; they are razor-focused on a specific kind of user for whom this capability is very useful.

On a more pedestrian level, ignoring tilt/swing, so have Panasonic.
I hadn't seen this, but this implementation is not remotely close to something I could imagine one of our studio clients using in a production environment.
 

gerald.d

Well-known member
I enthusiastically accept this correction. CAPCam absolutely did do this.



I hadn't seen this, but this implementation is not remotely close to something I could imagine one of our studio clients using in a production environment.
Hence the “on a more pedestrian level” qualifier.

I just happen to believe that the IQ4 is a strong enough (part of a) platform to stand on its own merits without needing to make dubious claims for its capabilities compared to what other companies have delivered.

It just has to be the best.

It doesn’t need to claim to be the first.
 

RLB

Member
Perhaps as a foundational technology that is true, but not as a tool that the photographer has direct/powerful control over in the same way they can control aperture, shutter, and ISO.

It's very rare that any company invents something 100% new from scratch. In fact, technically you could do "frame averaging" on film by shooting many pin-registered frames and physically stacking them for projection/viewing. I wouldn't be surprised if this was a technique employed in film-based aerial surveillance (I don't know either way).

But I think it's fair to say that because Phase One added this as a native, well-implemented, simple-to-use feature, the number of serious landscape and architecture photographers doing frame averaging on a routine basis will go from almost-nobody to most IQ4 users. That's a real achievement and shows really strong commitment to serious photographic tools.

But I'm biased, and until the feature update is ready for public download (expected next week) and users start to give it a try in the real world, it's all just (incredibly interesting and motivating) talk.


That would be contrast masking... done for more than 50 years in the darkroom and occasionally in camera (large format, à la a custom ND filter). Same issue: bringing the dynamic range into the realm of the intended capture/output medium/device.

Very excited that Phase has, or shall I say "will" have this for us next week. It will be a huge time saver in some scenarios. Next up: auto stitch capture with a motorized Tech camera.
 

dougpeterson

Workshop Member
Very excited that Phase has, or shall I say "will" have this for us next week. It will be a huge time saver in some scenarios. Next up: auto stitch capture with a motorized Tech camera.
That would be great! Or at least some steps in the direction of making pano stitching on a tech camera faster/easier, especially on the post processing side. I'm not positive I'd personally want a tech camera with motorized anything; part of the joy of tech cameras to me is their mechanical/traditional/visceral/not-a-gadget-but-instead-a-craftsman-tool zen. But I could see the appeal of motorized movement for some users.

I love pano stitching with a tech camera. More res, pano aspect ratio, easy to compose in the field compared to nodal stitching. It's even more enjoyable with an IQ4 because of its drastically reduced color cast in tech camera use, but it's still more involved/tedious than it needs to be.
 