The GetDPI Photography Forum


IQ4 recording at wrong image format, 14 bit instead of 16 bit EX

etrump

Well-known member
It does appear that P1 has one part time college student working on IQ4 firmware after dinner and a few drinks.

Fixes that should take a few hours take weeks and add embarrassing bugs that could easily be unearthed by even the most rudimentary testing. Most companies have a testing protocol and fleet of beta testers with nightly firmware feeds to work through issues before the public ever sees them.

I’ve been a phase customer for more than ten years and this is NOT business as usual for them. Release issues have been minor up to this camera with most issues resolved in a couple of months from release.

Fortunately for Phase the results with the IQ4150 are substantially better than anything else available. I, for one, still love the camera, deal with the frustration and hope for the best.

And yes it does appear my hair is falling out just look at my profile shots over the past ten years. ;)

Am I the only one who somewhat loses the will to live when reading this list? I’d quite like to concentrate on the creative process and the usual technical challenges without having to remember that the back only does colour images when set to IQ23LS mode provided that it’s not a Tuesday unless it’s the third Tuesday of the month (other than June) in which case you have to use it upside down.

Get real Phase.
 

dougpeterson

Workshop Member
We asked Phase One to shed some light on this, and we received the following from the Head of Support for Phase One US:

When using Automatic Frame Averaging (AFA), the IQ4 does not capture in a native 14-bit or 16-bit mode as it does in normal single-shot mode. Rather, the number of discrete integer values recorded per pixel per frame will fall somewhere above 16,384 (14-bit equivalent) and below 65,536 (16-bit equivalent). All the data of those frames is combined and then averaged based on the total number of frames and the bit depth of the selected raw format. In this way the total capture data can greatly exceed that of a 16-bit single capture and be saved as either file format.

If we compare that to single-shot capture, the sensor will only ever capture 14 bits of data when the file type is set to 14 bit, or 16 bits of data when the file type is set to 16 bit. The AFA file is still built upon the full amount of original capture data regardless of the bit depth it was saved at.

File compression can be greatly improved on AFA files thanks to efficiencies produced by the data-averaging process. AFA files consisting of many frames will have virtually no noise. This improves file efficiency because noise maps can be difficult to compress effectively. Ultimately, images captured in AFA will often be much smaller on disk than their single-shot counterparts.
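Phase One's description can be illustrated with a toy numerical sketch (my own illustration of the general principle, not their actual pipeline): sum many noisy frames and the combined data far exceeds what a single 16-bit capture can hold, while averaging it back down crushes the random noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch of frame averaging (an illustration of the general principle,
# not Phase One's actual pipeline). Each frame records a value on a
# 16-bit-style scale, plus random per-frame noise.
true_signal = 40_000.0          # "clean" pixel value
noise_sigma = 500.0             # per-frame random noise
n_frames = 200

frames = true_signal + rng.normal(0.0, noise_sigma, size=n_frames)

# The combined (summed) data far exceeds the 16-bit ceiling of 65,535 ...
print(f"sum of all frames: {frames.sum():,.0f}")   # ~8,000,000

# ... and averaging it back down cuts random noise by roughly sqrt(N).
averaged = frames.mean()
print(f"per-frame noise:             {noise_sigma:.0f}")
print(f"post-average noise (theory): {noise_sigma / np.sqrt(n_frames):.1f}")
```

This also hints at why such files compress well: after averaging, the residual noise floor is tiny, and low-noise data is much easier to compress than a full-strength noise map.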

Specifications aside, the proof is in the image quality.
In other words (as Steve correctly reported earlier in this thread), the use of a 14-bit (IIQ-L) format to store the results of a frame averaged capture is the intended behavior; not a bug or an accident.

I'm disappointed about the poor communication around this topic prior to the release of this firmware, and the other several mis-steps in the path to unlocking the full promise of the IQ4. The IQ4 remains the best camera made, but that superlative makes the open items (e.g. still no ad hoc wifi among other notables) more frustrating, not less. Hopefully Phase One can learn from these mis-steps and come to the table with solid well-communicated feature updates for the IQ4 in the coming months.

But since the mis-reported-bit-depth bug seems to have been squashed, it seems frame averaging can now safely be used. While I'm deeply interested in the technical underpinnings (I'm quite a large nerd) and am still trying to make sense of Phase One's reply above, ultimately the final image quality is what matters. I agree with "the proof is in the image quality". So I encourage everyone to go forth and frame average and post your results, so we can all judge together what image quality frame averaging reaps, as currently implemented.
 

Craig Stocks

Well-known member
You're right, the proof is in the image. Continuing from my post and samples I shared earlier today, these two 100% crops compare a single frame at ISO 50 in 16-bit EX format against 200 frames averaged, then pushed A LOT in C1 (max exposure and shadows, plus increased contrast and saturation). The frame-averaged sample is basically still noiseless, even when pushed. Regardless of bit depth, there is a lot of pliability in the file.
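Craig's push test matches what simple statistics predict. A quick simulation (illustrative numbers only, not IQ4 data) shows why the averaged file stays clean under a heavy push: the gain multiplies signal and noise alike, so a file that starts with roughly sqrt(200) ≈ 14x less noise stays about 14x cleaner after the push.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch of why a frame-averaged file survives a heavy push.
shadow_level = 50.0            # deep-shadow pixel value (arbitrary scale)
noise_sigma = 20.0             # per-frame noise in the shadows
push_gain = 16.0               # roughly a 4-stop push

# Single shot: full per-frame noise. 200-frame average: noise / sqrt(200).
single = shadow_level + rng.normal(0.0, noise_sigma, size=100_000)
averaged = shadow_level + rng.normal(0.0, noise_sigma / np.sqrt(200),
                                     size=100_000)

# The push multiplies signal AND noise, so the averaged file stays clean.
print(f"pushed single-shot noise:   {np.std(single * push_gain):6.1f}")
print(f"pushed 200-frame avg noise: {np.std(averaged * push_gain):6.1f}")
```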
 

Attachments

RLB

Member

It's difficult to fully understand this, as it's a totally new way to approach image capture. Many of us had thought (since we were not told differently) that FA was essentially a version of "in-camera HDR," taking several single captures and merging them automatically. That concept is easy to understand, as many of us have done it for years to achieve greater dynamic range when needed. With the single-capture approach (as we know), setting the bit depth before the shutter is released locks one into a specific bit-depth parameter. With Phase's approach to FA, however, this is no longer a factor, as the way the sensor receives light data and the math is crunched is far more complex than simply combining a few shots the way HDR does. As I understand it at this point, this opens up many more possibilities for how the file is compressed (without ANY data loss) and how noise can be eliminated from shadows, and so on. Theoretically it opens the possibility of far greater dynamic range and bit depth than HDR could ever achieve, and of far greater compression of file size without any quality loss (another concept we are not familiar with).

My take is that in Phase One's attempt to over-simplify what FA actually does, the message was lost in translation and often understood as the closest cousin we already knew: HDR. As we now know, it's far more complex and versatile. Thank you to the folks who have posted the testing; the proof is in the results, whether or not we understand the science behind it.

Robert
 

Steve Hendrix

Well-known member

Yes, and we also received the same information regarding the bit depth origins (this time via memo). I have to admit, it hurt my head a bit to read through the entire explanation we received. But essentially, in the case of Automatic Frame Averaging as a feature, it appears that the messaging from Phase One missed the mark in the opposite direction from usual. Instead of over-hyping, they essentially under-hyped, because terming it Automatic Frame Averaging with no context on how it was being done allowed end users and dealers to think of it in the traditional sense of frame averaging as done in post. In reality they have created a tool with much more potential (some of this was shared with us, but not many details), but as a result, much more complexity. Hence the extended time between the Beta Labs Frame Averaging tease of last year and now. Someone may have realized "Hey, we can do this" before realizing what doing it would open up for them with the approach they were taking.

But the bit depth at the raw capture stage, before it even becomes a file, is not a fixed value but a varying one. The way I understand it, I think of it as a collection of data of varying quality, rather than a collection of files with set, finalized quality aspects. From that, a lot of math has to be done to get the end result where it is right now, which is then assigned as a 14-bit file (even though it easily exceeds the quality of single-capture 16-bit files). For now, this is what they can do, and the 14-bit assignment creates a lot of efficiency. But know that down the line, things apparently are going to get even better.
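Steve's point that a 14-bit container can exceed single-capture 16-bit quality lines up with a standard back-of-envelope rule (my own sketch, not Phase One's actual math): averaging N independent frames cuts random noise by sqrt(N), which is worth roughly log2(sqrt(N)) extra bits of usable precision.

```python
import math

# Back-of-envelope effective precision from frame averaging (a sketch of
# the general principle, not Phase One's actual numbers): averaging N
# independent frames reduces random noise by sqrt(N), worth about
# log2(sqrt(N)) extra bits of usable precision.
def effective_bits(base_bits: float, n_frames: int) -> float:
    return base_bits + math.log2(math.sqrt(n_frames))

print(effective_bits(14, 1))    # 14.0  – single 14-bit frame
print(effective_bits(14, 16))   # 16.0  – 16 frames already match 16-bit
print(effective_bits(14, 200))  # ~17.8 – exceeds single-shot 16-bit
```

By this rough rule, even a modest 16-frame average already carries 16-bit-equivalent precision, so storing a 200-frame average in a 14-bit container loses nothing relative to a single 16-bit capture.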

Geez, I hope I understand this correctly. I fed back what I have written here to Phase One and did not get any head shakes, so... 🤞


Now, about those other things on the existing request list ...


Steve Hendrix/CI
 

kdphotography

Well-known member
It's nice that the Mothership in Denmark finally did issue a memo to its dealership client support structure to clarify Frame Averaging (AFA).

It sure would be nice to see a few more memos from the Mothership issued to its dealers and more effectively communicate to clients the progress made, timelines, etc. on firmware/feature updates on the IQ4 platform----you know, that MFDB that is supposed to be the flagship and pinnacle of medium format excellence. :rolleyes:

C'mon, Phase One. Make Dante proud. ;)
 

RLB

Member

As a long-time Phase shooter (20+ years) and very early IQ4 adopter, I'm as excited and anxious as anyone about the future feature-set updates. While I don't want to make excuses for the lack of communication from the mothership, I think we all need to consider just how much of a departure the IQ4 is from previous models. Once we got into CMOS, every back from the 50 to the 100 and the 1-2-3 series was pretty much the same back with a different sensor. While the IQ4 also has a different sensor, the onboard Linux, processing potential and firmware are, in my assessment, vastly more sophisticated than previous generations; the only thing remaining from the previous models is the housing (now with a few more ports, of course).

While the IQ100 is a fantastic back, the IQ4 has the potential to be an almost end-all product; hence "Infinity Platform." After shooting with the IQ4 the past six months I'm floored by the file quality, focus peaking, and lack of color fringing/shift (when shifting and stitching). Sure, it's nowhere near perfect yet, but I expected that when I bought in. As a tech-camera shooter, the things the IQ4 allowed me to do were so far beyond the IQ180, which in its own right was a pretty great back within the CCD limitations. I think everyone is in agreement that more concise and frequent communication is a positive thing. I also think Phase could benefit from casting a wider net with firmware beta testers.

Robert
 

Wayne Fox

Workshop Member
There's one thing I thought I would mention regarding the 16-bit/14-bit setting and frame averaging. This isn't new, and in fact Doug clarified it earlier in this thread, but it's easy to be unaware of.

Even though the frame-averaged file is only 14 bit, the setting still affects the way the frames are captured. To maximize your flexibility with frame averaging, you may need to manually set the format to 14 bit; otherwise the individual exposure times must be 0.5 seconds or longer to get a continuous exposure. Setting the camera to 14 bit allows individual exposure times as short as 0.25 seconds while maintaining continuous exposure.

Might not be too important, but it would mean one less stop of neutral density or the ability to open up a stop.

I would assume that using frame averaging on still subjects to reduce noise wouldn't pose any problems, but when trying to blur moving water as smoothly as possible, a continuous exposure is probably ideal.
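Wayne's point is simple exposure arithmetic. The sketch below uses the thresholds as he reports them (0.5 s at 16 bit, 0.25 s at 14 bit); the 1/15 s metered value is just a hypothetical example to show how much ND each setting would require for a continuous exposure.

```python
import math

# Thresholds as reported in the thread: minimum per-frame exposure time
# for continuous capture during frame averaging.
MIN_EXPOSURE_16BIT = 0.5   # seconds
MIN_EXPOSURE_14BIT = 0.25  # seconds

# Halving the minimum exposure is exactly one stop.
stops_saved = math.log2(MIN_EXPOSURE_16BIT / MIN_EXPOSURE_14BIT)
print(f"stops saved at 14 bit: {stops_saved:.0f}")  # 1

# Hypothetical example: metered single exposure of 1/15 s. How much
# neutral density is needed to slow each frame to the continuous minimum?
metered = 1 / 15
for label, minimum in [("16-bit", MIN_EXPOSURE_16BIT),
                       ("14-bit", MIN_EXPOSURE_14BIT)]:
    nd_stops = math.log2(minimum / metered)
    print(f"{label}: need {nd_stops:.1f} stops of ND")
```

So at 14 bit you need one stop less ND, or equivalently can open up one stop, exactly as Wayne says.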
 

GrahamWelland

Subscriber & Workshop Member
Btw, as a Linux developer with 30+ years of dev experience, why is it so difficult for Phase One to provide ad hoc WiFi support? I can do it on anything as feeble as an Atom-based board ...

Maybe Phase One needs to hire some decent WiFi engineers, or perhaps there is a fundamental flaw in their hardware. It sucked in every previous generation, btw.
 

Christopher

Active member
Worked perfectly fine on my IQ3100.

My wild guess is it won't come to the IQ4...


 

GrahamWelland

Subscriber & Workshop Member
Ok, to be fair, I completely gave up with my IQ260 and never tried that much with my IQ3 100 to the same degree. I may be being unfair on that camera but twice bitten ...

Also, it’s not like we really need it to work 30m away. More like 30-60cm.
 

Christopher

Active member
Honestly, for remote release and camera settings, Bluetooth would have been even better. It's amazing on my GFX100. In the end, though, WiFi is what's really missing for me, and the most important feature, as I used it 70% of the time on my IQ3100.

I'm also shocked that there is still no new version of Capture Pilot.


 

Paul2660

Well-known member
Same here with the IQ260. But one day I tried WiFi on the IQ3100 and was surprised. Damn, it worked. Nothing like the crappy response from the 260.

Started using Capture Pilot again, especially in situations where I wanted the effect of a tilting screen. Used it more for focus checking with Live View.

Was surprised to see it was left out on IQ4. 8 months now.

Surprised too that you still can't view image playback on an external HDMI monitor. Live View is better since you can move around now. I'd just love to be able to use a 1000-nit screen for daylight reviews.

Paul C
 

vjbelle

Well-known member
I agree that WiFi worked well with the 3100. I couldn't have taken some of my images in Lofoten without using Capture Pilot and my iPhone..... Having just received my 4150 I haven't experienced any of the wait, but the clock has started for me.

Victor
 