The GetDPI Photography Forum


Strange moire-like pattern - assistance requested

Ed Hurst

Well-known member
G'day Balt,

Thanks for your kind compliment - and also for your very interesting insights into the way in which these things tend to work.

I am not in the least surprised that small movements make a difference, nor am I very surprised that the lens corrections are the culprit, given that they change the geometry (and other variables) of the scene. What surprised me, if these factors are at play, is that the stacked images could display without the artefact showing, only for flattening/merging to reveal the problem. This was because I was not previously aware that what one sees on screen before flattening/merging does not fully represent the final file once flattened/merged. So the appearance of the artefact at that stage was what confused me (the image having, I wrongly assumed, appeared in its full splendour without the artefact prior to flattening/merging). Now that I know that flattening/merging actually involves running various algorithms, and that the preview prior to that is not 'complete', it all makes eminent sense to me. I feel like I have learnt something!

Would love to chat about Sydney photography and astrophotography some time - perhaps in a Sydney pub ;-)

Warmest regards,

Ed


P.S. To everyone who suggested other solutions that I have not tried yet, I will still give them a go - they might suggest something useful for other reasons!
 

bindermuehle

New member
Hi Ed,

no worries, and yes, we should go have a beer some day and have a bit of a chat about all things photography. As you might have seen in my other post, I'm looking into (likely) buying a CFV50c to get my Hassy setup (and my mojo) working again. Some others have suggested I look into a 645; you seem to be working with one? Perhaps you can tell me a little about your equipment?

Another thing you stumbled over is the representation of your image: 8-bit versus 16-bit makes a huge difference. Photoshop, to my knowledge, only shows you an 8-bit preview. Also, when you're stacking lots of images, keep in mind that with 8 bits you run out of dynamic range in the stacked image immediately, and 16 bits is not much better. Ideally you stack into a 24-bit or even 32-bit format, then use HDR techniques to "recover", i.e. bring into focus, those parts of the dynamic range you're interested in.

Remember this: in the unsigned-integer world (which is how pixels are usually represented), 8 bits = 256 levels, 16 bits = 64K levels, 24 bits = 16M levels and 32 bits = 4G levels. What does that mean? When you combine 100 frames into an 8-bit image, the finest gradation you can get between pixels is 1/256th of the full dynamic range of the image. You can see how that gets much finer the more bits per pixel you have to represent the numbers: 32 bpp has 16 million times as many levels as an 8 bpp image.
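Balt's arithmetic can be sanity-checked in a few lines (a quick sketch, using Python purely for illustration):

```python
# Levels available per unsigned-integer bit depth, and the finest gradation
# (step size as a fraction of full scale) each depth allows.
for bits in (8, 16, 24, 32):
    levels = 2 ** bits
    print(f"{bits:2d} bpp: {levels:>13,} levels, finest step = 1/{levels:,} of full scale")

# 32 bpp has 2**32 / 2**8 = 2**24 = 16,777,216 times as many levels as 8 bpp.
assert 2 ** 32 // 2 ** 8 == 16_777_216
```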

But then again you probably knew all that, don't mean to be a smartarse... :)

Cheers

= Balt
 

Ed Hurst

Well-known member
I am broadly aware of the implications of 8 versus 16 versus 24 versus 32 bit, but had not really thought about doing this stacking with 24 or 32 bit files. Are you suggesting that I create my files from raw at those bit levels, do the stacking, flatten, then use tone mapping to get back to the 16 or 8 bit file? Not being a smartarse at all - except in the best sense of genuinely being smart ;-)

I will PM you and we can organise a meet up somewhere... We can talk cameras. I can bring along my 645Z if you wish. I am sure I can learn a lot from you, especially in the area of astrophotography!

Ed
 

bindermuehle

New member
Hi Ed,

both amateur and professional astrophotographers try to stay in the format with the most bits per pixel for as long as possible. Every time you perform a mathematical operation on an image (and all the processes you're talking about are exactly that), you are limited to the "number space" afforded by however many bits you have to represent each pixel.

Say you're just linearly adding two pictures together in 8-bit (256 levels, maximum value 255), and say the same pixel has a value of 130 in one picture and 140 in the other: the total is 270. That's higher than the maximum representable value (255), so it will have "saturated".
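The overflow in Balt's example is easy to demonstrate. A minimal sketch (the function names are mine, purely illustrative):

```python
MAX_U8 = 255  # highest value an unsigned 8-bit pixel can hold

def add_saturating(x: int, y: int) -> int:
    """8-bit saturating add: clamp the result at 255."""
    return min(x + y, MAX_U8)

def add_wrapping(x: int, y: int) -> int:
    """8-bit wrapping add: keep only the low 8 bits (modulo 256)."""
    return (x + y) % 256

print(add_saturating(130, 140))  # 255 -- everything above 255 is clipped away
print(add_wrapping(130, 140))    # 14  -- or worse, the value wraps around
# Accumulating in a wider type (16-bit or more) preserves the true sum, 270.
```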

Cheers

- Balt
 

Ed Hurst

Well-known member
For that reason, I always do everything in 16-bit and then only create an 8 bit version for web use at the very end of my workflow. That's always seemed fine for single shots and panos. However, I am intrigued to hear that there might be benefits to a higher bit level with star trail stacks. Definitely something I will try...

Thanks!

Ed
 

Ed Hurst

Well-known member
Hmm... ACR seems only to allow raw conversion to 8 or 16 bit. Are you suggesting that I need to do the raw conversion at a higher bit level, or that I can take the 16-bit files and convert them to, say, 32-bit individually (which would not add data to any single file, of course) before doing the stacking?
 
The RAW file of the Pentax 645Z is 14-bit (16384 levels), whereas the RAW files of the IQ250/CFV-50C are 16-bit (65536 levels).
 

Ed Hurst

Well-known member
But is that significant in terms of the specific issue being discussed? We are talking about using bit levels higher than any sensor produces - not because the individual files contain those bits of data but because the stacking process might benefit from them (when the differences between files may be most effectively represented with more bits)... Or perhaps I misunderstand?

In any case, I am yet to be convinced that the difference you refer to is actually real in terms of the final output file. But very happy to learn :)
 

ErikKaffehr

Well-known member
Hi,

IQ 250 is 14 bit according to Phase One.

The raw format used by Phase One has been reverse engineered by Anders Torger, and it is 14 bits. So that "16-bit" is 99.99% a marketing lie by Phase One.

With Hasselblad it is a bit different: their raw format actually stores 16 bits, but 3-4 of those bits are just garbage. It is feasible that the IQ-250 sensor can deliver a true 14 bits; it is probably pretty close to the sensors used in the Nikon D810 and D750, both of which have around 13.7 EV of DR.

Now, the engineering definition of DR is based on a signal-to-noise ratio of 1, which would not be usable, so real-world DR may be, say, 11 EV, corresponding to 11 bits.

Just keep in mind, bits and EV are essentially the same. When Phase One says that a sensor has a DR of 13 EV, it means the sensor signal can be accurately represented by 13 bits, and also that the last bit represents 50% noise and 50% signal.

Now, there is a natural explanation for those 16 bits. Digital devices are normally either 8 or 16 bits wide. So if you use 16-bit components, the digital channel will be 16 bits wide; that doesn't mean it will pass 16 bits of clean information.

Another way to see it: 16 bits correspond to 96 dB, 14 bits to 84 dB and 12 bits to 72 dB. If you check the Dalsa spec sheet of the FTF9168C 60 MP sensor (the one probably used in some 60 MP backs), it says that typical dynamic range is 73 dB, with the linear part being 70 dB. So it is essentially a 12-bit device; that is, 12 bits are sufficient to hold all meaningful data, and any further bits represent noise.

DxO has not measured the IQ-260, but they have measured the IQ-180, and it had a DR of 11.89 EV. Now, DxOMark normally normalises DR to 8 MP, and that value is 13.56 EV; the extra 1.66 EV comes from the normalisation, log2(√(80/8)) = log2(√10) ≈ 1.66.
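Erik's normalisation figure checks out; a quick verification (the IQ-180's 80 MP resolution is assumed):

```python
import math

# DxOMark "Print" DR rescales the measured per-pixel ("Screen") DR to an
# 8 MP reference. Downsampling N megapixels to 8 MP averages N/8 pixels,
# improving SNR by sqrt(N/8), i.e. adding log2(sqrt(N/8)) EV of DR.
screen_dr = 11.89   # measured IQ-180 per-pixel DR (Erik's figure)
megapixels = 80     # IQ-180 resolution

gain = math.log2(math.sqrt(megapixels / 8))  # log2(sqrt(10)) ~ 1.66 EV
print(round(gain, 2))                         # 1.66
print(round(screen_dr + gain, 2))             # 13.55, close to the quoted 13.56
```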

So the IQ-180 is not really sixteen bits, not even fourteen, but just twelve bits.



Best regards
Erik



 

ErikKaffehr

Well-known member
Hi,

Stacking improves noise levels: averaging N frames cuts random noise by √N, so quadrupling the number of frames gives one extra EV of dynamic range.
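The square-root law behind that claim can be demonstrated with a quick simulation (pure synthetic Gaussian noise, for illustration only):

```python
import random
import statistics

random.seed(0)

def noise_after_stacking(n_frames: int, n_pixels: int = 20000) -> float:
    """Std dev of an averaged pixel when stacking n_frames of unit Gaussian noise."""
    stacked = [
        statistics.fmean(random.gauss(0.0, 1.0) for _ in range(n_frames))
        for _ in range(n_pixels)
    ]
    return statistics.stdev(stacked)

print(noise_after_stacking(1))   # ~1.0
print(noise_after_stacking(4))   # ~0.5  -- noise halved, i.e. one extra EV
print(noise_after_stacking(16))  # ~0.25 -- two extra EV
```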

By the way, my guess is that the banding you observed was coming from the lens correction being applied to the image but not to the dark-frame shots. But that is just a guess.

Best regards
Erik

 

Ed Hurst

Well-known member
Hi,

Stacking improves noise levels: averaging N frames cuts random noise by √N, so quadrupling the number of frames gives one extra EV of dynamic range.
So does this mean that the above idea of converting the 16-bit TIFFs to larger bit depths before stacking would be beneficial (even though it adds no extra data to the individual files)?

By the way, there was no dark frame used with any of the shots posted in this thread...

Thanks all,

Ed
 
There is no longer any official document from Phase One saying that the IQ250 is 14-bit. If you check with Raw Digger or other software you can see that the RAW file of the IQ250 is 16-bit (65536 levels):



The raw format used by Phase One has been reverse engineered by Anders Torger, and it is 14 bits. So that "16-bit" is 99.99% a marketing lie by Phase One.

With Hasselblad it is a bit different: their raw format actually stores 16 bits, but 3-4 of those bits are just garbage. It is feasible that the IQ-250 sensor can deliver a true 14 bits; it is probably pretty close to the sensors used in the Nikon D810 and D750, both of which have around 13.7 EV of DR.
I need to see hard evidence of this claim. I wish he could post part of the source code so I can verify it with my own code. I agree that the 16-bit (65536 levels) in the IQ250 RAW file might be due to interpolation from 14-bit (16384 levels), but there is talk of the ADC used in the Sony sensor being 14-bit internally, and there is no way Hasselblad can provide "true 16-bit" either.

Now, the engineering definition of DR is based on a signal-to-noise ratio of 1, which would not be usable, so real-world DR may be, say, 11 EV, corresponding to 11 bits.

Just keep in mind, bits and EV are essentially the same. When Phase One says that a sensor has a DR of 13 EV, it means the sensor signal can be accurately represented by 13 bits, and also that the last bit represents 50% noise and 50% signal.

Now, there is a natural explanation for those 16 bits. Digital devices are normally either 8 or 16 bits wide. So if you use 16-bit components, the digital channel will be 16 bits wide; that doesn't mean it will pass 16 bits of clean information.

Another way to see it: 16 bits correspond to 96 dB, 14 bits to 84 dB and 12 bits to 72 dB. If you check the Dalsa spec sheet of the FTF9168C 60 MP sensor (the one probably used in some 60 MP backs), it says that typical dynamic range is 73 dB, with the linear part being 70 dB. So it is essentially a 12-bit device; that is, 12 bits are sufficient to hold all meaningful data, and any further bits represent noise.

DxO has not measured the IQ-260, but they have measured the IQ-180, and it had a DR of 11.89 EV. Now, DxOMark normally normalises DR to 8 MP, and that value is 13.56 EV; the extra 1.66 EV comes from the normalisation, log2(√(80/8)) = log2(√10) ≈ 1.66.

So the IQ-180 is not really sixteen bits, not even fourteen, but just twelve bits.

Best regards
Erik
For this part I agree: the IQ260 just stores garbage in the lower bits, especially in long-exposure mode, where the picture must be taken at ISO 140 (actually ISO 200 native) and the shadows are as noisy as Canon's, which is fairly pointless for landscape shots.
 

ErikKaffehr

Well-known member
Hi,

Anders (Torger) was very specific about the Phase One raw format being 14 bits. He was developing code to write IIQ files from HDR conversion, so I am pretty sure he knows what he is doing. I can find his writing, but he doesn't give coding details.

On the other hand, it is quite possible to pass, say, 16-bit data if the data is coded non-linearly. Sony is doing this on most cameras: they seem to send 13 bits' worth of data through an 11-bit-wide channel. Lots of noise about that. Sony also has a delta compression that can cause artefacts.

My issue is mostly that MFD people use the 16 bits as a sales argument, although it is irrelevant.

The issue of 16-bitness is of course quite irrelevant in the original context. Sorry for the deviation!

Best regards
Erik
 
Hi,

Anders (Torger) was very specific about the Phase One raw format being 14 bits. He was developing code to write IIQ files from HDR conversion, so I am pretty sure he knows what he is doing. I can find his writing, but he doesn't give coding details.
I have seen his writing, but that doesn't change the fact that whatever software you use (aside from his own software, as he claimed) you get 16-bit (65536 levels) from an IQ250 RAW file, whereas you only get 14-bit (16384 levels) from a 645Z / Nikon / Canon RAW file.
On the other hand, it is quite possible to pass, say, 16-bit data if the data is coded non-linearly. Sony is doing this on most cameras: they seem to send 13 bits' worth of data through an 11-bit-wide channel. Lots of noise about that. Sony also has a delta compression that can cause artefacts.
I am quite aware of this Sony issue; to be more precise, it is a general issue for all current Sony camera bodies with electronic viewfinders. The actual level precision I have tested is as follows:

a) 10.7-bit (around 1700 levels) for normal settings:

no special setting;
electronic front curtain.

b) 10.4-bit (around 1400 levels) for any of the following settings:

silent shooting (electronic shutter) for A7S;
long exposure noise reduction mode;
B mode (verified at both 24 seconds and 38 seconds);
continuous shooting mode (verified at both the 1st and the 2nd frame);
speed priority cont. shooting mode (verified at both the 1st and the 2nd frame).

Such lossy compression not only causes artefacts on high-contrast edges (like star trails) but also cripples shadow recoverability (a colour-precision issue for demosaicing).
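For readers wondering how 13 bits can travel through an 11-bit channel at all: a non-linear tone curve reallocates codes toward the shadows. The sketch below is purely illustrative (a square-root-style curve, not Sony's actual encoding):

```python
LINEAR_BITS = 13   # 8192 linear input levels
CHANNEL_BITS = 11  # 2048 transmitted codes

def encode(linear: int) -> int:
    """Compress a 13-bit linear value into an 11-bit code (sqrt-style curve)."""
    max_in, max_out = 2**LINEAR_BITS - 1, 2**CHANNEL_BITS - 1
    return round((linear / max_in) ** 0.5 * max_out)

def decode(code: int) -> int:
    """Expand an 11-bit code back to the 13-bit linear scale."""
    max_in, max_out = 2**LINEAR_BITS - 1, 2**CHANNEL_BITS - 1
    return round((code / max_out) ** 2 * max_in)

# Shadows round-trip almost exactly; highlights are quantised more coarsely,
# which normally hides under shot noise but can bite on clean high-contrast
# edges -- star trails being a prime example.
for v in (10, 100, 1000, 8000):
    print(v, "->", decode(encode(v)))
```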
My issue is mostly that MFD people use the 16 bits as a sales argument, although it is irrelevant.

The issue of 16-bitness is of course quite irrelevant in the original context. Sorry for the deviation!

Best regards
Erik
Then why did you specifically target Phase One's (removed) official statement of the IQ250 being only 14-bit? I agree that 16-bit is purely hype, especially meaningless for the CCD sensors (e.g. IQ280/IQ260/H5D-60), where the SNR in the shadows is very poor (as poor as Canon's), but you still cannot persuade me that the IQ250 is not 16-bit (65536 levels), though it might have been interpolated from 14-bit (16384 levels). If the interpolation were true, then it would be true for the IQ260/IQ280 as well, and it essentially makes no difference for Hasselblad, as they do not have a sensor that can saturate a whole 14 stops of dynamic range at pixel level either.
 

ErikKaffehr

Well-known member
Hi,

I was objecting to this statement of yours:

"The RAW file of the Pentax 645Z is 14-bit (16384 levels), whereas the RAW files of the IQ250/CFV-50C are 16-bit (65536 levels)."

I was looking at this info from Phase One



This is low bits from an IQ-150 image in Raw Digger:


You can see that three out of four slots are empty.

Turning 14 bits into 16 bits is not interpolation but a logical shift left by two. Interesting, though, that the file contains more than 16000 different values.
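The logical shift and the resulting gaps are easy to reproduce (a small sketch of the idea):

```python
# Padding 14-bit data to 16 bits with a left shift of two multiplies every
# value by 4, so only one 16-bit code in four can ever occur -- exactly the
# "three out of four slots empty" pattern visible in Raw Digger.
values_14bit = range(2 ** 14)            # every possible 14-bit level
used = {v << 2 for v in values_14bit}    # the 16-bit codes actually produced

print(len(used), "of", 2 ** 16, "16-bit codes used")  # 16384 of 65536
print(sorted(used)[:4])                               # [0, 4, 8, 12]
```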

But you also need to consider that the higher levels have a Gaussian distribution, with sigma being the square root of the signal. This is due to shot noise. So a level of, say, 1000 will look like a bell curve with a standard deviation of about 32 (√1000 ≈ 31.6), so a difference between, say, 1000 and 1005 will be meaningless.

Best regards
Erik

 
Hi,

I was objecting to this statement of yours:

"The RAW file of the Pentax 645Z is 14-bit (16384 levels), whereas the RAW files of the IQ250/CFV-50C are 16-bit (65536 levels)."

I was looking at this info from Phase One
So, did you manage to find where Phase One officially states 14-bit (on the current webpage or document)? :D

This is low bits from an IQ-150 image in Raw Digger:


You can see that three out of four slots are empty.

Turning 14 bits into 16 bits is not interpolation but a logical shift left by two. Interesting, though, that the file contains more than 16000 different values.

But you also need to consider that the higher levels have a Gaussian distribution, with sigma being the square root of the signal. This is due to shot noise. So a level of, say, 1000 will look like a bell curve with a standard deviation of about 32 (√1000 ≈ 31.6), so a difference between, say, 1000 and 1005 will be meaningless.

Best regards
Erik
Raw Digger cannot fully support the IIQ format. I have tested the following 3 cases on a single dark frame shot of an IQ250:

a) Open the RAW file directly with Raw Digger: there are gaps between levels;

b) Convert the RAW file into DNG with Adobe Camera Raw and then open it with Raw Digger: there are no gaps, but the standard deviation of the noise is very high (and hence less usable dynamic range);

c) Convert the RAW file into DNG with Capture One and then open it with Raw Digger: there are no gaps and the standard deviation of the noise is low (and hence more usable dynamic range).







I bet Phase One knows how to optimize and cook the RAW files better than Raw Digger, Adobe or Anders Torger does.
 