The GetDPI Photography Forum


Fuji GFX 100

SrMphoto

Well-known member
Bill Claff (PhotonsToPhotos) posted various GFX100 measurements. One of the conclusions is that 14 bits is sufficient; there seems to be no need to capture RAWs in 16-bit.

While the camera is heavier, the ergonomics seem to be better than the GFX50S's, and the EVF (a weaker point of the GFX50S, IMO) is improved.
 

hcubell

Well-known member
You talk about the optimization of the pixel aperture as if there were some global optimum that the designers are trying to find. The optimum pixel aperture depends heavily on the use case and the desires of the photographer. There isn't one such optimum. All the camera manufacturers can do is provide whatever compromise they think is acceptable for most of their customers, most of the time. Until the GFX 50x, manufacturers have recently been prioritizing quantum efficiency.

Jim
I think we are talking past each other. I asked a straightforward question: Fuji used the same 50MP Sony sensor in the GFX 50S that Phase, Hasselblad and Pentax had been using, but Fuji customized the sensor by changing the microlenses, which Fuji claimed improved performance. Fuji has apparently NOT customized the Sony sensor in the GFX 100S to achieve the same set of trade offs that it achieved with the Sony 50MP sensor. Why?
 
I think we are talking past each other. I asked a straightforward question: Fuji used the same 50MP Sony sensor in the GFX 50S that Phase, Hasselblad and Pentax had been using, but Fuji customized the sensor by changing the microlenses, which Fuji claimed improved performance. Fuji has apparently NOT customized the Sony sensor in the GFX 100S to achieve the same set of trade offs that it achieved with the Sony 50MP sensor. Why?
Maybe the 100MP sensor doesn't have the same deficiencies the custom microlenses were used to address. It's the same sensor as the IQ150. I don't think anyone is complaining about its performance. Perhaps Sony learned something with its newest iteration.
 

Steve Hendrix

Well-known member
Bill Claff (PhotonsToPhotos) posted various GFX100 measurements. One of the conclusions is that 14 bits is sufficient; there seems to be no need to capture RAWs in 16-bit.

While the camera is heavier, the ergonomics seem to be better than the GFX50S's, and the EVF (a weaker point of the GFX50S, IMO) is improved.

"14 bit is sufficient"?

Sufficient is an interesting word. How is it concluded that 16 bits is not needed? If there is absolutely zero benefit or advantage, then no one needs it. However, I doubt that is the case.


Steve Hendrix/CI
 

JimKasson

Well-known member
I think we are talking past each other. I asked a straightforward question: Fuji used the same 50MP Sony sensor in the GFX 50S that Phase, Hasselblad and Pentax had been using, but Fuji customized the sensor by changing the microlenses, which Fuji claimed improved performance. Fuji has apparently NOT customized the Sony sensor in the GFX 100S to achieve the same set of trade offs that it achieved with the Sony 50MP sensor. Why?
Did you see what I said about BSI?

Jim
 

JimKasson

Well-known member
There's a thread here that intimates that 6MP is sufficient, and the proposition has its adherents. Sufficient is a weasel word.
Any claim that a given bit depth is sufficient assumes a certain amount of noise dither. Some people say 0.6 LSB, and there is justification for that. The highest such criterion I've seen advanced is 1.6 LSB.

https://blog.kasson.com/the-last-word/dither-precision-and-image-detail/

https://blog.kasson.com/the-last-word/dither-and-image-detail-ahd/

https://blog.kasson.com/the-last-word/dither-and-image-detail-low-contrast/

https://blog.kasson.com/the-last-word/dither-and-image-detail-natural-scene/

Even with the most conservative assumptions about what dither is sufficient, a 15-bit ADC is enough for the GFX 100 if the noise that Bill is seeing is reflected in production cameras. Same is true for the IQ4 150 MP, but without the proviso about production cameras.
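
To put rough numbers on the dither criterion Jim describes, here is a minimal Python sketch (not code from the thread or from Bill Claff); the signal level, noise values, and sample count are arbitrary illustrations. It quantizes a constant signal plus Gaussian read noise and reports how much error the quantizer leaves behind at each noise level.

```python
# Minimal sketch of the noise-dither criterion discussed above (illustrative values only).
import numpy as np

rng = np.random.default_rng(0)
true_level = 100.37     # hypothetical signal that falls between ADC code values
n = 200_000             # simulated exposures per noise level

for sigma_lsb in (0.1, 0.3, 0.6, 1.0, 1.6):          # read noise in ADC LSB
    noisy = true_level + rng.normal(0.0, sigma_lsb, n)
    quantized = np.round(noisy)                       # ideal mid-tread quantizer
    bias = quantized.mean() - true_level              # residual error after averaging
    added_rms = np.sqrt(np.mean((quantized - noisy) ** 2))   # error added by quantization
    print(f"sigma {sigma_lsb:.1f} LSB: bias {bias:+.3f} LSB, quantization RMS {added_rms:.3f} LSB")
```

With only 0.1 LSB of noise the average stays biased toward the nearest code value; the bias shrinks rapidly as the noise grows and is negligible by roughly 0.6 LSB, after which quantization contributes only the usual ~0.29 LSB of quantization noise.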



Jim
 

SrMphoto

Well-known member
"14 bit is sufficient"?

Sufficient is an interesting word. How is it concluded that 16 bits is not needed? If there is absolutely zero benefit or advantage, then no one needs it. However, I doubt that is the case.


Steve Hendrix/CI
Note that the comment is specific to the GFX 100, not cameras in general. The link to the discussion is here:

https://www.dpreview.com/forums/thread/4399105

Excerpt:
"Read Noise is 5.2DN at 16-bit and 1.4DN at 14-bit.
So long as read noise in DN is above about 0.6DN you have sufficient bit depth; and anything above 1DN is clearly enough."
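
For readers following the arithmetic, the two quoted figures are consistent with each other: dropping from 16-bit to 14-bit precision makes each DN four times coarser. A trivial check, using only the numbers quoted above:

```python
# Sanity check of the quoted figures; no new measurements here.
read_noise_16bit_dn = 5.2              # quoted read noise at 16-bit precision, in DN
scale = 2 ** (16 - 14)                 # one 14-bit DN spans four 16-bit DN
read_noise_14bit_dn = read_noise_16bit_dn / scale
print(read_noise_14bit_dn)             # ~1.3 DN, close to the quoted 1.4 DN
print(read_noise_14bit_dn > 1.0)       # True: above the "clearly enough" 1 DN criterion
```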
 

Steve Hendrix

Well-known member
Note that the comment is specific to the GFX 100, not cameras in general. The link to the discussion is here:

https://www.dpreview.com/forums/thread/4399105

Excerpt:
"Read Noise is 5.2DN at 16-bit and 1.4DN at 14-bit.
So long as read noise in DN is above about 0.6DN you have sufficient bit depth; and anything above 1DN is clearly enough."

I think, whether in general or specific to the GFX 100S, I still find certain aspects of this topic interesting.

Namely, the idea that someone who would purchase a camera and lenses costing upwards of $15,000 would be interested in "sufficient" or "enough" rather than in maximizing the quality from the $15,000 camera system they purchased. Why, other than perhaps a situation where capture rate was a critically important factor, would you choose to shoot 14 bit? Regardless of what those charts show, there is going to be a difference in image quality when you have the camera set to 16 bit vs 14 bit, and in my experience, the results are going to favor the 16 bit capture.


Steve Hendrix/CI
 

SrMphoto

Well-known member
I think, whether in general or specific to the GFX 100S, I still find certain aspects of this topic interesting.

Namely, the idea that someone who would purchase a camera and lenses costing upwards of $15,000 would be interested in "sufficient" or "enough" rather than in maximizing the quality from the $15,000 camera system they purchased. Why, other than perhaps a situation where capture rate was a critically important factor, would you choose to shoot 14 bit? Regardless of what those charts show, there is going to be a difference in image quality when you have the camera set to 16 bit vs 14 bit, and in my experience, the results are going to favor the 16 bit capture.


Steve Hendrix/CI
"The proof of the pudding is in the eating."
Once the camera is in our hands, we'll see how the files behave.
For the Nikon D850 and Z 7, it is generally accepted that above ISO 400 there is no gain from shooting 14-bit vs. 12-bit raws.
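
The ISO dependence follows from the same dither criterion discussed earlier in the thread: raising ISO applies more analog gain ahead of the ADC, so read noise expressed in DN grows roughly in proportion, and once it sits well above 1 DN the extra two bits of a 14-bit file record little but noise. A sketch with made-up numbers (the 0.2 DN base-ISO figure is hypothetical, not a measured D850 or Z 7 value):

```python
# Illustrative only: the read-noise value below is invented to show the scaling argument.
base_iso = 64
read_noise_dn_at_base = 0.2           # hypothetical read noise at base ISO, in 12-bit DN

for iso in (64, 100, 200, 400, 800, 1600):
    # assume read noise in DN scales roughly with the applied analog gain
    noise_dn = read_noise_dn_at_base * (iso / base_iso)
    verdict = "12-bit is already dithered" if noise_dn >= 1.0 else "extra bits may still help"
    print(f"ISO {iso:5d}: ~{noise_dn:.2f} DN read noise -> {verdict}")
```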
 

Steve Hendrix

Well-known member
"The proof of the pudding is in the eating."
Once the camera is in our hands, we'll see how the files behave.
For the Nikon D850 and Z 7, it is generally accepted that above ISO 400 there is no gain from shooting 14-bit vs. 12-bit raws.

I'm not sure what shooting above ISO 400 has to do with what is being discussed. Also, I am sure that the 14 bit and 12 bit files shot on a Nikon - which I have never shot with - are not identical in the scenario you described. "No gain" doesn't really mean anything. Are the files identical? No. Then is there a preference for one file over another? Likely. But again, what does shooting over ISO 400 have to do with anything? We're talking about optimal quality, so let's talk about optimal quality. The best quality files you can shoot with an IQ4 150 (or an IQ3 100, for that matter) are not 14 bit files, and I expect the same thing from the GFX 100S.


Steve Hendrix/CI
 

DougDolde

Well-known member
No doubt the Phase One IQ4-150 is still king, but I have to wonder how many people will move to the Fuji. I'd certainly think those who are new to medium-format digital would choose the Fuji unless they have some need for 150 megapixels, and how many do?

For those already owning a Phase One system it gets a little more complicated, depending on what back they have and what lenses they own. Selling used gear is a losing proposition.
 

Shashin

Well-known member
Regardless of what those charts show, there is going to be a difference in image quality when you have the camera set to 16 bit vs 14 bit, and in my experience, the results are going to favor the 16 bit capture.
Even though the human visual system cannot perceive the difference? Sure, you can really blow the exposure and in extreme cases it might have a slight advantage, but I imagine if you are buying into one of these systems you would probably have the skill to make competent exposures.

Naturally, if you are talking about converting your data to ProPhotoRGB, then the 16-bit is going to help, because ProPhotoRGB, with its large gamut, throws so much data out of an image. But that is a problem with ProPhotoRGB and not the difference between 14- and 16-bit images. Except for some extreme archival situations, I would never use ProPhotoRGB.
 

Steve Hendrix

Well-known member
Even though the human visual system cannot perceive the difference? Sure, you can really blow the exposure and in extreme cases it might have a slight advantage, but I imagine if you are buying into one of these systems you would probably have the skill to make competent exposures.

Naturally, if you are talking about converting your data to ProPhotoRGB, then the 16-bit is going to help, because ProPhotoRGB, with its large gamut, throws so much data out of an image. But that is a problem with ProPhotoRGB and not the difference between 14- and 16-bit images. Except for some extreme archival situations, I would never use ProPhotoRGB.

You're saying the human eye cannot perceive the difference between 14 bit and 16 bit images out of the same device? At least in the 14 and 16 bit images I have viewed, there is quite an obvious difference. These arguments about 14 bit vs 16 bit and the laboratory measurements of such always seem to corner themselves into a chart that is measuring noise or dynamic range. No one ever talks about color or what the image actually looks like. I am mostly talking about color reproduction, color relationships, individual and global color tone, not noise or dynamic range.


Steve Hendrix/CI
 

Shashin

Well-known member
You're saying the human eye cannot perceive the difference between 14 bit and 16 bit images out of the same device? At least in the 14 and 16 bit images I have viewed, there is quite an obvious difference. These arguments about 14 bit vs 16 bit and the laboratory measurements of such always seem to corner themselves into a chart that is measuring noise or dynamic range. No one ever talks about color or what the image actually looks like. I am mostly talking about color reproduction, color relationships, individual and global color tone, not noise or dynamic range.


Steve Hendrix/CI
I am talking about human perception and color reproduction as well. The reason 8-bit is the minimum standard for photo quality is that, to the human visual system, a gradient appears stepless once it has about 200 levels. And yes, you cannot see the difference between a 14-bit and a 16-bit image out of the same device. If you can, there is something else going on (see the comment above about ProPhotoRGB, for example) or it is a case of confirmation bias; we all suffer from it.
 

JimKasson

Well-known member
I am talking about human perception and color reproduction as well. The reason 8-bit is the minimum standard for photo quality is that, to the human visual system, a gradient appears stepless once it has about 200 levels. And yes, you cannot see the difference between a 14-bit and a 16-bit image out of the same device. If you can, there is something else going on (see the comment above about ProPhotoRGB, for example) or it is a case of confirmation bias; we all suffer from it.
It is a mistake to compare the precision of encodings that use a tone curve with linear ones. 200 linearly-spaced levels is nowhere near enough for stepless gradient reproduction. The ADCs in cameras encode linearly.
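
A quick way to see this point in numbers: the sketch below (my own illustration, not code from the thread) computes how wide one dark step of an 8-bit sRGB-encoded gradient is in linear terms, and therefore how many bits a linear encoding needs just to resolve it.

```python
# How many linear bits does one dark 8-bit sRGB step correspond to?
import math

def srgb_to_linear(c):
    """Standard sRGB decoding, c in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

step = srgb_to_linear(2 / 255) - srgb_to_linear(1 / 255)   # linear width of one dark sRGB step
bits_needed = math.log2(1 / step)                          # linear bits to resolve that step
print(f"one dark 8-bit sRGB step spans {step:.6f} of full scale "
      f"-> needs about {bits_needed:.1f} linear bits")
```

Roughly 12 bits of linear precision are needed just to match the darkest steps of an 8-bit tone-curve encoding, which is why a linear camera ADC needs far more than 8 bits, and why 200 linearly spaced levels are not comparable to 200 gamma-encoded ones.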

Jim
 

Steve Hendrix

Well-known member
I am talking about human perception and color reproduction as well. The reason 8-bit is the minimum standard for photo quality is that, to the human visual system, a gradient appears stepless once it has about 200 levels. And yes, you cannot see the difference between a 14-bit and a 16-bit image out of the same device. If you can, there is something else going on (see the comment above about ProPhotoRGB, for example) or it is a case of confirmation bias; we all suffer from it.

Ok, so here is what is happening in this thread - as far as my involvement is concerned. And your statement "If you can, there is something else going on" kind of crystallized it for me. We're talking about the point of shooting in 14 bit mode or 16 bit mode and whether it is worth it or not. I am arguing that in my experience it is worth it. But I'm basing this on the result because after all, the only point to using these devices is for the result, is it not? And if something else is going on that contributes to an advantage when shooting in 16 bit mode then I'm going to shoot in 16 bit mode. That's all I'm saying.

Some are discussing the relatively narrow aspect (relative to the end result) of 14 bits of data vs 16 bits of data in and of itself, as if that alone dictated whether it was worth shooting in the 16 bit mode. "14 bits is sufficient, 14 bits is enough." When your finger presses the number that says 16, then you are choosing better. Whether it is the exclusive result of the ADC process alone, or whether that process enables other quality improvements (as I have had some engineers tell me), or whether they are adding some additional secret sauce to the 16 bit recipe, is immaterial to a photographer standing somewhere with a camera in their hand, trying to decide the best possible quality options to choose for that device in that moment.


Steve Hendrix/CI
 

dave.gt

Well-known member
Wow! You guys are way over my head.:grin::salute:

In my world, language is sometimes not sufficient to talk about what is "sufficient". That is not a pun, it is reality. For me, as irrelevant in this world as I am, I try to avoid discussions that require defining every word and explaining the context of everything about human perception and even "objective" evaluations. But, I must say that there are differences in files from the D850 vs. H5D-50c vs. Leica S vs. Phase One, and I see it all the time. I find the D850 to be frustrating as I feel it could be so much better but it falls short based on my eye and preferences. The 50c and the S are very close in rendering and providing "satisfaction" to me in personal perception as well as the experience of using them. And yes, they are three different systems with different specs.

Phase One is the pinnacle from my POV. I cannot afford it and have no need because I am not a commercial photographer. I am just a guy who uses Photography as a means of expression and sometimes to do a pro bono project.

"Sufficient" to me is different than anyone else on this forum and probably anyone else. It is curious to me why the debate is always on, I thought we passed that years ago when those who knew much more than me stated that 24 MP was all we would ever need and maybe even film would be all that we need. Clearly all of those statements were not exactly correct and even now, I do not think 150 MP or even the 14/16 bit arguments about sufficiency or desirability will stand.

So, I find my enjoyment in older systems as well as the images I see from P1 150 MP images and everything else in between. I absolutely agree with what Steve said about maximizing your results if that is a requirement/desire of the photographer, and his/her decisions make all the difference. After all, the photographer is the most important spec in my opinion. And if one makes a living with the images, competing in a tough market, then maximizing results is a huge priority... not being "good enough" which is all too common in my world.:)

That is really all I have to share and it is just my own personal viewpoint. Take it with you if you want, or better yet, take a camera and enjoy the world outside of the forum, using whatever you like. I am sure your work will be grand and I would love to see it.:thumbup:
 

JimKasson

Well-known member
We're talking about the point of shooting in 14 bit mode or 16 bit mode and whether it is worth it or not. I am arguing that in my experience it is worth it. But I'm basing this on the result because after all, the only point to using these devices is for the result, is it not? And if something else is going on that contributes to an advantage when shooting in 16 bit mode then I'm going to shoot in 16 bit mode. That's all I'm saying.
In cameras that are based on the Sony IMX461AQR, like the GFX 100, selecting 16 bit conversion slows the sensor's max frame rate to 2.7 fps. If that is adequate, I see no reason not to use the setting. I don't see how it can hurt anything. It will make the compressed files somewhat larger, since noise does not losslessly compress well, but I don't see this as a big issue. If you need a faster frame rate, I would not be at all concerned about using 14 bit precision for that. This opinion is based on reading the sensor spec sheet and performing calculations using Bill Claff's photon transfer curves for the GFX 100. When I get my copy of the camera, I'll perform tests aimed at finding out more about this point.
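
On the file-size point, here is a rough sketch (synthetic data, not actual GFX raw files or Fuji's compression) showing why noisier data compresses less under a lossless compressor: random low-order bits carry no redundancy for the compressor to exploit.

```python
# Lossless compression ratio of a smooth ramp vs. the same ramp with added noise.
import zlib
import numpy as np

rng = np.random.default_rng(0)
ramp = np.linspace(0, 60_000, 1_000_000)                 # smooth, highly compressible "scene"

for sigma in (0, 4, 16, 64):                             # synthetic noise, in 16-bit DN
    data = (ramp + rng.normal(0, sigma, ramp.size)).clip(0, 65535).astype(np.uint16)
    ratio = data.nbytes / len(zlib.compress(data.tobytes(), 6))
    print(f"noise sigma {sigma:3d} DN -> lossless compression ratio ~{ratio:.1f}x")
```

The exact ratios depend on the scene and the compressor, but the trend is the point: the noisier the data, the larger the losslessly compressed file.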

Jim
 