The GetDPI Photography Forum


Fuji GFX 100

JimKasson

Well-known member
Yes, but you were addressing the issue of "CFA crosstalk." I haven't a clue as to how that's relevant to Fuji's claim that the smaller microlenses it used in its customized 50MP sensor for the GFX made the camera "sharper" than other cameras with the same MP count. https://fujifilm-x.com/de-de/stories/gfx-technologies-1/
I provided a link to Brandon Dube's simulation results that showed exactly how the smaller microlenses made the images from the camera sharper by reducing the sampling aperture. The CFA crosstalk is another source of sharpness loss.

Jim
 
This opinion is based on reading the sensor spec sheet and performing calculations using Bill Claff's photon transfer curves for the GFX 100. When I get my copy of the camera, I'll perform tests aimed at finding out more about this point.
Who knew that photographers would need to refer to photon transfer curves?
 

hcubell

Well-known member
I provided a link to Brandon Dube's simulation results that showed exactly how the smaller microlenses made the images from the camera sharper by reducing the sampling aperture. The CFA crosstalk is another source of sharpness loss.

Jim
I will take your word for what Mr. Dube's simulation demonstrates. I didn't understand any of it. I am sure he was trying to be helpful.
 

JimKasson

Well-known member
Who knew that photographers would need to refer to photon transfer curves?
Different axes, but not all that different in concept from the H&D curves that photographers have been using since the 19th century:

https://en.wikipedia.org/wiki/Sensitometry

[Added 2 hours later:

Here's something that will elucidate the similarities. Here's a quote from Hurter and Driffield: "In a theoretically perfect negative, the amounts of silver deposited in the various parts are proportional to the logarithms of the intensities of light proceeding from the corresponding parts of the object."

Here's something parallel about a PTC: "In a theoretically perfect digital capture, the rms noise in the various parts is proportional to the standard deviation of the Poisson distribution of the mean value of the electrons freed by light proceeding from the corresponding parts of the object."]
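A minimal numerical sketch of that parallel (the electron counts are illustrative, not GFX measurements): for a Poisson process, the rms noise tracks the square root of the mean count, which is the shot-noise region of a photon transfer curve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mean electron counts spanning several stops of exposure.
for mean_electrons in (10, 100, 1_000, 10_000):
    # Photon arrivals at many identical pixels are Poisson-distributed,
    # so the rms (shot) noise equals the square root of the mean count.
    samples = rng.poisson(mean_electrons, size=200_000)
    print(mean_electrons, samples.std(), mean_electrons ** 0.5)
```

Plotting rms noise against mean signal on log-log axes gives the familiar half-slope PTC line.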

Jim
 

JimKasson

Well-known member
I will take your word for what Mr. Dube's simulation demonstrates. I didn't understand any of it. I am sure he was trying to be helpful.
I put it in words earlier: "If the sampling aperture is reduced, the modulation transfer function at high spatial frequencies is increased. This means the images are sharper."

Jim
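For what it's worth, the standard model here is that a square sampling aperture of width w has MTF |sinc(f·w)|, so shrinking the aperture raises the MTF at every nonzero spatial frequency. A quick sketch (the aperture widths are illustrative, not Fuji's actual microlens geometry):

```python
import numpy as np

def aperture_mtf(f, width):
    # MTF of a square sampling aperture: |sinc(f * width)|, with f in
    # cycles/mm and width in mm. np.sinc(x) is sin(pi*x)/(pi*x).
    return np.abs(np.sinc(f * width))

f = np.linspace(0.0, 80.0, 9)        # spatial frequencies, cycles/mm
full = aperture_mtf(f, 0.0053)       # aperture spanning a full 5.3 um pitch
small = aperture_mtf(f, 0.0040)      # hypothetical smaller microlens aperture

# The smaller aperture preserves more contrast at every nonzero frequency.
assert np.all(small[1:] > full[1:])
```

The trade-off, of course, is that a smaller aperture collects less light per pixel.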
 

Steve Hendrix

Well-known member
In cameras that are based on the Sony IMX461AQR, like the GFX 100, selecting 16 bit conversion slows the sensor's max frame rate to 2.7 fps. If that is adequate, I see no reason not to use the setting. I don't see how it can hurt anything. It will make the compressed files somewhat larger, since noise does not losslessly compress well, but I don't see this as a big issue. If you need a faster frame rate, I would not be at all concerned about using 14 bit precision for that. This opinion is based on reading the sensor spec sheet and performing calculations using Bill Claff's photon transfer curves for the GFX 100. When I get my copy of the camera, I'll perform tests aimed at finding out more about this point.

Jim

Yes, this is typically the case. If you need the speed, you need the speed. In that case, while there is a quality advantage in the result of shooting 16 bit - see, I am quantifying here ... - the 14 bit result will have to do, and from what I've seen, that quality difference is subtle. If you don't need the speed, then again, to my earlier point, I can find very few reasons not to shoot 16 bit.


Steve Hendrix/CI
 

Abstraction

Well-known member
Yes, this is typically the case. If you need the speed, you need the speed. In that case, while there is a quality advantage in the result of shooting 16 bit - see, I am quantifying here ... - the 14 bit result will have to do, and from what I've seen, that quality difference is subtle. If you don't need the speed, then again, to my earlier point, I can find very few reasons not to shoot 16 bit.


Steve Hendrix/CI
Steve,

Would you mind posting TIFF files of 16 vs 14 bit images? I think it would make it easier for everyone to grasp the differences in the files and make the discussion more focused.
 

Steve Hendrix

Well-known member
Steve,

Would you mind posting TIFF files of 16 vs 14 bit images? I think it would make it easier for everyone to grasp the differences in the files and make the discussion more focused.

Well, I have done so in the past.

When I get a chance, I'll shoot some things and post some images.

There is a difference, and it mostly has to do with color. Why this is I don't really know (or care that much), but I do know that the visible difference on the plus side of quality comes when the bit mode setting is set to 16.

I just feel like there is - an admittedly impressive - effort to deconstruct how some of the internal processes are shaking out, but these findings seem to occur within their own vacuum. I am focused on the end result. That is what the product was made for: to create a photograph at some level of quality. The end result should reflect the capabilities of the product and the choices you've made in using it. And when you press 16 you get a better result. At least with Phase One digital backs (let's face it, there are very few capture products that allow you to choose a 16 bit mode vs a 14 bit mode), when you do so, the image quality is superior. Not dramatically to most, but for sure it is superior in some way.

It is my belief that the engineers who design and create these products know a whole lot more about what they are doing and, most importantly, why they are doing it, than anyone else who is measuring certain things after the fact. And I guess I also find it sort of insulting that their whole process is called into question in such a limited way when you really don't have the whole story. There is almost this implication of cheating, or worse, marketing, to make something seem more than it is (not that this doesn't happen, but that's a different department). Phase One engineers did not sit there and, perhaps at the urging of marketing, just make up something called a 16Bit EX mode that has absolutely no advantage over a 14 bit capture or even a normal 16 bit capture, and I don't think that Fuji did either. There are detectives here, and some very good detectives here, but when the results contradict your conclusions, then you're missing something.


Steve Hendrix/CI
 

Shashin

Well-known member
It is a mistake to compare the precision of encodings that use a tone curve with linear ones. 200 linearly-spaced levels is nowhere near enough for stepless gradient reproduction. The ADCs in cameras encode linearly.

Jim
The discussion is about the perception of the final image. The human visual system cannot distinguish between a 14-bit image and a 16-bit one. There may be other technical advantages, but simply claiming a more pleasing image is not one of them.
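For reference, the quoted point about tone curves versus linear encoding can be illustrated by counting how many of 256 code values land below middle gray under each scheme (a minimal sketch; the gamma value and middle-gray level are the usual textbook numbers):

```python
# Count how many of N code values fall below middle gray (18% luminance)
# under linear encoding vs. a gamma 2.2 tone curve.
N = 256
GAMMA = 2.2
MID_GRAY = 0.18

linear_codes = sum(1 for v in range(N) if v / (N - 1) < MID_GRAY)
gamma_codes = sum(1 for v in range(N) if (v / (N - 1)) ** GAMMA < MID_GRAY)

print(linear_codes)  # 46: few codes for the entire lower half of the tone scale
print(gamma_codes)   # 117: the tone curve spends far more codes in the shadows
```

This is why a couple hundred tone-curve-encoded levels can look smooth while the same number of linearly spaced levels bands visibly in the shadows.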
 

JimKasson

Well-known member
The discussion is about the perception of the final image. The human visual system cannot distinguish between a 14-bit image and a 16-bit one. There may be other technical advantages, but simply claiming a more pleasing image is not one of them.
There is a difference in the precision required for scene-referred captures and output-referred files after editing. Some editing operations can make things not visible in the input files visible in the output files. Capture noise has the property of reducing the visibility of artifacts related to inadequate capture precision, just as noise is sometimes introduced in the editing process to reduce the visibility of artifacts created during editing.

Jim
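That masking effect is easy to demonstrate: run a clean ramp and a noisy ramp through the same coarse quantizer and compare the errors. A minimal sketch (the 4-bit depth and noise level are exaggerated for visibility):

```python
import numpy as np

rng = np.random.default_rng(1)
ramp = np.linspace(0.0, 1.0, 100_000)   # smooth scene gradient

def quantize(x, bits):
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

# A clean ramp quantizes into flat bands: large, structured error.
banded = quantize(ramp, 4)

# Noise added before quantization decorrelates that error into grain,
# so local averages track the true ramp much more closely.
dithered = quantize(ramp + rng.normal(0.0, 0.03, ramp.size), 4)

band_err = np.abs(banded - ramp).max()
avg_err = np.abs(dithered.reshape(-1, 100).mean(axis=1)
                 - ramp.reshape(-1, 100).mean(axis=1)).max()
assert avg_err < band_err   # the noise trades banding for benign grain
```

The same principle is behind adding dither before a bit-depth reduction in an editor.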
 

Shashin

Well-known member
It is my belief that the engineers who design and create these products know a whole lot more about what they are doing and, most importantly, why they are doing it, than anyone else who is measuring certain things after the fact. And I guess I also find it sort of insulting that their whole process is called into question in such a limited way when you really don't have the whole story. There is almost this implication of cheating, or worse, marketing, to make something seem more than it is (not that this doesn't happen, but that's a different department). Phase One engineers did not sit there and, perhaps at the urging of marketing, just make up something called a 16Bit EX mode that has absolutely no advantage over a 14 bit capture or even a normal 16 bit capture, and I don't think that Fuji did either. There are detectives here, and some very good detectives here, but when the results contradict your conclusions, then you're missing something.
There is no doubt that a 16-bit image has advantages over a 14-bit one. But what exactly is it? Car companies make supercars that cost millions of dollars and are amazing machines. But they still will not get me to work any faster than my Toyota--there are practical limits to speed. I can also get the fastest computer available, but it will not change the efficiency of my email correspondence. And as you said, it is complicated. Engineers tend to focus on performance in an absolute sense without considering the actual implications for the final result (I know, I worked with the engineers at a camera company). And seriously, how many here will be able to take advantage of a 100MP sensor? Even with a 42" printer, these images will be idling.

Yup, you can show all kinds of advantages with quantitative data. The problem is images are finally filtered through the human visual system. That is a similar problem of putting your Supercar on a crowded road with a posted speed limit of 35mph. (Or rounding your tax returns to four decimal places.)

Let's not forget, engineers are incentivised to produce better products. If any one of them comes out and simply states that the current products are fine and we can stop development, they might be out of a job.

I am all for developing photographic tools, but I also think photographers need to understand what those developments mean in real terms.
 

Shashin

Well-known member
There is a difference in the precision required in for scene-referred captures and output-referred files after editing. Some editing operations can make things not visible in the input files visible in the output files. Capture noise has the property of reducing the visibility of artifacts related to inadequate capture precision, just as noise is sometimes introduced in the editing process to reduce the visibility of artifacts created during editing.

Jim
Jim, I really don't want to repeat what I have already posted, but I have already clarified that. Likewise, there is only so much your ADC can do for photographers who can't expose properly. Getting into a lack of skill will make this a bottomless pit that still will not show that 16-bit has significant advantages over 14-bit.
 

Steve Hendrix

Well-known member
I see. I'm in the same boat with the GFX.

Is there a GFX 100S? I thought it was just the GFX 100?

Jim

You know, thank you for pointing that out. I feel like I saw it referred to as GFX 100S somewhere on our price sheet, but now that I look back, it just says GFX 100. Maybe I saw it somewhere else and it stuck in my noggin.


Steve Hendrix/CI
 

Leigh

New member
The difference between 14 bits and 16 bits is entirely in the shadow detail.
You won't see any difference in the midtones and highlights.

- Leigh
 

JimKasson

Well-known member
You know, thank you for pointing that out. I feel like I saw it referred to as GFX 100S somewhere on our price sheet, but now that I look back, it just says GFX 100. Maybe I saw it somewhere else and it stuck in my noggin.
Before the camera was introduced, that used to be a common way to refer to it, just like X2D is used today. When and if Hassey announces the actual X-series IMX461 camera, they may not call it the X2D, and we'll have to recalibrate our shorthand.

Jim
 

Boinger

Active member
This whole issue with bits and gradations is a very easy experiment to conduct for the human visual system.

I did something like this when I was selecting monitors.

Make a photoshop file in a rectangular format that is 256 pixels long.

Create a black to white gradient with the start and end points at each end of the rectangle.

You will get one linear pixel per step; see if your eyes can pick the steps apart or whether you get banding.

I have tried this and I could see banding in 256 steps.

This can be replicated using different sizes for the rectangle and choosing how many steps you will see.

For your reference:

8 bits: 256 steps
10 bits: 1024 steps
12 bits: 4096 steps
14 bits: 16384 steps
16 bits: 65536 steps


You can use whatever value for your respective gradient to test your own visual system.

Another interesting note I found is that Photoshop limits its 16 bit values to 0-32,768.

So in Photoshop you are essentially limited to a 15 bit file.
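The test described above can be sketched with numpy instead of Photoshop (a minimal version; saving or displaying the image is left out):

```python
import numpy as np

# An 8-bit black-to-white ramp: 256 steps, one pixel column per code value.
height, width = 64, 256
ramp = np.tile(np.arange(width, dtype=np.uint8), (height, 1))

# Adjacent columns differ by exactly one 8-bit code value (1/255 of full
# scale), the worst case for spotting banding on an 8-bit display.
assert ramp.shape == (height, width)
assert len(np.unique(ramp)) == 256

# Steps available at each bit depth, for reference:
for bits in (8, 10, 12, 14, 16):
    print(bits, 2 ** bits)
```

Widening each step to several pixel columns makes any banding easier to see.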
 

onasj

Active member
Not to perpetuate this aside from the topic of this thread, but 16 bits vs 14 bits could be helpful when purposefully underexposing a scene with a large dynamic range, then pulling the shadows in post by several stops. I routinely shoot trees this way, and recently compared a 12- vs. 14-bit image of the same scene post-processed this way; the 14-bit image gave much better results, pulling out detail in the shaded trunk without creating a sea of noise.

Whole scene (14-bit):
Tree of Destiny 6-tiny.jpg

Trunk crop at 100% (post-processed from 14-bit):
Screen Shot 2019-06-08 at 3.22.52 PM.jpg

Trunk crop at 100% (post-processed from 12-bit):
Screen Shot 2019-06-08 at 3.23.47 PM.jpg

(No, that's not moss that only appears in a 12-bit workflow... it's noise which I tried (and failed) to post-process to look more moss-y :p)

Admittedly, this isn't a perfect example for the 14 vs 16-bit argument because it's 12 vs. 14, and because I post-processed each image by hand independently so the process wasn't identical. And the difference is magnified by the fact that the trunk was SO underexposed that there were probably only a handful of bits of value depth within the trunk. But it's a vivid demonstration to me of how a couple extra bits of shadow values can make a difference in the final photo.
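That last point can be simulated: quantize a deep-shadow gradient at 12 and 14 bits, push it six stops, and count the distinct tones that survive. A minimal sketch (the scene values are illustrative, not data from these raw files):

```python
import numpy as np

def lift_shadows(scene, bits, stops):
    # Quantize a linear capture to the given bit depth, then multiply by
    # 2**stops, as an editor does when pulling shadows up.
    levels = 2 ** bits - 1
    quantized = np.round(scene * levels) / levels
    return np.clip(quantized * 2 ** stops, 0.0, 1.0)

# A deep-shadow gradient occupying the bottom 1/64 of full scale,
# i.e. roughly six stops down from clipping.
shadow = np.linspace(0.0, 1.0 / 64.0, 10_000)

tones_12 = len(np.unique(lift_shadows(shadow, 12, 6)))
tones_14 = len(np.unique(lift_shadows(shadow, 14, 6)))

print(tones_12, tones_14)  # 65 vs 257: four times the tonal resolution at 14 bits
```

Real files add noise on top of this, but the quantization arithmetic alone shows why a couple of extra bits matter in heavily pushed shadows.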

Now I propose we return this thread to discussion of the GFX100 :)
 