The GetDPI Photography Forum


Fuji GFX 100

Are photon transfer curves available for the major cameras so you can compare them? I've never seen one referred to, but then I've never gone looking for them.
 

JimKasson

Well-known member
Whether the human visual system can see individual gradations is a very easy experiment to conduct.

I did something like this when I was selecting monitors.

Make a Photoshop file in a rectangular format that is 256 pixels long.

Create a black-to-white gradient with the start and end points at each end of the rectangle.

You will get one step per linear pixel; see whether your eyes can pick the steps apart or whether you see banding.

I have tried this, and I could see banding with 256 steps.

<snip>
The results of this experiment depend on the brightness of the image. As the image gets brighter, the number of steps that can be seen as discrete increases. It is also affected by the tone curve of the RGB color space in which the experiment is conducted. If you simulate photon noise, the number of steps decreases. The color of the ramp affects the results.

Of course, it also depends on the precision of the signal sent to the monitor, and on whether monitor calibration is performed at the input precision. It is best to do this test with a 10-bit monitor and a 10-bit data stream. If the program putting up the image performs dithering, that will invalidate the results.
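If you want to reproduce the ramp experiment outside Photoshop, here is a minimal sketch (assuming numpy and Pillow are available; the output is an 8-bit grayscale file, so the caveats above about the display path still apply):

```python
import numpy as np
from PIL import Image

# Build a horizontal black-to-white ramp: one gray level per column,
# tiled vertically so the strip is tall enough to judge banding by eye.
ramp = np.arange(256, dtype=np.uint8)   # 0..255, one step per linear pixel
strip = np.tile(ramp, (64, 1))          # 64 rows x 256 columns
Image.fromarray(strip, mode="L").save("ramp_256.png")
```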

I once saw an SPIE paper presented which found that, with sRGB at a 100 cd/m² white point, the typical number of steps (linear in the gamma-corrected space) necessary for no visible posterization, averaged over hired college-undergraduate subjects, was 400. I looked for it a year or two ago and couldn't find it.


Another interesting note I found: Photoshop limits its 16-bit values to 0-32,768.

So in Photoshop you are essentially limited to a 15-bit file.
That's right. It's basically a 15-bit unsigned integer plus one extra state. They did that to get a convenient middle value.
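A quick sketch of why 0-32,768 gives a convenient middle value while a full 16-bit range does not:

```python
# Photoshop's "16-bit" mode actually uses the values 0..32768 (32769 states).
# Unlike a full 0..65535 range, that range has an exact integer midpoint:
lo, hi = 0, 32768
print((lo + hi) // 2)    # 16384 -- exactly 50% gray
print((0 + 65535) / 2)   # 32767.5 -- no exact middle value at full 16 bits
```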

Jim
 
Check out Bill Claff’s meticulous work:

Photons to Photos
Makes my head hurt. I am asking myself whether it's worth investing the time to understand it. Am I going to use the information when deciding which new camera to purchase? For example, is lens selection more important, or will the photon transfer function trump it? Or is the photon transfer function more fun facts to know and tell? I guess you can't have too much information.
 

Shashin

Well-known member
Makes my head hurt. I am asking myself whether it's worth investing the time to understand it. Am I going to use the information when deciding which new camera to purchase? For example, is lens selection more important, or will the photon transfer function trump it? Or is the photon transfer function more fun facts to know and tell?
This has nothing to do with the optics. Basically, there is no point in understanding it, except maybe for conversations like this. Most of what it says can be described in easier ways. It will not help your workflow; more to the point, you can intuitively come to the same results without that detail. For those who like the technical aspect of digital imaging, it is interesting. For the type of photography people do here, it is not helpful to most.

Bottom line? Learn to expose for your scene and camera, and know how to process that image for a pleasing result.
 

JimKasson

Well-known member
Makes my head hurt. I am asking myself whether it's worth investing the time to understand it. Am I going to use the information when deciding which new camera to purchase? For example, is lens selection more important, or will the photon transfer function trump it? Or is the photon transfer function more fun facts to know and tell? I guess you can't have too much information.
Most cameras are good enough these days that DR is not the long pole in the tent when it comes to picking a camera. In the film days you could pick films and developers without knowing your posterior from your elbow about H&D curves, color shift with exposure, or the Zone System. The folks who understood that stuff thought it made them better photographers, but there were plenty of top-notch photographers who didn't understand it, didn't care, didn't process their own work, and were very successful.

But you're here on the MF forum, so by your presence you've indicated that you're not totally satisfied with what you can get from FF cameras, and are at least interested in spending a lot of dough and time to move beyond that. So it may be worthwhile learning the ins and outs of digital photography at a technical level. Just like with the Zone System, there are those who get obsessed with the technical part of the process, and for them the knowledge may not be a good thing. But if you use it to make better images, it is.

Jim
 

JeRuFo

Active member
The number of steps needed for a gradation to appear smooth no longer relates directly to the number of steps the sensor has to capture, because almost every modern sensor can capture more than enough. It's about how much you can distort the gradations in post while still maintaining a smooth appearance. The difference between 240 and 480 fps in video playback is also virtually imperceptible, but it can still be beneficial to shoot that way, even if you only ever use the footage in a 25 fps video: if you need to slow it down in post, the difference can become quite obvious. It's the same with photos; if you need the ability to stretch out small portions of a gradation, you need more steps in your raw file.
One reason film stayed popular so long among landscape photographers is that it has no fixed set of quantization steps, so even if you shot a forest scene containing only greens you could still get differentiation, whereas sensors often had trouble.
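A toy sketch of that point, using pure quantization and ignoring noise and demosaicing: stretch the bottom 5% of a tonal ramp to full scale and count how many distinct levels survive at each capture depth.

```python
import numpy as np

def levels_after_stretch(bits, slice_top=0.05, samples=200_000):
    """Quantize a full-range ramp to `bits`, keep only the bottom
    `slice_top` fraction of the tonal range, stretch that sliver to
    full scale, and count the distinct levels that survive."""
    steps = 2 ** bits - 1
    ramp = np.linspace(0.0, 1.0, samples)
    quantized = np.round(ramp * steps) / steps    # capture quantization
    sliver = quantized[quantized <= slice_top]    # a deep-shadow slice
    stretched = sliver / slice_top                # aggressive curve in post
    return np.unique(stretched).size

for bits in (8, 12, 14, 16):
    print(f"{bits}-bit capture -> {levels_after_stretch(bits)} levels after stretch")
```

At 8 bits the stretched sliver retains only about 13 levels (visible posterization); each extra 2 bits of capture multiplies the surviving levels by four.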
 

Wayne Fox

Workshop Member
The discussion is about the perception of the final image. The human visual system cannot distinguish between a 14-bit image and a 16-bit one.
There are a lot of steps from that 14- or 16-bit capture to an image on a piece of paper, and a lot of very complex calculations and manipulations going on.

Is the data "different" in a 16- vs a 14-bit capture? If it's different, isn't it possible that all of what happens between the capture and the output could make a difference in color accuracy or other factors?

After all, we're judging this by looking at a print that isn't printed in 14-bit or 16-bit but is a demosaiced, contrived interpretation of the original data, reconstructed and manipulated to achieve the visual image we expect. Shooting in 16-bit might even be considered a type of oversampling, something often used to get more accurate results in digital-to-analog systems.

I might agree that differences in a simple gradient on a print derived from a 14-bit vs a 16-bit image might not be resolvable by the human eye, but I think that's an oversimplification of what's going on with a digital sensor and system.
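The oversampling analogy can be illustrated with a toy example (not a model of any camera's actual pipeline): with noise acting as dither, averaging many coarsely quantized reads recovers precision finer than one quantization step.

```python
import numpy as np

rng = np.random.default_rng(0)

# A constant signal of 10.3 LSB, read through a coarse 1-LSB quantizer.
# With noise present (dither), averaging many coarse reads recovers the
# value to a precision finer than one quantization step.
true_value = 10.3
reads = np.round(true_value + rng.normal(0.0, 1.0, 10_000))
print(reads.mean())   # ~10.3, despite every single read being an integer
```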
 

ErikKaffehr

Well-known member
Thanks for the kind words. It's quite freeing not to feel I have to a) read every post on that forum, b) correct statements there that are objectively, verifiably wrong, and c) be drawn into what seem like endless, circular, acrid, logic-free, no-testing-allowed, tribal discussions.



View attachment 142002

Jim
Hi Jim,

Sorry for that, I know what you mean. Your knowledge will be missed.

Best regards
Erik
 

ErikKaffehr

Well-known member
Makes my head hurt. I am asking myself whether it worth investing the time to understand it? Am I going to use the information in deciding which new camera to purchase? For example, is lens selection more important, or will photon transfer function trump it? Or is photon transfer function more fun facts to know and tell? I guess you can't have too much information.
Hi,

I would suggest that you just check the Photographic DR curve. Keep in mind, though, that the ISO axis is not calibrated; there is a lot of liberty in ISO settings.

The left side of the curve is DR at base ISO. It says how many stops below clipping there will be acceptable detail. That number is normalised, so it can be compared across different pixel counts and sensor sizes.

Next, you can see if there is a sudden increase in DR between two ISO steps. That indicates an analogue gain increase (good) or noise reduction.

On the Sony A7rII, the camera switches to high analogue gain at ISO 640. That means you will get somewhat less shadow noise at ISO 640 than at ISO 400.

You can also look at the shadow-noise improvement curves. They tell you how much noise improves with increasing EV.

Modern MF cameras use Sony CMOS sensors and are somewhat similar, but the GFX 100 has dual conversion gain while the GFX 50 does not.

With increasing ISO you normally lose DR. Bill Claff's curves help in finding a good exposure strategy.

Just as an example, assume you are shooting handheld at an exhibition in a pretty dark church. The church has mosaic windows and you want to keep some detail in them.

On the Sony A7rII you can set ISO 640 and underexpose two stops. That gives you an effective ISO of 2560 but keeps much of the DR, thus protecting highlight detail.

The GFX 100 also has dual conversion gain, but it seems to kick in earlier. It may be that the GFX 100 handles high ISOs differently from the GFX 50.
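The arithmetic behind that example, as a minimal sketch:

```python
# Stops are powers of two: underexposing by N stops at a given ISO and
# pushing in post behaves like shooting at ISO * 2**N.
base_iso = 640            # ISO where the A7rII switches to high analogue gain
underexpose_stops = 2     # deliberate underexposure to protect the windows

effective_iso = base_iso * 2 ** underexpose_stops
print(effective_iso)      # 2560 -- push the raw file 2 stops in post
```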
Best regards
Erik
 

ErikKaffehr

Well-known member
Hi,

In essence, the number of usable bits is given by the engineering DR: if per-pixel engineering DR is, say, 14 EV, you need 14 bits to represent it. Prior to this year, the only MFD sensor delivering more than 14 bits of DR was the 100 MP 54x41 mm sensor introduced with the IQ3 100MP. Before that, all Phase One backs used a fake 16-bit file format, actually using 14 bits and blowing the data up to 16 bits when opening the file.
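A sketch of that relationship, using hypothetical sensor numbers rather than measured values for any particular camera:

```python
import math

# Engineering DR in stops is log2(full-well capacity / read noise); the
# bit count needed to encode it matches. The numbers below are invented.
full_well_electrons = 50_000
read_noise_electrons = 3.1

dr_stops = math.log2(full_well_electrons / read_noise_electrons)
print(f"~{dr_stops:.1f} stops of engineering DR -> ~{round(dr_stops)} bits")
```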

With the IQ3 100MP, Phase One introduced a true 16-bit file format.

16 bits may be useful on the GFX 100, which has a 16-bit readout mode, though it slows the camera down, to 2 FPS if I recall correctly.

Best regards
Erik


Not to perpetuate this aside from the title of this thread, but 16 bits vs 14 bits could be helpful when purposefully underexposing a scene with a large dynamic range and then pulling the shadows in post by several stops. I routinely shoot trees this way, and recently compared 12- vs 14-bit images of the same scene post-processed this way; the 14-bit image gave much better results, pulling detail out of the shaded trunk without creating a sea of noise.

Whole scene (14-bit):
View attachment 142216

Trunk crop at 100% (post-processed from 14-bit):
View attachment 142218

Trunk crop at 100% (post-processed from 12-bit):
View attachment 142217

(No, that's not moss that only appears in a 12-bit workflow... it's noise which I tried (and failed) to post-process to look more moss-y :p)

Admittedly, this isn't a perfect example for the 14 vs 16-bit argument, because it's 12 vs 14, and because I post-processed each image by hand independently, so the process wasn't identical. And the difference is magnified by the fact that the trunk was SO underexposed that there were probably only a handful of bits of value depth within the trunk. But it's a vivid demonstration to me of how a couple of extra bits of shadow values can make a difference in the final photo.
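The "handful of bits" point can be put in rough numbers. A sketch of the arithmetic, assuming a linear raw encoding and an invented figure of a trunk sitting 8 stops below clipping:

```python
# For a linear raw encoding, a shadow region whose brightest tone sits
# `stops_down` stops below clipping has at most 2**(bits - stops_down)
# raw levels to describe everything in it. Illustrative arithmetic only.
def shadow_levels(bits, stops_down):
    return 2 ** (bits - stops_down)

for bits in (12, 14, 16):
    print(f"{bits}-bit raw, trunk 8 stops down: {shadow_levels(bits, 8)} levels")
```

That is 16 levels at 12 bits versus 64 at 14 bits for the whole trunk, which is consistent with the noise-vs-detail difference in the crops above.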

Now I propose we return this thread to discussion of the GFX100 :)
 

Shashin

Well-known member
I might agree that differences in a simple gradient on a print derived from a 14-bit vs a 16-bit image might not be resolvable by the human eye, but I think that's an oversimplification of what's going on with a digital sensor and system.
I could not agree more. Oddly enough, that was my main critique: simply stating that shooting 16-bit will give a perceptually different result than a 14-bit file is itself an oversimplification. I certainly would not suggest not using technology for its benefits. I also think these improvements are having diminishing returns and that their significance is not being understood. If someone asks me whether buying a 16-bit camera will improve the quality of their images over the 14-bit camera they have now, I could not state that it would.

I get that photographers can be a little OCD and that FOMO is a real problem. But I could set up a randomized blind test that would show that many of these factors are not in fact perceptible in images. While technical issues are easy to talk about--just pick the biggest number to get the better result--we never really discuss the cognitive aspects of the perception of what we make.
 

richardman

Well-known member
Not to... well, whatever, I love the Portland tree! So @onasj, this is Hasselblad SWC Ektar 100 (I think)



Clearly moss :). Full view



Flextight 848 scan. We now return you to the scheduled GFX100 discussion.
 

dougpeterson

Workshop Member
Not to... well, whatever, I love the Portland tree! So @onasj, this is Hasselblad SWC Ektar 100 (I think)



Clearly moss :). Full view



Flextight 848 scan. We now return you to the scheduled GFX100 discussion.
Even better: if you ever have the chance to rescan this on a modern system (e.g. the DT Film Scanning Kit), I think you'll be shocked at how much more there was on the film than a Flextight or drum scanner was able to extract, especially when it comes to deeply saturated colors in a shadow area. My appreciation for film, and for the lab coats who engineered film emulsions, has definitely increased over the past few years of doing R+D on modern approaches to film scanning.
 

Steve Hendrix

Well-known member
There is no doubt that a 16-bit image has advantages over a 14-bit one. But what exactly are they? Car companies make supercars that cost millions of dollars and are amazing machines, but they still will not get me to work any faster than my Toyota--there are practical limits to speed. I can also get the fastest computer available, but it will not change the efficiency of my email correspondence. And as you said, it is complicated. Engineers tend to focus on performance in an absolute sense without considering the actual implications for the final result (I know, I worked with the engineers at a camera company). And seriously, how many here will be able to take advantage of a 100 MP sensor? Even with a 42" printer, these images will be idling.

Yup, you can show all kinds of advantages with quantitative data. The problem is that images are ultimately filtered through the human visual system. That is similar to putting your supercar on a crowded road with a posted speed limit of 35 mph. (Or rounding your tax returns to four decimal places.)

Let's not forget, engineers are incentivised to produce better products. If any one of them came out and simply stated that current products are fine and we can stop development, they might be out of a job.

I am all for developing photographic tools, but I also think photographers need to understand what those developments mean in real terms.

Yes, and it is important to note that engineers are not marketers, and while they may do a fantastic job of extracting maximum quality, marketing does not always do a great job of showing that quality to potential purchasers (which is its job). Ultimately, this is likely because the differences can often be subtle, and subtle is not an easy sell.

Between the quantitative data and what the human eye finally sees, a lot happens to present that data visually, and it takes numerous forms: ink on paper (and the myriad options and variations that entails), pixels on a display, and so on. If the entirety of superior image quality (noise, dynamic range, color reproduction, rendering, etc.) can be seen at the end-result stage, then that is what matters. I'm saying that it may be too subtle for marketing to get interested in telling that difficult story.

Yes, engineers must continue to develop. The key is how much extra quality they can extract from a 16-bit image vs a 14-bit image from the same sensor that will matter to the end user. The fact that the difference is subtle says they have done their job well: the device is high quality even at 14 bits. But at the very high end of imaging, those small differences can and should matter. In any event, given a choice, I'll choose the higher-quality image, even if it is just a teensie-weensie bit better. I'm not interested in shooting at a faster capture rate (some may be, in certain situations) or worried about optimizing the amount of hard-drive space I have.

The nice thing is that anyone who has a device capable of generating both 14-bit and 16-bit files can view the results themselves and make a choice (though that does not guarantee they will see or appreciate the differences).


Steve Hendrix
 

richardman

Well-known member
Even better, if you ever have the chance to rescan this on a modern system (e.g. DT Film Scanning Kit) I think you'll be shocked how much more there was on the film than a Flextight or drum scanner was able to extract, especially when it comes to deeply saturated colors in a shadow area. My appreciation for film and the lab coats that engineered film emulsions has definitely increased over the past few years of doing R+D on modern approaches to film scanning.
I believe it! I will save up $ for a test scan!
 

ErikKaffehr

Well-known member
Very excited about the GFX100S, but I don't think this is the camera for me. Hopefully, I will manage to bide my time here and see what Fuji (and Hasselblad) does next. I'd be much more interested in a GFX50S Mark 2 right now.
Hi,

I think that camera is called the GFX 100.

What improvements would you expect from the GFX 50S Mark 2?

Best regards
Erik
 

MrSmith

Member
Has nobody mentioned the new Alpa XO?

Is that because it's just a cage and 15 mm rod system, similar to those made by numerous other manufacturers, only not at Alpa pricing?

I don't think any other manufacturer has dipped a toe in the water with a similar product for the GFX yet, though; that will happen if the GFX gets traction in the film world.
 