Since the number of coordinates in a color space is fixed (at a given bit depth), covering a larger gamut means a larger distance between adjacent coordinates. Colors that fall between those coordinates have to be binned up or down.
Now, if I have an image with a small gamut, the number of discrete colors that can be represented depends on how many coordinates of the color space fall within that gamut. The larger the color space, the fewer coordinates land there. If you have an image of a brick wall, ProPhotoRGB will bin the colors in that wall into fewer bins than AdobeRGB will.
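The binning argument can be sketched numerically. This is a toy, single-channel model of my own (not real colorimetry): the same 8-bit code budget spread over a wider range gives coarser steps, so a narrow band of "brick" values collapses into fewer distinct codes. The sample values and gamut widths are made up for illustration.

```python
# Toy model: fixed bit depth, variable gamut width (single channel).
BITS = 8
LEVELS = 2 ** BITS  # 256 codes in both "spaces"

def encode(value, gamut_max):
    """Quantize a linear value in [0, gamut_max] to an integer code."""
    return round(value / gamut_max * (LEVELS - 1))

# Hypothetical brick-wall tones: 1000 samples in a narrow range.
bricks = [0.30 + i * (0.10 / 999) for i in range(1000)]

small = {encode(v, gamut_max=0.5) for v in bricks}  # small space barely covers them
large = {encode(v, gamut_max=2.0) for v in bricks}  # large space covers far more

# The wider space represents the same tones with fewer distinct codes.
print(len(small), len(large))
```

The same 1000 tones survive as far fewer unique code values in the wide space, which is the "fewer bins" effect described above.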
Now, you can actually see the effect on a histogram. Open an image in Photoshop in AdobeRGB and then convert the profile to ProPhotoRGB. Undo and redo while watching the histogram. You will see a change in the histogram area and a shift to the left for ProPhotoRGB. And as you know, when things shift to the left in a histogram it is not ideal, as the information is being compressed, even though the appearance may not change. The significance will depend on the image and how it is processed, but the difference is there.
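The leftward shift can also be illustrated with the same kind of toy model: re-expressing identical scene data in a wider space maps it to lower code values. This is a single-channel simplification, not real ICC conversion math, and the gamut widths are illustrative.

```python
# Toy model: the same scene data encoded against a narrow and a wide gamut.
def codes(values, gamut_max, levels=256):
    return [round(v / gamut_max * (levels - 1)) for v in values]

scene = [0.2 + 0.6 * i / 99 for i in range(100)]  # mid-tone sweep

narrow_space = codes(scene, gamut_max=1.0)  # AdobeRGB-like (illustrative)
wide_space   = codes(scene, gamut_max=2.0)  # ProPhoto-like (illustrative)

def mean(xs):
    return sum(xs) / len(xs)

# The wide-space codes cluster lower: the histogram slides left.
print(mean(narrow_space), mean(wide_space))
```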
Digital sensors can record images well outside any "usual" gamut such as ProPhoto. When those images are rendered into something like a 16-bit TIFF, they are converted to the user-selected color space according to the rendering intent. Depending on your raw converter, sometimes the intent is followed faithfully, sometimes not quite as well, but in any case I now have an image with a fairly small amount of compression in the color domain.
While I am editing, I usually soft proof and show areas that are outside the destination gamut and use my judgement about how I would like to manage this.
I sometimes assign (not convert to) other profiles, such as one of Joseph Holmes' RGB Working Space Profiles.
During editing the image may be manipulated in a number of ways that move color and saturation around until I get what I want.
Usually I end up with something that falls within the color space of my output device; sometimes I don't, but the gamut warning is useful for localizing those areas. I might select my rendering intent for output depending on the device and the image to produce the results I want.
Is it a crap shoot? Well, every time you push the shutter button it is to some degree, but I want as much control as possible.
Premature conversion of tristimulus values to a small color space during TIFF generation simply reduces the information the file contains. I prefer to remove as little as possible until the end.
To illustrate the point I am trying to make, I took one raw file and generated two 16-bit TIFFs, one in sRGB and one in ProPhoto.
I used a tool to compare the two images and to mark each pixel where there is a difference. Black means a difference, clear means no difference.
As you can see, although the differences appear subtle when viewed on-screen, there are enough differences between the 16 bit tiffs to cause concern.
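A comparison like the one described can be sketched as follows. This assumes the two renders have already been loaded as equal-sized lists of (R, G, B) tuples (e.g. via Pillow's `Image.getdata()`); the variable names and the tiny sample data are hypothetical stand-ins for the real TIFF contents.

```python
# Mark each pixel position where the two renders differ.
def diff_mask(pixels_a, pixels_b):
    """True ("black") where the pixels differ, False ("clear") where they match."""
    return [pa != pb for pa, pb in zip(pixels_a, pixels_b)]

# Tiny hypothetical pixel data standing in for the two 16-bit TIFFs.
srgb_render     = [(100, 50, 30), (200, 180, 170), (255, 0, 0)]
prophoto_render = [(100, 50, 30), (199, 180, 171), (254, 1, 0)]

mask = diff_mask(srgb_render, prophoto_render)
print(sum(mask), "of", len(mask), "pixels differ")
```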
IMO, card space, disk space, or processing speed are nowadays no serious reasons to accept any possible compromise in IQ. When you spend thousands on lenses and cameras, why save a few dollars on drives and cards?
That's why I just switched to no compression.
However, I always do as much processing as possible at the raw stage, export 16-bit TIFFs in ProPhotoRGB, and only do minor adjustments, retouching if necessary, and color management in PS. So far I've never had any problems with ProPhotoRGB; it works for me.
Regarding your example with the brick wall: doesn't it also depend on the subject (in this case the brick wall)? If the bricks are rather desaturated it makes sense, but as soon as the sun is shining, for example, and the bricks are well lit and saturated, in AdobeRGB there will/might be colors that are out of gamut.
With the M8 and M9 I tested for the difference in image content between compressed and uncompressed, and could never detect a difference.
As these cameras are rather slow in clearing their buffers and take about twice as long to do so shooting uncompressed, I decided to shoot compressed for everything except static scenes. Even when shooting uncompressed and in single shot mode I still often have to wait for the cameras. That's one of the main reasons I got the M.
With the M there is of course no point in shooting uncompressed, and the Monochrom will not allow in-camera compression, so these are not part of the discussion.