Yes, we all know about calibrated monitors, etc., but if you can't see what a colour space does for you, why use it? It adds information at the printing stage that you can't see on your monitor, so it's a crap shoot: you glory in it when it goes well, and you go back to stage one (something closer to what you see on the monitor) when it does things you don't expect. ProPhoto is a great idea, but if it can't be seen on a monitor, how can it convey what you want in post processing?
Steve
Not really that at all.
Digital sensors have the potential to record images quite outside of any "usual" gamut such as ProPhoto. When those images are rendered into something like a 16-bit TIFF, they are converted to the user-selected colorspace according to the rendering intent. Depending on your raw converter, sometimes the intent is followed well, sometimes not quite, but in any case I now have an image with a fairly small amount of compression in the color domain.
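A minimal sketch of what that compression means numerically. The matrix below is the standard XYZ to linear sRGB (D65) matrix; the XYZ value is a made-up example, and a real raw converter uses camera-specific matrices plus the selected rendering intent rather than a bare clip.

```python
# Sketch: a wide-gamut tristimulus value that cannot survive conversion
# into a small working space. Standard XYZ -> linear sRGB (D65) matrix.

XYZ_TO_LINEAR_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def xyz_to_linear_srgb(xyz):
    """Project an XYZ tristimulus value onto linear sRGB coordinates."""
    return [sum(m * c for m, c in zip(row, xyz)) for row in XYZ_TO_LINEAR_SRGB]

def in_gamut(rgb):
    """True when every channel fits the [0, 1] range of the space."""
    return all(0.0 <= c <= 1.0 for c in rgb)

# A saturated green a sensor might record (hypothetical XYZ value):
rgb = xyz_to_linear_srgb((0.20, 0.50, 0.10))
# rgb[0] comes out negative, so this color has no sRGB representation
# and would be compressed (clipped) when rendered into that space.
```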
While I am editing, I usually soft proof and show areas that are outside the destination gamut and use my judgement about how I would like to manage this.
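The gamut warning boils down to a simple per-pixel test. A sketch, assuming pixels have already been soft-proofed into destination-space coordinates (the pixel values here are made up for illustration):

```python
# Sketch of a gamut warning: after soft-proofing pixels into the
# destination space, flag any pixel with a channel outside [0, 1].

def gamut_warning(pixels):
    """Return a mask: True where a pixel falls outside the destination gamut."""
    return [not all(0.0 <= c <= 1.0 for c in px) for px in pixels]

proofed = [
    (0.30, 0.40, 0.20),   # in gamut
    (-0.05, 0.90, 0.10),  # negative red: outside the destination gamut
    (0.20, 1.10, 0.00),   # green above 1.0: outside the destination gamut
]
mask = gamut_warning(proofed)  # [False, True, True]
```

The mask localizes the problem areas so you can decide, region by region, how to pull them back in.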
I sometimes assign (not convert) other profiles, such as one of those offered by Joseph Holmes (Joseph Holmes - RGB Working Space Profiles).
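The assign/convert distinction matters: assigning keeps the stored numbers and changes their interpretation, while converting changes the numbers to preserve the appearance. A sketch, using gamma alone as a stand-in for a full ICC profile (a real profile also carries primaries and a white point):

```python
# Sketch of assign vs convert, with gamma standing in for a profile.

def decode(value, gamma):
    """Interpret an encoded channel value under a given gamma."""
    return value ** gamma

encoded = 0.5  # the number stored in the file

# Assign: the stored number is untouched; only its meaning changes.
as_gamma_22 = decode(encoded, 2.2)   # ~0.218 linear
as_gamma_18 = decode(encoded, 1.8)   # ~0.287 linear -- looks lighter

# Convert: the stored number changes so the linear meaning is preserved.
converted = decode(encoded, 2.2) ** (1 / 1.8)
assert abs(decode(converted, 1.8) - decode(encoded, 2.2)) < 1e-12
```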
During editing the image may be manipulated in a number of ways that move color and saturation around until I get what I want.
Usually I end up with something that falls within the colorspace of my output device; sometimes I don't, but the gamut warning is useful in localizing those areas. I might select my rendering intent for output depending on the device and the image, to produce the results I want.
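Why the intent choice depends on the image can be sketched on a single channel. Real intents operate on full colors in a profile connection space; these scalar versions just show the trade-off between clipping a few values and compressing them all:

```python
# Sketch of two rendering-intent behaviours on one channel.

def relative_colorimetric(c):
    """Clip: in-gamut values are untouched, out-of-gamut values are clamped."""
    return min(max(c, 0.0), 1.0)

def perceptual(c, source_max):
    """Compress: scale the whole range so the largest source value fits."""
    return c / source_max if source_max > 1.0 else c

values = [0.2, 0.8, 1.3]  # 1.3 sits outside the destination gamut
clipped = [relative_colorimetric(v) for v in values]       # [0.2, 0.8, 1.0]
compressed = [perceptual(v, max(values)) for v in values]  # all values shrink
```

If only a small area is out of gamut, clipping (relative colorimetric) leaves the rest of the image alone; if a lot is out, compressing everything (perceptual) preserves the relationships between colors.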
Is it a crap shoot? Well, every time you push the shutter button it is to some degree, but I want as much control as possible.
Premature conversion of tristimulus values to a small color space during TIFF generation simply discards information the file could have carried. I prefer to remove as little as possible until the end.
-bob