Guy,
Almost all modern digital cameras capture a color space larger than sRGB, and some larger than Adobe RGB.
ALL rendition of color involves some degree of compression (and occasionally expansion), or perhaps let's call it remapping, of the color space to whatever gamut the display device (monitor, paper, or anything else) is able to reproduce.
So let's not confuse the encoding of the numerical values in raw files with perceived color saturation.
24-bit color (8 bits each for R, G, and B) gives us 16,777,216 potential colors to work with. Complicating this is that the human visual system has a color gamut (the colors we are capable of perceiving) that is brightness dependent. In very low light, in fact, we have no color vision at all. At very high levels we lose color perception too, since our eyes saturate much the same way a sensor can blow a highlight.
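Just to make the arithmetic concrete, here's a quick sketch (plain Python, nothing camera-specific) of where that 16,777,216 figure comes from:

```python
# 24-bit color: 8 bits per channel, three channels (R, G, B).
bits_per_channel = 8
levels_per_channel = 2 ** bits_per_channel   # 256 levels each for R, G, B
total_colors = levels_per_channel ** 3       # every R,G,B combination
print(total_colors)                          # 16777216
```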
Simply encoding 14 or 16 bits linearly as the data exits the A/D converters does not necessarily guarantee any perceptual difference whatsoever. "Linear" here, to a digital engineer, means that each higher-order bit represents twice the value of the bit below it. That means it might represent (color gamut/profile remapping aside) twice the brightness.
So in conceptual terms, and using an 8-bit file as an illustration, there are only two levels (on and off) at the minimum brightness level. With three colors that gives you only eight colors at the minimum level. The interesting thing is that this is fine for us humans, since when things are really, really dark we can't see color anyway.
The argument in favor of some other, non-linear encoding method is that the lower half of the brightness range contains only 2,097,152 colors, leaving 14,680,064 colors available in the bright half. Now if the sensor data is "stretched" or digitally amplified (and I have my suspicions about which cameras do which) by shifting all of the available sensor data right one bit, the number of colors available in the high end remains the same, but the number of levels available in the low end drops in half. Some perceive this as a reduction in dynamic range, which is indeed what it is.

Even with additional analog amplification, random noise, or worse, non-random noise, enters the low bit positions. The M8 suffered badly from non-random noise: internal camera noise, synced with internal clocks, could actually be seen as patterns imposed on the low bits at high ISO settings. This is usually more obvious with CCD-based cameras that use off-chip A/D converters and poor electrical noise design. CMOS cameras with integrated converters don't have this problem, but they have more inherent conversion noise due to on-chip noise coupling and variability among the integrated A/D converters. A different sort of problem. Companies with limited R&D resources tend to opt for CCD sensors and low integration levels.
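The numbers above are easy to check, and the one-bit shift is easy to simulate (a toy sketch with 4-bit values standing in for real sensor data):

```python
# Colors in the lower vs. upper half of an 8-bit-per-channel file.
lower_half_colors = 128 ** 3                  # channel values 0..127
upper_half_colors = 256 ** 3 - lower_half_colors
print(lower_half_colors, upper_half_colors)   # 2097152 14680064

# Shifting sensor data right by one bit collapses adjacent levels,
# halving the number of distinct values that survive.
samples = list(range(16))                     # toy 4-bit sensor readings
shifted = sorted({v >> 1 for v in samples})
print(len(samples), "levels ->", len(shifted), "levels")   # 16 -> 8
```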
thanks
-bob