Don't want to get too sidetracked here ... but Jack and Lars are correct about the capture dynamic range of a digital sensor.
Yes, it is true that from a representation / storage point of view, dynamic range and bit depth are independent. The former is the contrast (the height of the staircase) and the latter is the number of steps.
However ...
A digital sensor is a linear capture device, so its bit depth directly limits the maximum dynamic range it can capture. Remember, a linear device means that 2x the amount of light (number of photons) captured translates to 2x the digital image value. For example, a sensor with a 12-bit ADC represents the brightest recordable pixel value as 4095. One stop below that (half the light) is ~2048. Two stops below white (1/4 of the light) is ~1024, then ~512 for 3 stops, then ~256 for 4 stops, and so on until we reach the minimum representable value (the integer 1), which is 12 stops below the maximum value of 4095. So there is no way to capture a scene of, say, 14 stops, because 14 stops below 4095 is ~0.25 (less than 1).
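The halving-per-stop arithmetic above is easy to verify yourself. Here's a quick sketch (the function name is just mine for illustration) that prints the linear value at each stop below white for a 12-bit ADC:

```python
MAX_VALUE = 4095  # brightest recordable code for a 12-bit ADC

def value_at_stops_below_max(stops, max_value=MAX_VALUE):
    """Linear sensor: each stop down halves the recorded value."""
    return max_value / (2 ** stops)

for stops in range(15):
    print(f"{stops:2d} stops below white -> {value_at_stops_below_max(stops):8.2f}")
```

Running it shows the 12th stop landing at ~1.0 (the smallest nonzero integer code) and the 14th at ~0.25, which falls below 1 and therefore cannot be distinguished from black.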
What many folks here are thinking of is the output dynamic range of an image (rather than the input or capture dynamic range). Certainly it is true that one can use tone curves, local dodging/burning, gamma encoding, etc. to distribute tonal values however you wish, and use however many bits of precision you want to store the results. For example, you can choose to tone-map an image into 1 stop (really low contrast!) and use 16 bits to represent it. But this is a completely separate matter from the capture dynamic range (i.e., what the sensor is capable of holding).
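To make the output-vs-capture distinction concrete, here's a minimal sketch of gamma encoding (the function and the 1/2.2 exponent are my own illustrative choices, not an sRGB-exact curve). It repacks linear 12-bit values into 8 bits, spreading the shadow stops across far more output codes than a straight linear rescale would:

```python
def gamma_encode_8bit(linear_12bit, gamma=1/2.2):
    """Map a linear 12-bit value (0..4095) to a gamma-encoded 8-bit code."""
    normalized = linear_12bit / 4095.0
    return round((normalized ** gamma) * 255)

# One stop below white in linear light (2048) lands at an 8-bit code
# much higher than 128 -- the curve redistributes the tonal values.
print(gamma_encode_8bit(4095), gamma_encode_8bit(2048))
```

The encoding changes how the captured values are distributed in storage, but it can't conjure up scene stops the sensor never recorded in the first place.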