ErikKaffehr
Well-known member
Hi,
I assume DNG uses Huffman coding, which is lossless; I need to check on that. Huffman coding is just more efficient data storage: every bit of data is preserved when the Huffman-coded data is decompressed.
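As a quick illustration of losslessness (a sketch using Python's zlib, whose DEFLATE format includes Huffman coding; the simulated 14-bit sensor values are just example data, not from any real raw file):

```python
import zlib
import numpy as np

# Simulated 14-bit raw sensor values (0..16383), a stand-in for the
# kind of data a raw file might store.
rng = np.random.default_rng(42)
raw = rng.integers(0, 2**14, size=100_000, dtype=np.uint16)

packed = raw.tobytes()
compressed = zlib.compress(packed, 9)
restored = zlib.decompress(compressed)

# Lossless: the round trip returns the data bit for bit.
assert restored == packed
print(f"original: {len(packed)} bytes, compressed: {len(compressed)} bytes")
```

The compression ratio depends on how noisy the data is, but whatever the ratio, decompression always reproduces the input exactly.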
Yes, it is lossless; "virtually lossless" is in a sense an oxymoron. On the other hand, we may keep in mind that most things are approximate. Our representation of an image is approximate, and the height of Mount Everest is approximate, although measured with remarkable precision.
The number Pi cannot be represented exactly, but we can use any number of decimals to describe it. A common representation is 3.14159; a longer one can be found here: http://www.geom.uiuc.edu/~huberty/math5337/groupe/digits.html. Neither representation is exact, but at least one of them may be good enough.
Raw images contain noisy data. If you were to make two identical exposures, the value of the very same pixel would vary quite a lot, just because of photon statistics. Say a pixel has a value of 6000 in one exposure; the next value could be 6075, and a third exposure perhaps 5925. The values would be distributed around 6000 with a standard deviation of about 77 (the square root of 6000). 95% of the samples would fall within two SD of 6000, that is, between about 5846 and 6154, just based on the statistical properties of light.
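Those figures can be checked with a quick simulation (a sketch in Python/NumPy; photon arrivals follow a Poisson distribution, and the 6000-electron mean is the example value from the text):

```python
import numpy as np

# For a Poisson process, SD = sqrt(mean), so a pixel collecting
# 6000 electrons on average has SD = sqrt(6000), about 77.
rng = np.random.default_rng(0)
samples = rng.poisson(lam=6000, size=1_000_000)

sd = samples.std()
within_2sd = np.mean(np.abs(samples - 6000) <= 2 * sd)

print(f"mean ~ {samples.mean():.0f}, SD ~ {sd:.1f}")  # SD close to 77.5
print(f"fraction within 2 SD: {within_2sd:.3f}")      # close to 0.95
```

The simulated spread matches the square-root rule, and roughly 95% of samples land within two standard deviations of the mean.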
Best regards
Erik
Hi,
Using high ISO is essentially underexposure. The Sony A7rII is said to make use of an Aptina patent to improve SNR at ISO 640 and above, essentially by reducing full well capacity (*).
I did develop the images in DCRaw, into a linear gamma space, and that didn't change the conclusion.
What I noticed is that with exposure pushed 5 EV, the dark areas turn brownish in the compressed image, while in the uncompressed image they stay neutral. This also applies to the DCRaw conversion. I would call that a significant deviation, perhaps not visible in real-world images, but very much observable in experiments.
I cannot explain the colour shift but it seems to be related to compression.
The reason I don't see artefacts on edges is probably that the edge contrast is not high enough. The delta compression is lossless under a wide variety of conditions, but it will produce artefacts if the contrast is high enough, as on the star trails often shown.
I will check out ISO 3200, as I still have the setup standing.
Best regards
Erik
(*) According to the said Aptina patent, the photodiode is often connected to a capacitor to increase full well capacity (FWC). The output voltage from the pixel is proportional to captured photons / FWC. In the Aptina patent, the capacitor is connected to the pixel through a transistor that can disconnect the capacitor at a certain ISO, thus raising the output voltage from the cell.
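The voltage relation can be put into numbers (a sketch with made-up values, not figures from the patent: the well capacities and saturation voltage are illustrative assumptions):

```python
def pixel_voltage(photons, full_well, v_sat=1.0):
    """Output voltage proportional to photons / FWC, clipping at v_sat."""
    return min(v_sat, v_sat * photons / full_well)

photons = 5000

# Capacitor connected: large full well, low voltage per photon.
v_low_iso = pixel_voltage(photons, full_well=60000)

# Capacitor disconnected at high ISO: smaller full well, so the same
# light produces a proportionally larger voltage (here 4x).
v_high_iso = pixel_voltage(photons, full_well=15000)

print(v_low_iso, v_high_iso)
```

The trade-off is visible in the clipping term: with the smaller full well, the pixel saturates at far fewer photons, which is why the trick only makes sense at high ISO where exposure is limited anyway.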