MGrayson
Subscriber and Workshop Member
Don't take "technical" explanations too seriously. After the trichromatic howler of disjoint transmission curves for the color filter array (in the technical-explanation section), I don't think we need to debate before vs. after A/D conversion. The idea of an analog computer at each pixel doing the averaging? Sounds wrong to me, but I'm not a chip designer. My understanding is that A/D conversion is done on-chip and only a digital signal is read off the sensor. In that case, averaging on the CPU makes much more sense: all the circuitry to do it is already in place, and it's just software.
Here's a question: To get smooth water, you can take one long exposure or average many short ones. How much effect do the gaps between the short exposures have? Star trails are badly broken up by gaps, but averaged waves need not be. Since the motion is (mostly) continuous, a long exposure should do better, but by how much? Suppose I take 100 exposures of 1/10 second each, spaced one second apart. How does that compare to a single 10-second exposure? To a 100-second one?
I should just do the experiment....
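In the meantime, the experiment is easy to run numerically. The sketch below models a pixel watching passing waves as a sinusoidal brightness around a fixed mean, then compares the three cases from the question. The wave frequency and amplitude are made-up illustration values, not measurements; the point is only to see how far each averaging scheme lands from the true time-averaged brightness.

```python
import math

# Toy model of a pixel looking at passing waves: brightness oscillates
# around a mean of 1.0.  Frequency and amplitude are illustrative
# assumptions, not taken from any real scene.
F = 0.73  # wave frequency, Hz (hypothetical)

def brightness(t):
    """Instantaneous brightness at time t (seconds)."""
    return 1.0 + 0.5 * math.sin(2 * math.pi * F * t)

def exposure(start, length, steps=1000):
    """Mean brightness over one exposure, approximated by sampling."""
    return sum(brightness(start + length * i / steps)
               for i in range(steps)) / steps

# The three cases from the question.
long_10s  = exposure(0.0, 10.0)                              # one 10 s exposure
long_100s = exposure(0.0, 100.0)                             # one 100 s exposure
gapped    = sum(exposure(k * 1.0, 0.1) for k in range(100)) / 100  # 100 x 0.1 s, 1 s apart

for name, v in [("10 s continuous",        long_10s),
                ("100 s continuous",       long_100s),
                ("100 x 0.1 s, 1 s apart", gapped)]:
    print(f"{name:24s} residual from true mean = {abs(v - 1.0):.4f}")
```

One thing this toy model makes obvious: the gapped average can land very close to the true mean or badly off depending on how the 1-second gap interval beats against the wave period. Set the interval near 1/F and every short exposure catches the wave at the same phase, so the average never converges. Real water has many frequencies at once, which is presumably why averaging stacks of short exposures tends to work in practice.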