I think it is important to distinguish between calibrations inherent to the sensor and those which arise from the particular optical configuration ahead of the sensor. In the first category you have the CCD segment equalization you referred to: basically offset subtraction and gain-map division. Dark-frame subtraction is another example. In the second category you have the lens cast and the LCC issue. To me, a file is still 'raw' and 'original' if it has only had the first category of corrections applied in-camera; it is as raw as you can get from that camera anyway, not pure raw like Canon but tweaked raw like Nikon. Obviously de-Bayering comes later, and its output is no longer raw.
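In code terms (a minimal numpy sketch; the function and variable names are mine, not any camera maker's), the first-category corrections amount to:

```python
import numpy as np

def calibrate_sensor(raw, dark, gain_map):
    """First-category (sensor-level) calibration: subtract the dark/offset
    frame, then divide by the per-pixel gain map. The output is still a
    Bayer mosaic -- no de-Bayering has happened yet."""
    corrected = (raw.astype(np.float64) - dark) / gain_map
    return np.clip(corrected, 0, None)

# Illustrative values: a flat exposure with a fixed offset and a
# high-gain column that segment equalization would need to flatten.
raw = np.full((4, 4), 1100.0)
dark = np.full((4, 4), 100.0)        # offset / dark frame
gain_map = np.ones((4, 4))
gain_map[:, 2] = 2.0                 # column read out with double gain

out = calibrate_sensor(raw, dark, gain_map)
print(out[0])                        # high-gain column equalized to 500
```

The point is that everything here operates per pixel on the mosaic itself; nothing about the optics in front of the sensor enters into it.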
I too work in scientific imaging, and the policy or philosophy is to separately keep every individual frame or exposure which is read off the camera, both data and calibration frames.
Not that I'd be entirely opposed to keeping only the LCC-corrected versions. Correct me if I'm wrong, but surely LCC correction happens before de-Bayering, and the result can be saved in the original Bayer format? In that case, you still have the freedom to do whatever raw processing you please, including later re-processing, on the LCC-corrected file. And because each LCC is unique to that setup at that moment, due to dust and so on, it's not as though you could later make a 'better' LCC and apply it to the original raw file.
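If I'm right about that, the operation itself is simple. A hedged sketch in numpy (the names are made up, and normalizing per CFA site is just one plausible way to keep exposure constant):

```python
import numpy as np

def apply_lcc_bayer(raw, lcc_frame):
    """Hypothetical sketch: apply an LCC reference shot in the Bayer
    domain. Because both frames share the same CFA layout, a per-pixel
    division removes the cast while the result stays a valid Bayer
    mosaic that any raw converter could still de-Bayer later."""
    norm = np.empty_like(lcc_frame, dtype=np.float64)
    # Normalize each of the four CFA sites to its own mean, so the
    # correction removes spatial variation without shifting exposure.
    for dy in (0, 1):
        for dx in (0, 1):
            site = lcc_frame[dy::2, dx::2]
            norm[dy::2, dx::2] = site / site.mean()
    return raw / norm

# A perfectly uniform LCC frame should leave the mosaic unchanged.
raw = np.full((4, 4), 500.0)
flat_lcc = np.full((4, 4), 2.0)
out = apply_lcc_bayer(raw, flat_lcc)
```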
Ray
Ray, excuse the long response but I love discussing scientific imaging. Everyone else can probably take this as being off topic and tune out.
I think the stringency with which we keep every image file depends on the intended use. Here I would contrast the qualitative and quantitative forms of imaging. If we are doing quantitative imaging, yes, keep everything and never touch the original image data. We even used an "audit trail" function that let you highlight a data value and then see the image location where it originated. I would suggest that this type of rigor is not relevant here, because photography is a form of qualitative imaging.
Because photographs are made to be viewed, not measured, one can give the cast-correction function the freedom to change image data. Of course, the result must look good when we view it. That is easier when image-acquisition conditions are tightly controlled, as they are when a microscope or telescope is the image-forming device. Then we can trust that the LCC is a good model of the acquisition conditions.
Photographic imaging conditions are not as well controlled. As a result, the LCC is often not a good model of the original acquisition conditions. The situation is made worse because photographers' highly trained visual systems are extremely sensitive to slight magenta casts, vertical folds, etc. Artifacts that are a tiny fraction of the data range may therefore be visible, and it takes a pretty good LCC to correct them. Even then, the process is to apply it, see if it works and, if it doesn't, post-process until the final result is acceptable. In other words, the LCC is only part of the workflow by which the final image is made to look "good". That's why I don't keep mine.
I really like the idea of modeling LCCs for various acquisition conditions, as other responders have suggested. Of course, the models would not be perfect, but neither are the real LCCs. With models, you would still apply an LCC, see if it works, and then post-process until acceptable. The difference is that we would not need to make actual LCC images - wonderful.
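To make the idea concrete, here is a toy sketch of what such a model might look like. Everything in it is an assumption for illustration, not anyone's actual method: the parameter names, the quadratic stand-in for cos^4-style falloff, and the linear cast term are all invented.

```python
import numpy as np

def model_lcc(shape, shift_px=0.0, falloff=0.1, cast=0.05):
    """Hypothetical parametric LCC: synthesize a correction reference
    from acquisition parameters instead of shooting one. The model is
    a radial luminance falloff plus a color cast that grows with
    distance from the (possibly shifted) optical center. The functional
    form and parameters are purely illustrative."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    # Optical center, displaced vertically by any lens shift/rise.
    cy, cx = h / 2 + shift_px, w / 2
    r = np.hypot(y - cy, x - cx) / np.hypot(h / 2, w / 2)
    luminance = 1.0 - falloff * r**2   # crude stand-in for cos^4 falloff
    green_cast = 1.0 - cast * r        # cast strongest at the edges
    return luminance, green_cast

luminance_map, cast_map = model_lcc((100, 100))
```

Fitting the few free parameters per lens and shift setting would replace an entire library of reference shots, which is exactly the appeal.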
Sadly, the modeling process would involve some fairly challenging optical engineering and programming. With any luck, discussions such as this will encourage Phase to allocate enough resources to do the job.
Peter