When the IQ250 tech cam wide angle demo images came out, the issue of sensor color crosstalk started to interest me.
A problem with (most) current image sensors is that pixels are deep: about 7um from the microlens on the surface down to the light-registering photodiode at the bottom, while pixels are only 5-6um wide, i.e. deeper than wide. This causes issues with technical wide angles, as they deliver light at a low angle, meaning that some of the light (photons) may not reach the bottom but instead hit and get absorbed by the pixel walls -- if there are any. In reality there are no walls covering the whole path down, so light can jump over to the next pixel and you get pixel crosstalk:
Red gets registered as green and vice versa. In a real image the visible result is typically desaturation and possibly slight to severe color shifts, as all three color primaries get mixed. Due to wiring on the sensor and slight non-uniformity of pixels, the crosstalk is not a simple function of the angle of the incoming light: it does increase with angle, but possibly much more horizontally than vertically (due to the wiring), and it varies in one direction due to pixel non-uniformity. With the IQ250 sensor it seems, as with the A7r sensor, that the microlenses (or actually the photodiodes) are offset towards the edges to get better angular response, with the disadvantage of strange non-uniform crosstalk behavior when you push it.
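To make the desaturation effect concrete, crosstalk can be pictured as a linear mixing of the color channels at each pixel. The sketch below uses made-up leak fractions (not measured data) and a symmetric mix just to show why mixed primaries pull a saturated color toward gray:

```python
# Toy model: crosstalk as a linear mixing of color channels.
# The 'leak' fraction below is an illustration value, not measured data.
import numpy as np

def apply_crosstalk(rgb, leak):
    """Mix a fraction 'leak' of each channel into the other two channels.
    Each column of M sums to 1, so total flux is preserved."""
    M = np.array([
        [1 - leak, leak / 2, leak / 2],
        [leak / 2, 1 - leak, leak / 2],
        [leak / 2, leak / 2, 1 - leak],
    ])
    return M @ rgb

saturated_red = np.array([0.9, 0.1, 0.1])
print(apply_crosstalk(saturated_red, 0.3))  # ~ [0.66, 0.22, 0.22], pulled toward gray
```

Note that the total signal is unchanged; only the separation between channels is lost, which is exactly why the visible symptom is desaturation rather than darkening.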
Sensors are not intended to be used in crosstalk mode, i.e. you should not pair them with wide angle lenses that feed them light at too low an angle. But what if you do anyway?
A normal LCC procedure will correct non-uniformity in pixel vignetting, i.e. the color cast that occurs due to variations in the pixel light shields/walls, but it will not correct crosstalk.
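A minimal sketch of why that is, assuming standard LCC works as a per-channel flat-field division against the white LCC shot: each channel is corrected independently, so signal that has already leaked between channels stays leaked.

```python
# Minimal sketch of standard LCC color-cast correction as per-channel
# flat-field division. Values are made up for illustration.
import numpy as np

def lcc_correct(image, lcc_white):
    """Divide each channel by the normalized white LCC shot so a uniform
    gray target comes out uniform again. This fixes vignetting and color
    cast, but channels stay independent, so crosstalk is untouched."""
    gain = lcc_white / lcc_white.max(axis=(0, 1), keepdims=True)
    return image / gain

# A flat gray scene seen through a sensor with a color cast in one corner:
cast = np.ones((2, 2, 3))
cast[0, 0] = [0.8, 1.0, 0.9]   # corner pixel: red and blue attenuated
scene = 0.5 * cast             # the gray scene picks up the cast
corrected = lcc_correct(scene, cast)
print(corrected[0, 0])         # restored to [0.5, 0.5, 0.5]
```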
I started to experiment with extending normal LCC processing with crosstalk cancellation. I began by setting up a mathematical model of how crosstalk flows between pixels and then reversing it. Unfortunately this model has far too many unknown variables that you won't get from the ordinary white LCC shot. By extending the workflow with one more LCC shot, one with a red filter (a Wratten 25 gel taped to your LCC card will do), I got enough additional information about the sensor's crosstalk behavior into my model that it started to produce useful results.
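The following is a hypothetical sketch of what a red-filtered LCC can constrain, not the actual algorithm described here: under (near) red-only illumination, any green or blue response is leakage from red, which pins down one column of a per-region 3x3 mixing matrix; a symmetry assumption fills in the rest, and inverting the matrix cancels the crosstalk.

```python
# Hypothetical sketch: estimate a crosstalk mixing matrix from a
# red-filtered LCC sample and invert it. Symmetry assumption: green and
# blue leak by the same fractions as red. Not the author's actual model.
import numpy as np

def estimate_mix(red_lcc_rgb):
    """Build a true->observed mixing matrix from one red-LCC RGB sample.
    Columns say where each true channel's photons end up; each column is
    normalized to sum to 1 so flux is preserved."""
    r, g, b = red_lcc_rgb
    leak_g = g / r   # fraction of red observed in green, relative to red
    leak_b = b / r   # fraction of red observed in blue, relative to red
    M = np.array([
        [1.0,    leak_g, leak_b],
        [leak_g, 1.0,    leak_g],   # symmetry assumption for green/blue
        [leak_b, leak_g, 1.0],
    ])
    return M / M.sum(axis=0, keepdims=True)

def cancel_crosstalk(observed_rgb, M):
    """Undo the mixing by solving M @ true = observed."""
    return np.linalg.solve(M, observed_rgb)

M = estimate_mix([0.8, 0.12, 0.08])     # what a red LCC patch might record
mixed = M @ np.array([0.9, 0.1, 0.1])   # a red patch after crosstalk
print(cancel_crosstalk(mixed, M))       # recovers ~ [0.9, 0.1, 0.1]
```

In practice the matrix would have to vary across the frame (and differently horizontally and vertically), which is where the extra unknowns come from.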
I present some preliminary results here. I don't have access to any exotic equipment like the IQ250 and 32HR, so I made an experiment with the Aptus 75 (7.2um Dalsa CCD) and an SK35XL. This Dalsa sensor is quite good at suppressing crosstalk vertically, but not horizontally. With the sensor shifted in landscape mode, effects of crosstalk are seen. The degradations to color are not huge, but they can be noticed.
In the experimental setup I shot the same color checker under the same light, first close to the center of the lens and then at the image circle border (90mm diameter). I made a normal LCC shot and then a red-filtered one. The normal shot is used to correct the color cast, the red-filtered LCC to estimate and cancel the crosstalk. To measure the crosstalk more precisely you would need green- and blue-filtered LCC shots too, but the symmetry assumptions do not seem to be too large an error source compared to others, and the algorithm can also use the white LCC for "basic" information. If the algorithm required, say, four LCC shots it would simply be too cumbersome to use anyway.
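The order of operations in such a two-shot workflow can be sketched in a self-contained toy (all numbers made up, and the crosstalk matrix simply assumed rather than estimated): divide out the cast measured by the white LCC first, then invert the mixing derived from the red-filtered LCC.

```python
# Toy of the two-LCC workflow order: white LCC fixes the cast, then the
# crosstalk matrix (assumed known here) is inverted. Numbers are made up.
import numpy as np

M = np.array([                        # assumed crosstalk mix for one region;
    [0.80, 0.12, 0.08],               # columns (where each true channel's
    [0.12, 0.76, 0.12],               # photons end up) each sum to 1
    [0.08, 0.12, 0.80],
])
gain = np.array([0.8, 1.0, 0.9])      # per-channel cast from the white LCC

true_rgb = np.array([0.9, 0.1, 0.1])  # a saturated red patch
observed = gain * (M @ true_rgb)      # what the sensor records at the edge

step1 = observed / gain               # standard LCC: cast removed, crosstalk not
step2 = np.linalg.solve(M, step1)     # crosstalk cancelled
print(step2)                          # back to ~ [0.9, 0.1, 0.1]
```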
In the resulting images you should see desaturation in the image that has only been LCC corrected, which is then restored when crosstalk is cancelled. The precision of the color reconstruction is significantly lower than for normal color cast correction though (since the algorithm does not have complete information), so I cannot say it's a great idea to push a sensor into crosstalk on a regular basis, but if you have to do it on occasion, a crosstalk cancellation algorithm can be useful.
I see this crosstalk cancellation algorithm being useful for some particular sensor/lens combinations: the IQ250 and virtually any tech wide, possibly the Sony A7r on an Arca-Swiss MF-two / Rollei X-Act etc with wides, and even older backs with "extreme" lenses like the SK35XL at larger shifts. I assume the 6um Dalsa sensors (P65+, IQ160/260 etc) have a bit more crosstalk than the larger 7.2um pixels of my back; I know some have noted desaturation issues with the SK35XL on them. If you know your shift settings you can shoot a new red-filtered LCC and re-process old files. With manual Photoshop work you can achieve similar results, but it's hard to do right as the crosstalk is not necessarily uniform or purely circular, as discussed above. This process is 100% automatic.
However, this is preliminary work. I still have some issues, and I don't yet know whether the algorithm will be stable enough, i.e. whether it will actually work in more situations than just my first experimental setup. If it turns out well it will be included in a future release of Lumariver HDR (which you can use for raw-in-raw-out LCC work alone, even though it has tonemapping). I'll let you know.