The GetDPI Photography Forum


Wide angle crosstalk cancellation - preliminary results

torger

Active member
When the IQ250 tech cam wide angle demo images came out, the issue of sensor color crosstalk started to interest me.

A problem with (most) current image sensors is that pixels are deep, about 7um from the microlens on the surface to the light-registering photodiode at the bottom, while pixels are only 5-6um wide, i.e. deeper than wide. This means you will get issues with technical wide angles, as they deliver light at a low angle: some of the light (photons) may not reach the bottom but instead hit and get absorbed by the pixel walls -- if there are any. In reality there are no walls covering the whole path down, so light can jump over into the next pixel and you get pixel crosstalk:



Red gets registered as green and vice versa. The visible result of this in a real image is typically desaturation and possibly slight to severe color shifts, as all three color primaries get mixed. Due to wiring on the sensor and slight non-uniformity of the pixels, the crosstalk is not a simple function of the angle of the incoming light: it does increase with angle, but perhaps much more horizontally than vertically (due to wiring), and it varies along one direction due to pixel non-uniformity. With the IQ250 sensor it seems to be like the A7r sensor: the microlenses (or actually the photodiodes) are offset towards the edges to get better angular response, with the disadvantage of strange, non-uniform crosstalk behavior when you push it.
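To make the mechanism concrete, here is a toy numerical sketch (my own simplification for illustration only, not the actual model and not any particular sensor): assume a fraction k of each pixel's light leaks into its horizontal neighbour.

```python
import numpy as np

# Toy illustration only: one Bayer row (R G R G ...) where a fraction k of
# each pixel's light leaks into its right-hand neighbour.
def simulate_horizontal_crosstalk(row, k):
    out = row * (1.0 - k)      # light that stays in the pixel it was aimed at
    out[1:] += k * row[:-1]    # light that jumps over into the next pixel
    return out

# A saturated red patch: true red pixels = 1.0, true green pixels = 0.0
true_row = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
print(simulate_horizontal_crosstalk(true_row, 0.2))
# [0.8 0.2 0.8 0.2 0.8 0.2] -- red is partly recorded as green: desaturation
```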

Sensors are not intended to be used in crosstalk mode, i.e. you should not pair them with wide angle lenses that feed them light at too low an angle. But what if you do anyway?

A normal LCC procedure will correct non-uniformity in pixel vignetting, i.e. the color casts that occur due to variations in the pixel light shields/walls, but it will not correct crosstalk.
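For reference, the normal LCC step is essentially a per-pixel gain correction computed from the white LCC frame, roughly like this simplified sketch (black level handling, smoothing and so on left out):

```python
import numpy as np

# Simplified sketch of a plain LCC (flat-field) correction: divide the image
# by the white LCC frame so per-pixel vignetting and color cast is equalized.
# Real implementations subtract the black level and smooth the LCC frame
# first; this is just the core idea.
def apply_lcc(raw_image, white_lcc):
    gain = np.mean(white_lcc) / np.maximum(white_lcc, 1e-6)
    return raw_image * gain

# Note that this only rescales each pixel by its own gain -- signal that has
# already leaked into a neighbouring pixel of the wrong color stays there,
# which is why crosstalk survives this step.
```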

I started to experiment with whether it would be possible to extend normal LCC processing with crosstalk cancellation. I began by setting up a mathematical model of how crosstalk flows between pixels and then reversing it. Unfortunately that model has far too many unknown variables that you can't get from the ordinary white LCC shot. By extending the workflow with one more LCC shot, one through a red filter (a Wratten 25 gel taped to your LCC card will do), I got enough extra information about the sensor's crosstalk behavior into the model that it started to produce useful results.
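Just to show the general idea of the cancellation step (only the idea -- the real model has more terms and the coefficients vary across the frame, which is what the LCC shots are there to estimate), the recorded values of neighbouring pixels can be written as a small mixing matrix applied to the true values, and cancellation is then a matter of inverting that matrix. The two-pixel simplification and the names below are mine, purely for illustration:

```python
import numpy as np

# Two neighbouring pixels (say R and G) with leak fractions kr (R -> G) and
# kg (G -> R). recorded = M @ true, so true = inv(M) @ recorded. In reality
# the coefficients vary over the frame with the light angle.
def cancel_crosstalk(recorded_pair, kr, kg):
    M = np.array([[1.0 - kr, kg],
                  [kr, 1.0 - kg]])
    return np.linalg.solve(M, recorded_pair)

# Example: true (R, G) = (1.0, 0.0) with a 20% leak from R into G:
print(cancel_crosstalk(np.array([0.8, 0.2]), kr=0.2, kg=0.0))  # ~[1.0, 0.0]
```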

I present some preliminary results here. I don't have access to any exotic equipment like the IQ250 and 32HR, so I made an experiment with an Aptus 75 (7.2um Dalsa CCD) and an SK35XL. This Dalsa sensor is quite good at suppressing crosstalk vertically, but not horizontally. With the sensor shifted in landscape mode the effects of crosstalk can be seen. The color degradation is not huge, but it is noticeable.

In the experimental setup I shot the same color checker under the same light, first close to the center of the lens and then at the image circle border (90mm diameter). I made a normal LCC shot and then a red-filtered LCC. The normal one is used to correct the color cast, the red-filtered one to figure out the crosstalk and cancel it. To measure the crosstalk better you would need green- and blue-filtered LCC shots too, but the symmetry assumptions do not seem to be too large an error source compared to the others, and the algorithm can also use the white LCC for "basic" information. If the algorithm required, say, four LCC shots it would simply be too cumbersome to use anyway.

In the resulting images you should see desaturation in the image that has only been LCC corrected, which is then restored when the crosstalk is cancelled. The precision of the color reconstruction is significantly lower than for normal color cast correction though (since the algorithm does not have complete information), so I cannot say it's a great idea to push a sensor into crosstalk on a regular basis, but if you have to do it on occasion a crosstalk cancellation algorithm can be useful.

I can see this crosstalk cancellation algorithm being useful for some particular sensor/lens combinations: the IQ250 and virtually any tech wide, possibly the Sony A7r on an Arca-Swiss MF-two / Rollei X-Act etc. with wides, and even older backs with "extreme" lenses like the SK35XL at larger shifts. I assume the 6um Dalsa sensors (P65+, IQ160/260 etc.) have a little more crosstalk than the larger 7.2um pixels of my back; I know some have noted desaturation issues with the SK35XL on them. If you know your shift settings you can shoot a new red-filtered LCC and re-process old files. With manual Photoshop work you can achieve similar results, but it's hard to do right, as the crosstalk is not necessarily uniform or purely circular, as discussed above. This process is 100% automatic.

However, this is preliminary work. I still have some issues and I still don't know if the algorithm will be stable enough, i.e. whether it will actually work in more situations than just my first experimental setup. If it turns out well it will be included in a future release of Lumariver HDR (you can use it for raw-in-raw-out LCC work only, even though it has tonemapping). I'll let you know.
 

f8orbust

Active member
Nice work Anders - and well written so that even a doofus like me can understand it.

Anyone from Phase One listening? Give this guy a loan of an IQ250 for a while, plus a few bucks to boot - you'll be doing yourself a big favor.

Jim
 

torger

Active member
:) thanks for the kind words.

I think Phase One could do it if they wanted to; it's just standard math / signal processing. The challenge lies in making something useful with incomplete information and properly weighting the various factors. If you make an exact model you get so many variables in the equation system that it becomes impossible to solve. If you leave too many out, the results won't be good. Figuring out which factors are significant and which you can ignore, so you end up with a solvable but effective model, is the difficult and test/time-consuming part.

I do this mostly because I find it an interesting signal processing challenge; the commercial value is very small due to the narrow use case. And since I have the issue with my own camera, the Aptus and SK35XL, I have some use for it myself, although I rarely apply shifts as large as in the example.

In the example above the crosstalk is about 20%, i.e. 20% of the signal jumps over into the wrong color.

On a related subject, I've also noted that sensor centerfolds are very much exaggerated with angle, i.e. on a shifted wide angle lens you are much more likely to see a sensor centerfold than on a longer unshifted lens. I'll probably look into making a specific centerfold suppression later on (which will be useful even if there is no crosstalk). Centerfolds are very weak in signal terms, say a 1% difference or so, so it's hard for an algorithm to see them, but the eye spots these artifacts easily.

Afaik centerfolds occur because the chip is "stitched", i.e. the stepper can't expose the whole chip area at once, and each exposure varies a bit so you get a sharp fold. I've noted that the IQ250 also has this property. I've heard some say that centerfold is due to imbalance between multiple amplifiers, but I don't think that's the whole story: it's a difference in pixel vignetting between different segments of the sensor, which is why it becomes extra visible on tech wides. On my sensor the vertical centerfold is most visible, but there are also a couple of horizontal lines.
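To illustrate just how weak a 1% fold is in signal terms (a toy demonstration of mine, not the suppression algorithm itself): per pixel the step drowns in noise, but integrating along the seam, which is roughly what the eye does with a straight line, pulls it right out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic LCC-like frame: flat field with ~2% noise and a 1% gain step
# at a hypothetical stitch seam at column 300.
h, w, seam = 400, 600, 300
frame = rng.normal(1.0, 0.02, size=(h, w))
frame[:, seam:] *= 1.01

# Per pixel the step is far below the noise, but column means reveal it:
left = frame[:, seam - 20:seam].mean()
right = frame[:, seam:seam + 20].mean()
print(f"step estimate: {100 * (right / left - 1):.2f}%")   # ~1%
```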
 

gebseng

Member
Great stuff! Should we provide you with RAWs from other DB/lens combinations (like my Credo 40 with 24, 28 and 35mm SK lenses)? If yes, should the image content be similar to yours? The red filter should be the one you use for color separation, right?

Thanks,

Gebhard
 

Paul2660

Well-known member
On the centerfolding issues, Phase has done a good job in the past. You can most often see the effect of the centerfold when you take an LCC on a shifted image, 15mm or so, and rarely on a center LCC. This always used to give me a bit of consternation when I moved to the IQ160, as the P45+ never did this. I never shifted the P45+ however, and only used the Mamiya lenses on it.

Phase does do re-calibrations for the centerfolding issues on the IQ backs. And many times the dealer can do this with the photographer over the phone.

I know from experience that the wider Schneiders, the 35XL and 43XL, will have more centerfolding problems on the 160/180 and 260/280 backs, especially on shifts past 12mm and rise past 15mm. About 50% of the time C1 can't correct all of the centerfold and you have to correct manually in CS. The Rodenstocks seem much more forgiving on centerfolding at extreme shifts.

Phase has a great solution with the LCC on the centerfolding and seems to be constantly working on improvements. It would be interesting, and I'm hopeful, if they considered your work on the crosstalk and attempted to add some of your algorithm to future releases of C1. This would help not only the 250 issues but possibly the 260/160/180/280 as well.

Paul C.
 

torger

Active member
Centerfolding due to amp calibration is a separate issue from centerfolding due to the "stitched" sensor and pixel vignetting. As far as I know my Aptus 75 has only one amp channel, so it does not have the calibration issue.

When you see different centerfolding when you shift, that's because of the pixel vignetting factor.

I still have many, many hours of work left before I even know whether the crosstalk algorithm can be stable enough for widespread use, and I can still do lots of testing using just my own gear, so I don't need any help with test pictures yet.

Yes, the red filter would be one intended for color separation; a Wratten 25 or Wratten 29 or equivalent would do (I've used a Wratten 25). The idea is to register red with as little green and blue as possible, since you can then measure the crosstalk with much better precision than from the white LCC shot. Theoretically you could do it from the white LCC alone, but with the channels so close together you can't solve the crosstalk equations with sufficient precision.

Some leakage into green is not too bad (e.g. a red filter which lets a little green through is okay, which the Wratten 25 does), but blue must be suppressed. Otherwise it becomes hard to solve the system in the diagonal positions.
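As a rough sketch of why the red-filtered shot helps (again a simplification of mine, with made-up names, not the real implementation): with nearly all the incoming light in the red channel, whatever shows up on the green and blue pixels in that shot is to a first approximation leaked red, so the leak fractions fall out of simple ratios instead of an ill-conditioned equation system.

```python
import numpy as np

# Sketch: estimate how much red leaks into green/blue from a red-filtered
# LCC frame, averaged over a small neighbourhood to beat noise. The inputs
# would be black-level-corrected raw values from the R, G and B pixels in
# that neighbourhood (hypothetical names, illustration only).
def estimate_red_leak(r_px, g_px, b_px):
    r, g, b = np.mean(r_px), np.mean(g_px), np.mean(b_px)
    total = r + g + b
    return g / total, b / total  # fractions of red signal landing on G and B

# Made-up numbers for illustration:
print(estimate_red_leak([820.0, 800.0], [165.0, 170.0], [11.0, 9.0]))
# With a white LCC alone, R, G and B are all strong, so these ratios would
# no longer isolate the crosstalk terms -- hence the extra filtered shot.
```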

Since I (and my company) sell our own software, I compete with C1 in this respect. My intention is to do a better job than C1 does; we'll see how I succeed. I haven't actually made any comparative tests against C1's and Lightroom's LCC algorithms yet, I'm busy enough as it is. Our software fits much better into an LR workflow for the moment (we do raw in, DNG out), as C1 doesn't do DNG well. I might look into exporting IIQ raws so it can be used more smoothly in a C1 workflow, but that's yet another big effort for few sales :).
 

Paul2660

Well-known member
C1 could easily do DNG, they just choose not to, which is a huge oversight to me. I know many people feel there is more than one DNG.

I don't know, I don't write code. LR seems to be able to handle multiple DNGs with no problem.

If C1 wants to get to a leading position, they need to reconsider the DNG issue.

C1 actually does see DNGs just fine, but only for a few seconds. When you open a folder with a DNG, the file loads clean for a few seconds, then basically turns red. I'm not sure if C1 is seeing an embedded JPEG or what; I didn't think LR or ACR embeds a JPEG when you convert a raw file to a DNG. It's really too bad that Phase One is so concerned about the Leica S and S2 that they keep DNG support out of C1. That's pretty short-sighted to me.

DNG support is very important since new cameras are coming out much faster than the raw converters can support them. Case in point: the Fuji X-T1, which currently is not supported by either LR or C1.

The raw file is basically the same as the X-E2's, and if you change the name header from X-T1 to X-E2 it opens the files with no problem; all the rest of the EXIF info can be left alone. Net, there is not that much that has to be done, just a line of code to recognize the X-T1 when C1 opens the files.

Paul C
 

torger

Active member
Did some quick LCC tests with C1 7.2 just to see where it stands; it's been a while since I used it. It's indeed quite good at suppressing centerfolds; only with very extreme processing settings do I see faint lines from the six segments I have on my sensor. It's much better than my current algorithm, as I don't have any specific centerfold detection in there yet, but I'll fix that (it has to be there to make my algorithm valuable, since where there's crosstalk there's centerfold) and try to do it better than C1 in the process, we'll see :). C1's LCC doesn't do any crosstalk cancellation though, but I knew that.
 

torger

Active member
I'm familiar with the DNG format from a programmer's perspective, and I do understand why Phase One is a bit slow to adopt it. The thing is that the DNG standard is not just a raw container; it also incorporates a color model, DCP. In order to display DNGs according to Adobe's standard you must have a color model that follows the standard, i.e. the DCP model, which is different from Capture One's.

It is indeed possible to have two parallel color models in the software (I plan to have that in my own at some point to widen the possible workflows), but it would be quite messy: loading a DNG file, ignoring the embedded DCP and applying Capture One's ICC instead, which works in a different way. It would be a messy mixture. Adobe Lightroom uses DCP natively, so it breathes DNG; it's different for them.

I can also imagine that Phase One, who see themselves as masters of color, would not be comfortable incorporating Adobe's color model. Even if DNG is open, Adobe is the leading authority on the format (it's not really documented why the color model is designed the way it is, or how to make the best of it when designing camera profiles), and any competitor adopting the format as more than just something on the side may end up number two behind them.

Many think that Capture One has superior color rendition compared to Adobe's Lightroom, and that's probably right. I don't think it comes down to the color model, Adobe's DCP-based or Phase One's ICC-based, but rather to the profiling. Profiling must be done in relation to the color model though, so while Phase One may be able to make great color with their own color model, they might not be as successful with Adobe's.

In the end it's probably more about politics than technical issues though. I think it was a mistake by Adobe to tie the DNG format so strongly to their specific color model; they should have kept it as a "dumb" open raw container, and then it would have been less challenging politically.
 