Douglas,
The Sony A900 and Nikon D3x sensors have different die markings (the part numbers and test probe pads, for example) according to photographs of the two sensors. (What everyone may not know is that the camera is typically rendered useless by the analysis process.) They are produced in different batches and have different quality control procedures (not that one is better). The photosite array is identical between the two, in both geometry and read lines. They are run on the same production line; how different they really are is hard to know without seeing the masks side by side, or paying one of the reverse-engineering companies like Chipworks to get in with the microscopes. (BTW, the "mask" is the "negative" used to "print" the different layers of the "chip" or sensor, for those not familiar with the terms, and yes, this is oversimplified.)
Most folks who have analyzed the design also believe that the analog-to-digital converters Nikon is using are higher performance than those in the Sony design. Looking at the Nikon circuit boards, it is pretty easy to tell who makes them. Of course, the Sony design was aimed at an entirely different price point.
I have not had the opportunity to shoot an A900. I hope to do that later this summer, and hopefully to have it long enough to put it through all of the same detailed tests. It would be interesting to verify whether the color response is really that different, and in what ways: the color filter arrays (CFAs) are different, so this is certainly likely, and the firmware and readout play a big role as well.
As you say, the D3 sensor is stitched, since it is produced in a factory whose lithography machines have a maximum field size of 25mm x 33mm. It has two halves, printed left side then right side, in two passes through the machine. (For those not familiar with semiconductor stitching, it is pretty much like stitching panoramas, except the alignment has to be incredibly accurate.) My understanding is that the D3x and A900 sensors ARE stitched as well, made on a photolithography system that uses two passes to reach the 36mm x 24mm needed for full frame. Machines with 50mm x 50mm fields are becoming cost effective (these machines are basically big, complicated microscopes acting as projectors, and 50mm x 50mm is pretty much as big a field as they will ever be able to project in one pass). Printing the sensor in a single pass should further reduce the cost of the 24-megapixel FX sensor by increasing the throughput of the factory: more wafers processed per hour means a lower cost per sensor. Each 8-inch wafer can make 20 full-frame sensors, and usually about 70 to 80 percent of them are usable.
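To make the wafer arithmetic above concrete, here is a minimal sketch. The 20-dies-per-wafer and 70 to 80 percent yield figures come from the post; the function name and the rounding choice are my own illustration:

```python
# Back-of-envelope yield arithmetic for the figures above: 20 full-frame
# dies per 8-inch wafer, 70-80 percent of them usable (both from the post).
def usable_sensors(gross_dies, yield_fraction):
    """Usable sensors per wafer at a given yield, rounded to whole dies."""
    return round(gross_dies * yield_fraction)

for y in (0.70, 0.80):
    print(f"yield {y:.0%}: {usable_sensors(20, y)} good sensors per wafer")
# -> yield 70%: 14 good sensors per wafer
# -> yield 80%: 16 good sensors per wafer
```

So each wafer nets roughly 14 to 16 sellable sensors, which is why squeezing more wafers per hour through the lithography machines matters so much to the final price.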
BTW, Nikon holds about a 65 percent market share in the photolithography steppers and scanners used for manufacturing semiconductors of all kinds, including Intel's microprocessors. At last count they had shipped over 8,000 systems, and there are only about 2,000 factories in the world that can make semiconductors. Nikon leads the market in 45-nanometer and 32-nanometer machines. (That figure is how small a feature on the chip can be, a wire or a transistor; it is really small, right at the state of the art.)
-glenn
(edited to correct the stitching comments)
Glenn, good report. I wanted to ask you to be a little more specific about a couple of things. First, every piece of evidence I've seen points to the D3x and A900 being the same silicon, with different CFA and AA toppings, and with Nikon doing a multiple-sample read off the sensor for their 14-bit mode, thus the slower fps. Do you have evidence to contradict this? Have you shot the A900 and D3x side by side? Reports from users of both cameras say the D3x leads in dynamic range, whereas the A900 has much better spectral properties.
Also, I know that the D3 is a stitched FF sensor, so is a stepper that can print a full-frame sensor in a single pass new? Thanks. -d
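For those curious why a multiple-sample read could buy dynamic range at the cost of frame rate, here is a hedged simulation sketch. All numbers are purely illustrative, not measurements of the D3x: the idea is simply that averaging N reads of the same pixel cuts random read noise by roughly the square root of N, while each extra read costs readout time.

```python
import random
import statistics

def read_pixel(signal, read_noise, rng):
    """One simulated sensor read: the true signal plus Gaussian read noise."""
    return signal + rng.gauss(0.0, read_noise)

def multi_sample_read(signal, read_noise, samples, rng):
    """Average several reads of the same pixel. Read noise falls roughly as
    1/sqrt(samples), but every extra sample adds readout time (slower fps)."""
    return sum(read_pixel(signal, read_noise, rng) for _ in range(samples)) / samples

rng = random.Random(1)
single = [read_pixel(100.0, 4.0, rng) for _ in range(5000)]
quad = [multi_sample_read(100.0, 4.0, 4, rng) for _ in range(5000)]
print(f"single-read noise: {statistics.pstdev(single):.2f}")  # ~4.0
print(f"4-sample noise:    {statistics.pstdev(quad):.2f}")    # ~2.0
```

Halving the read noise extends the usable range at the shadow end, which is consistent with the reports that the D3x leads in dynamic range while shooting fewer frames per second in 14-bit mode.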