The GetDPI Photography Forum


Quarter-full sensor for m4/3?

photoSmart42

New member
You are perhaps not aware, or have overlooked, that the NMOS sensors used in m4/3 are much cheaper than the other sensors (the earlier Kodak sensors in 4/3, for example). These are mounted on a flex strip. All the others are in a nice ceramic casing with gold-plated connectors and whatnot.

Very different technologies and fabrication processes.
You're looking at the fab process a few steps beyond what I'm discussing. All chips (including sensors) are born on wafers. The smaller sensors are made on smaller wafers, while the larger ones require larger wafers by definition. I'm talking about taking the large-wafer chips and doing the segmenting there. What happens to the assembly beyond that to make them into sub-assemblies (whether those sub-assemblies are on wafers or in ceramic packages) is irrelevant to what I'm proposing.
 

Vivek

Guest
For your sake, let us hope someone will pick up your proposal and cut up those wafers.

I will keep my fingers crossed.
 

kevinparis

Member
Not quite true. The whole point of 'digital' is that the processor digitizes that light information that comes in. Each pixel collects light on its own, and sends that information to the processor. The individual pixels are, by definition, the smallest unit of measure in the system. The processor does a significant amount of work to figure out what to make of the information it gets from each pixel, and how to put it all together into something that it thinks is a fairly decent representation of the analog image that comes in through the lens. Same principle applies to other situations where arrays of pixels are involved, such as in synthetic RADAR applications.
the flaw there is that the sensor sends nothing to the processor... it reacts to a light level dependent on what colour filter happens to be over it... an A/D converter takes the analog value from the sensor, converts it to digital, and then passes it on to the processor. The processor's job is to decode the whole mess of information and recreate the analog information

anyway that's how I understand it... I have been wrong before
 

wonderer

Guest
Why would it be worse? FF sensors have far better dynamic range and ISO performance than the 4/3 sensors, so taking a sensor that has all that and cutting a piece out of it should translate into maintaining that quality.
A 6MP cutout from the 24MP FF sensor will have better noise and dynamic range at the per-pixel level, but the 12MP m4/3 sensor has more pixels. If you down-sample the 12MP image to 6MP, you will eliminate the per-pixel advantage. One way to think about it is that the total amount of light falling on the sensor will be the same in each case. In one case you are chopping that light up into smaller chunks, so the quality of each individual chunk is lower; in the other case you are using a smaller number of better chunks. However, when these are assembled into a complete image, the per-image noise characteristics will be comparable.

And on top of that, the 12MP image will be able to capture more detail. Also, the Olympus / Panasonic sensors are designed with the aim of getting the most out of the "quarter sensor", whereas the FF sensor designer has a much larger sensor area available to play with, so it is likely that a "quarter" section of that sensor will not be as well optimized as a dedicated m4/3 sensor.
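A rough back-of-the-envelope sketch of the per-pixel versus per-image noise argument above, in Python (purely illustrative photon counts, assuming shot noise dominates):

# Same total light split across 6M large pixels or 12M small pixels.
import math

total_photons = 6e12                     # hypothetical photons hitting the sensor area

def per_pixel_snr(n_pixels):
    photons = total_photons / n_pixels
    return math.sqrt(photons)            # shot-noise SNR = signal / sqrt(signal)

snr_big = per_pixel_snr(6e6)             # larger pixels: better per-pixel SNR (~1000)
snr_small = per_pixel_snr(12e6)          # smaller pixels: worse per-pixel SNR (~707)

# Averaging two small pixels into one: uncorrelated noise adds in quadrature,
# so SNR improves by sqrt(2) and the per-image result matches the large-pixel case.
snr_small_binned = snr_small * math.sqrt(2)

print(round(snr_big), round(snr_small), round(snr_small_binned))   # 1000 707 1000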
 

photoSmart42

New member
If you down-sample the 12MP image to 6MP, you will eliminate the per-pixel advantage. One way to think about it is that the total amount of light falling on the sensor will be the same in each case.
Not true. The FF pixels are larger than the ones on the GH1, for example. Perhaps you're confusing the 2x crop factor with the difference in sensor sizes. The FF sensor has 4x the area of the 4/3 sensor! The larger pixels on the FF sensor are what allow the FF sensors to provide so much more information to the camera processor. The smaller individual pixels in the smaller sensor provide a much smaller bandwidth of information because of their size - being small means they have less ability to deal with noise, which is why the GH1 isn't as good at high ISO as a FF camera. As technology improves, the smaller pixels will eventually be able to do better, approaching today's FF sensor quality.

I'm almost willing to bet a paycheck that a 4/3-sized 6 MP cropped version of the D3s sensor, for example, would generate better quality images with more dynamic range and higher ISO sensitivity than the 4/3-sized 12 MP sensor in the GH1.
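For concreteness, the pixel-size numbers being compared work out roughly like this (nominal sensor dimensions, ignoring gaps between photosites; a sketch, not manufacturer data):

# Rough pixel-area comparison between a 24MP full-frame and a 12MP (micro) 4/3 sensor.
ff_area_mm2 = 36.0 * 24.0       # "full frame", about 864 mm^2
ft_area_mm2 = 17.3 * 13.0       # (micro) 4/3, about 225 mm^2

area_per_pixel_ff = ff_area_mm2 / 24e6 * 1e6    # square microns per pixel
area_per_pixel_43 = ft_area_mm2 / 12e6 * 1e6

print(round(ff_area_mm2 / ft_area_mm2, 2))      # ~3.84x total area
print(round(area_per_pixel_ff, 1))              # ~36 um^2 per pixel (about 6 um pitch)
print(round(area_per_pixel_43, 1))              # ~18.7 um^2 per pixel (about 4.3 um pitch)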
 

photoSmart42

New member
the flaw there is that the sensor sends nothing to the processor... it reacts to a light level dependent on what colour filter happens to be over it... an A/D converter takes the analog value from the sensor, converts it to digital, and then passes it on to the processor. The processor's job is to decode the whole mess of information and recreate the analog information

anyway that's how I understand it... I have been wrong before
Definitely not what actually happens. The camera sensor is made up of millions of individual pixels (hence 'megapixel'), and each one of those is an actual light sensor that sends information to the processor. Each pixel has an assigned location on the grid, and the processor takes all that information and puts together an image from that. There's no guesswork done by any part of the system. The processor interprets the value of the signal from each pixel and translates that into a color band, which then turns into a color pixel on the photo. Just because analog light hits the sensor doesn't mean it's an analog sensor. The only analog sensor is film. Think of an insect eye analogy - each pixel is one of those individual elements of the insect eye.

Here's some sample reading that explains a bit how sensors work: link
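As a minimal sketch of the grid idea described above (toy values; real sensors add colour filtering and an A/D step, as discussed further down the thread):

# Each photosite has a fixed (row, col) address, one digitized reading, and (in an
# RGGB Bayer layout) one colour filter assigned purely by its position on the grid.
import numpy as np

height, width = 4, 4
raw = np.arange(height * width).reshape(height, width) * 100   # stand-in readings

bayer = np.array([['R', 'G'], ['G', 'B']])                     # RGGB filter tile
filters = np.tile(bayer, (height // 2, width // 2))

# The processor pairs each reading with its grid location and filter colour when
# assembling the final image.
for y in range(height):
    for x in range(width):
        print((y, x), filters[y, x], raw[y, x])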
 

RichA

New member
It won't happen, because no one wants it.

What I'm saying is that you take a 24MP FF sensor as it is (i.e. the actual platter on which camera sensors are born), and divide it into 4 pieces to create 4 4/3 sensors out of it that you'd drop into an m4/3 camera - same platter, just getting 4x the yield out of it. No need to create a new sensor - just use an existing sensor and make four smaller ones out of it so you inherit the qualities of the larger sensor. Certainly an alternate path is to improve the existing 4/3 sensors, but to get a 4/3 sensor to the quality of a FF one will take considerable time and money. To me it's a no-brainer to take something that exists and use it as such.
Problem is, the mfgs and users don't share that view. They want more megapixels, not fewer, which is why the D3x is $8000 and the D3 is $5000. It would have been interesting to see what they could have done with the Olympus E-1 and its 5 megapixels with a modern sensor and processing, but it isn't going to happen. In reality, do we really need clean 3200 ISO images? Who wants images that have low noise if DR is reduced dramatically relative to 200 ISO and colour suffers? Most people who need the kind of noise reduction offered by sensors like that of the D700 will simply buy that camera, or faster lenses, or both. There will never be another DSLR or micro 4/3rds under 12 megapixels.
 

wonderer

Guest
I'm almost willing to bet a paycheck that a 4/3-sized 6 MP cropped version of the D3s sensor, for example, would generate better quality images with more dynamic range and higher ISO sensitivity than the 4/3-sized 12 MP sensor in the GH1.
Unfortunately the quarter crop of a D3s sensor would only be 3MP, not 6MP. To get a 6MP sensor you will have to crop the D3x, A900 or 5D Mk II sensors. Unfortunately I don't have any of these cameras; otherwise I would have been more than happy to take you up on your paycheck offer :)

We can go on debating this forever with no agreement :) Nevertheless, I will make one more post later when I have more time to write down some math. If you are not convinced even after that, then we will have to agree to disagree :)
 

photoSmart42

New member
Unfortunately the quarter crop of a D3s sensor would only be 3MP, not 6MP. To get a 6MP sensor you will have to crop the D3x, A900 or 5D Mk II sensors.
Obviously the 6MP assumption holds for a 24MP FF sensor. Doesn't matter which camera it comes from - irrelevant to the discussion.

Obviously we'll never be able to resolve this issue because I'm not a multi-millionaire sitting on the ability to make my own camera sensor out of a FF one. The math makes sense to me because I can certainly compare a 2x crop of a 24MP FF DSLR image (i.e. a 6MP-equivalent slice of the image) with a full-sensor image from a 12MP 4/3 camera and know that I'm looking at better quality. It's no different a concept from what I'm talking about.
 

monza

Active member
A 24MP full-frame sensor has 4x the area, so it will take four times as many wafers to produce the same quantity of sensors as micro 4/3, all other things being equal. In addition, the yield on a smaller sensor is probably greater. From that standpoint, a 6MP micro 4/3 sensor would cost significantly less and offer the same characteristics (high ISO performance, etc.).

6MP is more than enough for most applications.
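monza's cost point, sketched with made-up wafer economics (hypothetical wafer size and wafer cost; real dies-per-wafer figures also depend on edge loss and defect density, both of which favour the smaller die):

# Crude dies-per-wafer and cost-per-die comparison (illustrative numbers only).
import math

wafer_diameter_mm = 300.0
wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2
wafer_cost = 5000.0                                  # hypothetical processed-wafer cost

for name, die_area_mm2 in [("full frame (36x24)", 36.0 * 24.0), ("4/3 (17.3x13)", 17.3 * 13.0)]:
    dies = int(wafer_area_mm2 / die_area_mm2)        # upper bound, ignores rectangular packing
    print(name, dies, "dies per wafer,", round(wafer_cost / dies, 2), "per die")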
 

pellicle

New member
What I'm saying is that you take a 24MP FF sensor as it is (i.e. the actual platter on which camera sensors are born), and divide it into 4 pieces to create 4 4/3 sensors out of it
it's not like slicing up sheets of 8x10 to get four 4x5 sheets

what wonderer said is exactly right.

Personally I wouldn't, but I would love something like a Bessa T with a 6 MP full frame sensor in it, unless it was priced like a Leica instead of a Cosina
 

pellicle

New member
Hi

Obviously the 6MP assumption holds for a 24MP FF sensor. Doesn't matter which camera it comes from - irrelevant to the discussion.
...
It's no different a concept from what I'm talking about.
well if you put it that way, I agree.

Personally I reckon that 6MP is a very good target. I've sized up 6MP images in Photoshop, printed them nicely to 47 cm wide, and been satisfied with them at normal viewing distances. My wife has some A4-sized prints from my 10D on her wall at work, and the comments from her fellow workers are reported to be good.

I recall reading some time ago that 6MP was all that was needed; a comparison was made back in 2007 with three portraits set up for public evaluation. It seemed that few people were able to tell the difference between the 5, 8 and 13MP images when they were printed to the same size.

Of course format makes a difference for stuff like DoF and also lens interchangeability.

so yes, I'd be happy with a clean-as-a-whistle 6MP 4/3 camera
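For reference, pellicle's 47 cm figure works out to a reasonable print density, assuming a roughly 3000-pixel-wide 6MP frame (a quick sketch, not a statement about any particular camera):

# Rough print-density check for a ~6MP image assumed to be about 3000 pixels across.
print_width_cm = 47.0
print_width_in = print_width_cm / 2.54
pixels_across = 3000

ppi = pixels_across / print_width_in
print(round(ppi))    # ~162 ppi, generally acceptable at normal viewing distances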
 

kevinparis

Member
Not wanting to get picky... but I don't like being told I am wrong when I am not... the link you sent me describes almost exactly what I described as happening.

When I was talking about the sensor, I meant the sensor in each individual cell of the overall sensor.

The actual part of the sensor that detects light is analog... it can't be anything else

"The next step is to read the value (accumulated charge) of each cell in the image. In a CCD device, the charge is actually transported across the chip and read at one corner of the array. An analog-to-digital converter turns each pixel's value into a digital value. In most CMOS devices, there are several transistors at each pixel that amplify and move the charge using more traditional wires. The CMOS approach is more flexible because each pixel can be read individually."

K


Definitely not what actually happens. The camera sensor is made up of millions of individual pixels (hence 'megapixel'), and each one of those is an actual light sensor that sends information to the processor. Each pixel has an assigned location on the grid, and the processor takes all that information and puts together an image from that. There's no guesswork done by any part of the system. The processor interprets the value of the signal from each pixel and translates that into a color band, which then turns into a color pixel on the photo. Just because analog light hits the sensor doesn't mean it's an analog sensor. The only analog sensor is film. Think of an insect eye analogy - each pixel is one of those individual elements of the insect eye.

Here's some sample reading that explains a bit how sensors work: link
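The readout step in the passage quoted by kevinparis can be sketched like this (arbitrary full-well capacity and bit depth, chosen only to illustrate the analog-to-digital conversion):

# A photosite's accumulated charge (analog) is scaled and quantized by an A/D
# converter into a raw digital number that is handed to the processor.
full_well_electrons = 40000        # hypothetical full-well capacity
adc_bits = 12                      # hypothetical 12-bit converter
adc_levels = 2 ** adc_bits         # 4096 possible raw values

def read_out(collected_electrons):
    fraction = min(collected_electrons, full_well_electrons) / full_well_electrons
    return int(fraction * (adc_levels - 1))

print(read_out(10000), read_out(40000), read_out(55000))   # 1023 4095 4095 (last one clipped)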
 

iiiNelson

Well-known member
Same principle applies to other situations where arrays of pixels are involved, such as in synthetic RADAR applications.
RADAR literally doesn't use pixels. Pixel is an abbreviation for picture element. RADAR is an acronym for Radio Detection and Ranging, and it works in all forms by "imaging" radio waves that are reflected back to the sensor. All energy that is diffused, absorbed, or scattered is effectively lost. Photosensors pretty much only measure whatever is passively collected on the sensor.

It's not related to photo imaging in any way except that both lie on the electromagnetic spectrum, to be honest...
 

brianc1959

New member
. . . assuming the resolution wars are indeed over as it's being claimed . .
Thank goodness this is a bad assumption! I can't imagine that sensor resolution for 4/3 won't at least double in the near future to 25MP, and then corresponding 35mm and MFDB will be at 100MP and 200MP, respectively.
 

photoSmart42

New member
I can see it's pointless to continue this conversation as far as I'm concerned. I'm trying to explain what to me was a simple concept to folks with about 15 different levels of technology backgrounds and experiences. Regardless of how simple or complex I get in my explanation and my analogies, someone will get offended, and that misses the original point I was trying to make.

Thanks for entertaining my flight of fancy. I'll stick to photo discussions from now on.

Cheers!
 

woodmancy

Subscriber Member
. . . . The actual part of the sensor that detects light is analog... it can't be anything else . . . .

K
Remember that light radiation has a duality - waves and particles (photons).

You can measure either one. Photon counting has been around for at least forty years. I guess using it at the micro level on one of these chips is a challenge. But I've been out of this for so long, I don't know the current status (but I bet they are working on it).

Keith
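On the photon-counting point, a quick illustrative Poisson sketch of why counting statistics alone give noise that grows only as the square root of the count, so SNR improves as more photons are collected:

# Photon arrivals in a fixed exposure are Poisson-distributed, so the standard
# deviation of the count is sqrt(mean) and SNR scales as sqrt(photons collected).
import numpy as np

rng = np.random.default_rng(0)

for mean_photons in (100, 10_000, 1_000_000):            # small to large photosite "buckets"
    counts = rng.poisson(mean_photons, size=100_000)
    measured_snr = counts.mean() / counts.std()
    print(mean_photons, round(measured_snr, 1), round(np.sqrt(mean_photons), 1))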
 

Vivek

Guest
I can see it's pointless to continue this conversation as far as I'm concerned. I'm trying to explain what to me was a simple concept to folks with about 15 different levels of technology backgrounds and experiences. Regardless of how simple or complex I get in my explanation and my analogies, someone will get offended, and that misses the original point I was trying to make.

Thanks for entertaining my flight of fancy. I'll stick to photo discussions from now on.

Cheers!

Yes, that is exactly the problem. Expressing your requirement clearly, unambiguously and without resorting to oversimplified expressions ("cutting up a sensor wafer") wasn't that easy...

What was the original point?
 