The GetDPI Photography Forum


Quarter-full sensor for m4/3?

photoSmart42

New member
So here's a variation of a question I asked a few months back about a better sensor for this format. I see no compelling reason for an m4/3 manufacturer not to take a FF sensor (e.g. a D3s or 1D sensor), divide it into quarters, and create four 4/3 or m4/3 sensors out of it with the characteristics of the FF sensor. It makes sense from a manufacturing point of view, since you're using the same wafers and getting the economies of scale, and it makes sense from an image-quality point of view, because you'd get pro-level benefits in a smaller-format camera.

Certainly you'd end up with 6 MP m4/3 cameras using current FF sensor technology, but assuming the resolution wars are indeed over, as is being claimed, that shouldn't be an issue. Personally I'd rather enlarge a 6 MP photo (not that I'd need to) taken with a FF-like sensor than a 12 MP photo taken with a native 4/3 or APS-C sensor. If I could have the quality and performance of a professional FF sensor, I'd be sold instantly. Throw in a shutterless, mirrorless camera to hold it, and I'm in. The early FF DSLRs came with 6 MP sensors, and that was good enough resolution for professional-grade photos, so I think it's an easy argument to make. I'd buy a 6 MP m4/3 GH2 with a quarter-FF sensor in a heartbeat.
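
A rough back-of-the-envelope in Python, taking the post's 24 MP figure at face value and using standard published sensor dimensions:

```python
# Back-of-the-envelope for the "quarter-FF" idea. The 24 MP figure is
# from the post; sensor dimensions are the standard published ones.
ff_w, ff_h, ff_mp = 36.0, 24.0, 24.0             # full frame: mm, mm, MP

q_w, q_h, q_mp = ff_w / 2, ff_h / 2, ff_mp / 4   # one quadrant of the die

print(f"Quarter-FF : {q_w:.1f} x {q_h:.1f} mm, {q_mp:.0f} MP (3:2 aspect)")
print(f"Four Thirds: 17.3 x 13.0 mm (4:3 aspect)")
# One wrinkle: the quadrant keeps the FF 3:2 aspect ratio, so it would
# not be a native 4:3 frame without cropping further.
```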

Any thoughts?
 

Vivek

Guest
Won't happen. How much resolution would you then have for video?
 

wonderer

Guest
I am not sure why you are calling it a Quarter Full Frame. All you are saying is that they should make an m4/3 sensor with fewer megapixels so that the pixels are larger and thus give better DR and noise characteristics. There is no need to "chop a full-frame sensor" for that.
 

photoSmart42

New member
I am not sure why you are calling it a Quarter Full Frame. All you are saying is that they should make an m4/3 sensor with fewer megapixels so that the pixels are larger and thus give better DR and noise characteristics. There is no need to "chop a full-frame sensor" for that.
What I'm saying is that you take a 24 MP FF sensor as it is (i.e. the actual wafer on which camera sensors are made) and divide it into four pieces to create four 4/3 sensors that you'd drop into an m4/3 camera - same wafer, just getting 4x the yield out of it. No need to create a new sensor - just use an existing sensor and make four smaller ones out of it, so you inherit the qualities of the larger sensor. Certainly an alternate path is to improve the existing 4/3 sensors, but getting a 4/3 sensor to the quality of a FF one will take considerable time and money. To me it's a no-brainer to take something that exists and use it as-is.
 

Vivek

Guest
It isn't that simple, is it? Smaller sensors are easier to fabricate than larger ones.

There will be lots of improvements coming soon.

And when those improvements are applied to a larger sensor, it will always look better.

BTW, 4/3rds does have a full-frame sensor - it's just a different format.
 

kevinparis

Member
I assume you were being sarcastic, Vivek, since 1080 HD is only about 2 megapixels (1920 x 1080 pixels)

as to photosmart... well, I don't think things really work that way in silicon manufacture... you can't just cut it up like film :)
 

wonderer

Guest
First, you cannot just cut a sensor into four - a sensor is not only the array of photosites; it also has things like I/O pins, readout circuitry, etc. Just from a technical point of view, what you are proposing is not going to work.

Second, it is very likely that a "quarter" of a 24 MP FF sensor (say, the one in the 5D Mark II) would not be any better than the current 12 MP Olympus and Panasonic m4/3 sensors, and chances are it would be worse.
 

photoSmart42

New member
It isn't that simple, is it? Smaller sensors are easier to fabricate than larger ones.
Yes and no. One of the reasons larger sensors, just like larger processors (both are silicon chips), are more expensive is that there's a lot more wasted space on the wafer. You're fitting a grid of rectangular dies onto a circular wafer, and the larger those dies are, the more wafer space you throw away at the edges, which increases the cost per chip.

My thought was to take the exact same chip design, break it up into smaller pieces (the exact 2x crop works well here because it divides the die evenly), and keep everything else the same. You end up with more chips per wafer, thus realizing the cost savings, and you end up with high-quality sensors in smaller cameras. It becomes more a manufacturing problem than an R&D problem, and R&D is where most of the cost of a new product is spent.
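
For what it's worth, the edge-waste argument can be put into numbers with the standard first-order die-per-wafer estimate (the 300 mm wafer size and the formula's simplifications are assumptions on my part, not anything from the thread):

```python
import math

def dies_per_wafer(wafer_d_mm: float, die_w_mm: float, die_h_mm: float) -> int:
    """First-order estimate: gross die count from area, minus an
    edge-loss term for partial dies around the wafer's rim."""
    die_area = die_w_mm * die_h_mm
    gross = math.pi * (wafer_d_mm / 2) ** 2 / die_area
    edge_loss = math.pi * wafer_d_mm / math.sqrt(2 * die_area)
    return int(gross - edge_loss)

WAFER = 300.0  # mm; assumed wafer size - sensor fabs also run 200 mm lines

ff = dies_per_wafer(WAFER, 36.0, 24.0)        # full-frame die
quarter = dies_per_wafer(WAFER, 18.0, 12.0)   # quarter-FF die

print(f"FF dies/wafer      : {ff}")       # ~59
print(f"Quarter dies/wafer : {quarter}")  # ~281, i.e. more than 4x the count,
# since small dies also waste less area at the wafer's round edge
```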
 

Vivek

Guest
I think the reverse is very doable. Line up smaller sensors (side by side, not glued together) and make a big-frame image.

Or just use Phase One's patented pixel-shift technology to get more out of small-area sensors.

Kevin,..
 

photoSmart42

New member
Second, it is very likely that a "quarter" of a 24 MP FF sensor (say, the one in the 5D Mark II) would not be any better than the current 12 MP Olympus and Panasonic m4/3 sensors, and chances are it would be worse.
Why would it be worse? FF sensors have far better dynamic range and ISO performance than 4/3 sensors, so taking a piece out of a sensor that has all that should mean the piece keeps that quality.

I understand that cutting up the sensors isn't literally how it's done - I was being metaphorical. The fact is that sensors are indeed microchips, but they're made up of a collection of individual photosites, each representing a pixel. You could certainly lay out a smaller section of that pixel array, ending up with fewer total pixels, without changing the design of the individual pixels. So in that sense you 'cut up' what would be a FF sensor along pixel boundaries to create smaller, quarter-FF sensors.
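
As a purely conceptual illustration of "cutting along the pixel grid" (real sensors can't be diced like this, as others note, since readout circuitry and I/O sit outside the array):

```python
import numpy as np

# Conceptual only: a 24 MP full-frame pixel grid (6000 x 4000 photosites),
# from which we keep one quadrant. Each pixel's "design" is untouched;
# you simply end up with a quarter of them.
ff_pixels = np.zeros((4000, 6000))           # rows x cols of identical photosites

quadrant = ff_pixels[:2000, :3000]           # one of four 3000 x 2000 tiles
print(quadrant.shape, quadrant.size / 1e6)   # (2000, 3000) -> 6.0 MP
```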
 

kevinparis

Member
or do what pro video cameras do... have three sensors, one each for red, green and blue...

or on the other hand just accept what your camera does and take photos

K
 

photoSmart42

New member
I think the reverse is very doable. Line up smaller sensors (side by side, not glued together) and make a big-frame image.
The limitation there is that it's better for photo sensors to actually have larger pixels than smaller ones. That's what allows you to have the high dynamic range and high-ISO performance you see from professional cameras. Smaller pixels cause confusion and image smearing because they have a harder time figuring out which part of the image is sent to the processor from which pixel.
 

Vivek

Guest
The limitation there is that it's better for photo sensors to actually have larger pixels than smaller ones. That's what allows you to have the high dynamic range and high-ISO performance you see from professional cameras. Smaller pixels cause confusion and image smearing because they have a harder time figuring out which part of the image is sent to the processor from which pixel.
Not sure about that.

Is D3 more pro than D3X?
 

raymondluo

Guest
If they're not making the lens for it, it probably won't happen.
 

photoSmart42

New member
Not sure about that.

Is D3 more pro than D3X?
No, because the technology got better at making traditionally large pixels smaller while maintaining or improving performance. Undoubtedly that will happen for the smaller sensors as well, and I'm not doubting it. My only point is that I see a possibility of taking existing technology and using it as-is, without major R&D, to improve small-camera performance while the technology for the smaller sensors progresses at its normal pace.
 

kevinparis

Member
smaller pixels give worse results because they gather less light / have fewer photons excited / give a lower signal... nothing more... there is no issue about working out what to send where... the sensor doesn't think or work out anything... it just gathers light... in fact, as far as I understand, the sensor sees in analogue... it has to, because light is analogue... it's the processing of that analogue signal to digital that's critical... low signal collected... less info to work with
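
Kevin's point can be put in numbers with the usual shot-noise model: signal scales with pixel area, and photon noise scales as the square root of the signal. The pitches, flux, and read noise below are illustrative assumptions, not measured values:

```python
import math

def pixel_snr_db(pitch_um: float, photons_per_um2: float,
                 read_noise_e: float = 3.0) -> float:
    """SNR of one pixel in dB: Poisson photon arrival means shot noise
    is sqrt(signal); read_noise_e is an assumed readout noise floor."""
    signal = photons_per_um2 * pitch_um ** 2       # photoelectrons collected
    noise = math.sqrt(signal + read_noise_e ** 2)  # shot noise + read noise
    return 20 * math.log10(signal / noise)

FLUX = 10.0  # photons per square micron for the exposure (made-up value)
for pitch in (8.4, 6.0, 4.3):  # ~12 MP FF, ~24 MP FF (= quarter-FF), ~12 MP 4/3
    print(f"{pitch} um pixel: SNR ~ {pixel_snr_db(pitch, FLUX):.1f} dB")
```

Note in passing that a 6 MP quarter of a 24 MP FF die would keep the ~6 um pitch, which is larger than the ~4.3 um pitch of a 12 MP 4/3 sensor.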
 

Vivek

Guest
My only point is that I see a possibility of taking existing technology and using it as-is, without major R&D, to improve small-camera performance while the technology for the smaller sensors progresses at its normal pace.

You are perhaps not aware, or have overlooked, that the NMOS sensors used in m4/3rds are much cheaper than the others (the earlier Kodak sensors in 4/3rds, for example). These are mounted on a flex strip; all the others come in a nice ceramic package with gold-plated connectors and whatnot.

Very different technologies and fabrication processes.
 

photoSmart42

New member
smaller pixels give worse results because they gather less light / have fewer photons excited / give a lower signal... nothing more... there is no issue about working out what to send where... the sensor doesn't think or work out anything... it just gathers light... in fact, as far as I understand, the sensor sees in analogue... it has to, because light is analogue... it's the processing of that analogue signal to digital that's critical... low signal collected... less info to work with
Not quite true. The whole point of 'digital' is that the processor digitizes the light information that comes in. Each pixel collects light on its own and sends that information to the processor. The individual pixels are, by definition, the smallest unit of measure in the system. The processor does a significant amount of work to figure out what to make of the information it gets from each pixel, and how to put it all together into something it thinks is a fairly decent representation of the analog image coming in through the lens. The same principle applies in other situations where arrays of pixels are involved, such as synthetic-aperture radar.
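
A minimal sketch of the digitization step being described, with made-up numbers (the full-well capacity and ADC depth are assumptions, not any particular camera's specs):

```python
import numpy as np

rng = np.random.default_rng(0)
FULL_WELL = 40000   # assumed full-well capacity, in electrons
ADC_BITS = 12       # assumed ADC depth

# Each pixel's collected charge is one analog value; the ADC maps it
# onto 2**ADC_BITS discrete levels. Demosaicing and the rest of the
# pipeline then assemble these numbers into an image.
analog = rng.uniform(0, FULL_WELL, size=(4, 4))   # per-pixel charge
digital = np.round(analog / FULL_WELL * (2**ADC_BITS - 1)).astype(int)

print(digital)   # the quantized values the processor works with
```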
 