The GetDPI Photography Forum

70 Mpix on CMOS announced

Stefan Steib

Active member
OK, this is NOT an MF chip, but it rivals them and poses a threat to that business - the clock is ticking. A 70 Mpix CMOS sensor has been announced for January, with 3 frames per second; it was shown on October 27th. The only problem: the pixels are 3.1 µm and the sensor is only about 31 x 22 mm (calculated from the data: 10,000 x 7,096 pixels at a 3.1 µm pitch). Available in BW and RGB versions.

http://www.cmosis.com/news/press_re...-high-resolution_cmos_industrial_image_sensor
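
The arithmetic, for anyone who wants to check it (pixel counts and pitch straight from the press release):

[CODE]
# Sensor dimensions and pixel count from the press-release numbers.
pixels_h, pixels_v = 10_000, 7_096
pitch_um = 3.1
width_mm = pixels_h * pitch_um / 1000.0
height_mm = pixels_v * pitch_um / 1000.0
print(f"{width_mm:.1f} x {height_mm:.1f} mm")   # 31.0 x 22.0 mm
print(f"{pixels_h * pixels_v / 1e6:.1f} Mpix")  # 71.0 Mpix
[/CODE]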

I'm sure this is only the first of a new breed of chips to come.

Also interesting is this sentence from the press release:
"Since the CHR70M started its life as a custom imager for biometric applications, the sale of this imager for these kind of applications is excluded."
So this was a customer-driven development. I don't know how large the biometric market is, but it seems to be larger than the MF market... :confused:

Greetings from Munich
Stefan
 

PeterA

Well-known member
The biometric market is rather large, my friend... everything from retinal scanning and face recognition down to the thumbprint scanners on Bloomberg keyboards...
 

EsbenHR

Member
Stefan Steib said:
OK, this is NOT an MF chip, but it rivals them and poses a threat to that business - the clock is ticking. A 70 Mpix CMOS sensor has been announced for January, with 3 frames per second ...
Well, I think it is unlikely that these sensors will become a real alternative to MF, even if you mostly care about megapixels. Even with breakthroughs in deconvolution, there are still rules of physics that are hard to get around - at least for the usual wavelengths we use in imaging.

It does look like it is implemented like an MF chip in some ways. The AD converters are off-chip, and there are only 8 of them. Modern CMOS sensors put a ton of these directly on the imaging sensor, which helps keep the clock rate, heat and noise down. This is one of the real advantages of CMOS, in my opinion.

I think the NEX-7 et al. will show this as well; there is hardly any rational, non-marketing reason to reduce the pixel size that much for most photographic applications. Of course it can make perfect sense for other applications, and I suspect it makes a ton of sense in many biometric and scientific ones.
 

dougpeterson

Workshop Member
EsbenHR said:
I think the NEX-7 et al. will show this as well; there is hardly any rational, non-marketing reason to reduce the pixel size that much for most photographic applications. ...
I know nothing of the biometrics market, but it seems very likely that not much DOF is required to scan the fingerprint of a thumb smushed against a piece of glass, so diffraction limitations wouldn't be much of an issue there (I don't even want to calculate the largest usable f-stop for a 3 micron sensor - I think it would make me very sad). Also, in that application it seems likely that illumination would be highly controlled and "scene contrast" would be quite low, allowing an easy ETTR exposure without any shadows, so dynamic range, color fidelity in the shadows, and noise would not likely be an issue.
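
On second thought, here is the sad arithmetic anyway: a rough sketch using the common criterion that the Airy disk should stay within two pixel pitches (the 550 nm green wavelength is an assumption):

[CODE]
# Largest f-number before diffraction visibly softens a 3.1 um pixel,
# using the rough criterion: Airy disk diameter <= 2 x pixel pitch.
wavelength_um = 0.55  # mid-spectrum green light, assumed
pitch_um = 3.1
# Airy disk first-null diameter is d = 2.44 * wavelength * N, so:
max_f_number = 2 * pitch_um / (2.44 * wavelength_um)
print(f"about f/{max_f_number:.1f}")  # ~f/4.6
[/CODE]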

Doug Peterson (e-mail Me)
__________________

Head of Technical Services, Capture Integration
Phase One Partner of the Year
Leaf, Leica, Cambo, Arca Swiss, Canon, Apple, Profoto, Broncolor, Eizo & More

National: 877.217.9870 | Cell: 740.707.2183
Newsletter | RSS Feed
Buy Capture One 6 at 10% off
 

Shashin

Well-known member
Photography is still light dependent. You still need photons, and spot sizes cannot be infinitely small - wavelength does mean something. While Sony has done some incredible things with their 24MP APS-C sensor, you can clearly see the problems of small pixel pitches in examples around the web. I hope the future is more about larger chips and larger pixels than the current trend of smaller chips and smaller pixel pitches. I would rather have seen Sony pull 35mm-sensor cameras down in price than pack more pixels into their APS-C cameras.

And I really don't need more pixels. Storage and computing power for 70MP images would be a real problem, especially if you start to stitch on top of single-frame images. No one can really see the difference between the 22MP and 40MP medium-format images I produce now. I am more interested in the quality of the image, both in terms of sensor response and in terms of how the optics work with the sensor. I would rather have a large sensor with a large pixel pitch and optics that favor contrast over resolving power. I find that to meet the requirements of smaller pixels, optical engineers have to push resolving power, which results in flatter images, and there is only so much you can do about that in PP.
 

Stefan Steib

Active member
Hi Doug

I don't think this chip is for fingerprints. With that resolution it is more likely intended for face recognition, which is effectively a kind of portrait photography.
Maybe they will also use it like Canon's Wondercam (120 Mpix, also a CMOS!) for face recognition in groups.
But of course, as long as no further specs are available, this cannot be pinned down exactly. On the other hand, it would not make much sense to offer this for public use if the specs limited it to the sole purpose of flat 2D scanning.

Greetings from Munich
Stefan
 

Thierry

New member
Shashin said:
Photography is still light dependent. You still need photons, and spot sizes cannot be infinitely small - wavelength does mean something. ... I would rather have a large sensor with a large pixel pitch and optics that favor contrast over resolving power. ...
+1

Thierry
 

Wayne Fox

Workshop Member
Shashin said:
And I really don't need more pixels. ...
More pixels could be about something other than resolution... oversampling at the sensor and then downsampling the data before use could be very beneficial. Perhaps a 120mp sensor that by nature produces a file equivalent to a current 60mp camera - if done right, those files might be substantially better than the ones from a current 60mp camera.

Just like the CPU companies figured out how to get more out of CPUs with multiple cores and multi-threading rather than raw clock speed, I think there is a lot to be said for getting more out of a sensor with a substantial sensel-count increase while abandoning the 1:1 ratio of sensels to pixels. Imagine a 240mp 645 sensor that is designed to produce "only" 60mp images... basically it always operates in sensor+ mode, with 4 sensels specifically designed to produce one single pixel of the image. And yes, it would only be a "60mp" camera, yet it might blow current ones away with the quality.
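
A rough sketch of what such a "sensor+ mode" would do to the data, assuming plain 2x2 averaging (real hardware binning would no doubt be smarter):

[CODE]
# 2x2 downsampling sketch: four sensels averaged into one output pixel.
# Averaging n samples cuts random noise by sqrt(n), so 2x2 gives ~2x less.
import numpy as np

rng = np.random.default_rng(1)
scene = np.full((2000, 2000), 100.0)                  # flat gray test patch
sensels = scene + rng.normal(0.0, 10.0, scene.shape)  # assumed sensel noise

h, w = sensels.shape
pixels = sensels.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(f"sensel noise: {sensels.std():.1f}")  # ~10.0
print(f"pixel noise:  {pixels.std():.1f}")   # ~5.0
[/CODE]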

Less or no moiré, substantially less noise, and possibly amazing dynamic range, especially if the filters over the sensels were designed differently (no more Bayer pattern but something more sophisticated - maybe not even RGB filters, but some exotic combination of color filters).

I think there are better ways to do everything, but technology must catch up. I just hope there are enough engineers out there thinking outside the box, so that rather than just ramping up what we already have, they're inventing a whole new way of doing things...
 

Stefan Steib

Active member
Wayne

I think you are on the right path here. Adobe had an impressive demo of the deblurring filter in the new Photoshop CS6.

http://www.finestdaily.com/news/adobe-develops-photoshop-anti-blur-feature.html

That demo is about motion deblurring, but I can just as well imagine lenses being characterized by their parameters so that the same kind of deconvolution can be applied to images.
I think the future capability of high-resolution imaging systems of all kinds will rely heavily on such software.
The same applies to exposure bracketing for HDR, superresolution and noise removal.
It is possible that this will be the real killer of large format - breaking the image down into many tiny portions and processing them ultrafast to any needed size.

Which, BTW, is exactly what our eye and brain do to give us the impression of seamless vision.
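
To make the idea concrete, a minimal sketch of lens-aware deconvolution, assuming the blur kernel (PSF) has been measured for the lens beforehand (not Adobe's actual method, just the textbook Wiener filter):

[CODE]
# Minimal Wiener deconvolution: recover detail from a known, measured blur.
import numpy as np

def wiener_deconvolve(image, psf, noise_power=0.01):
    """Sharpen an image blurred by a known PSF via Wiener filtering."""
    H = np.fft.fft2(psf, s=image.shape)              # blur transfer function
    G = np.fft.fft2(image)                           # blurred image spectrum
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)  # Wiener filter
    return np.real(np.fft.ifft2(W * G))

# Demo: blur a synthetic image with a 5x5 box PSF, then undo the blur.
rng = np.random.default_rng(0)
sharp = rng.random((256, 256))
psf = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf, s=sharp.shape)))
restored = wiener_deconvolve(blurred, psf)
[/CODE]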

Greetings from Munich
Stefan
 

EsbenHR

Member
Wayne Fox said:
More pixels could be about something other than resolution... oversampling at the sensor and then downsampling the data before use could be very beneficial. ...
Yes, that is quite possible.

Wayne Fox said:
Just like the CPU companies figured out how to get more out of CPUs with multiple cores and multi-threading rather than raw clock speed ... abandoning the 1:1 ratio of sensels to pixels.
Actually, I don't think it is reasonable to say that CPU companies figured out how to use multi-threading. They design multi-core CPUs because improving single-threaded performance has hit a power wall, and throwing more silicon at a single core helps little. Then they cross their fingers that the software people will eventually solve the problems.

It is still very hard to write efficient multi-threaded software. I think the GPU people helped the situation a lot more than the CPU people did. Remember that this has been a very active area of research for at least four decades, and we still suck at it.

For many workloads, the biggest benefit of multiple cores is turbo-mode: run a core at full speed until it heats up too much, then switch to another core while the first one cools off.

Wayne Fox said:
Less or no moiré, substantially less noise, and possibly amazing dynamic range, especially if the filters over the sensels were designed differently ...
I don't see how this can help improve noise, but the other points are certainly valid. Sony developed a sensor with four colors (red, green, blue and "emerald"). It did not become popular for some reason, but at least the idea should work.
 

Shashin

Well-known member
Wayne Fox said:
Imagine a 240mp 645 sensor that is designed to produce "only" 60mp images... basically it always operates in sensor+ mode, with 4 sensels specifically designed to produce one single pixel of the image. And yes, it would only be a "60mp" camera, yet it might blow current ones away with the quality. ...
Yes, it is called binning, but there is no advantage to four 3 micron pixels binned over a single 6 micron pixel - it will result in the same number of photons captured. So a 60MP sensor is going to look the same as a binned 240MP sensor.

I agree that oversampling has benefits, but up to what point? The human eye has a finite ability to resolve not only pixels but also color and gradients. This becomes an accuracy/precision problem: I can make measurements in microns, but they are rather useless if I am building a house. I put up a 3.5' x 12' panoramic image taken with a 40MP sensor, and no viewer has been able to see all the detail in the print. Even the printer cannot reproduce all the information.

Now, I appreciate that I am oversampling for this work and that there are real benefits. But how far does this benefit go? I am not sure of the answer. But given a choice between 40MP with large pixels and 120MP with 25% smaller pixels, I am happy to benefit from the larger pixels.
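
A quick simulation of the binning point above, assuming ideal pixels where only photon shot noise matters (per-readout noise, which would favor the larger pixel, is ignored):

[CODE]
# Four binned 3 um pixels vs one 6 um pixel: same area, same photon count.
import numpy as np

rng = np.random.default_rng(42)
photons_per_um2 = 50.0   # assumed illumination level
n_trials = 100_000

small_area = 3.0 ** 2    # one 3 um pixel, in um^2
big_area = 6.0 ** 2      # one 6 um pixel covers the same area as four small

binned = rng.poisson(photons_per_um2 * small_area, (n_trials, 4)).sum(axis=1)
single = rng.poisson(photons_per_um2 * big_area, n_trials)

print(f"binned 2x2: mean={binned.mean():.0f} std={binned.std():.1f}")
print(f"single 6um: mean={single.mean():.0f} std={single.std():.1f}")
# Both come out around 1800 +/- 42: identical photon statistics.
[/CODE]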
 

Stefan Steib

Active member
Hi Esben
The four-color approach is already more common: Sharp has done an RGBY panel for LCDs.

http://gizmodo.com/5441497/sharps-fanciest-new-tvs-the-4+color-le920-series

It would be quite interesting to see whether, used at the input stage, this would give a better color space and an enhanced, noise-free gamut.
One could also think of a 6-color RGBYMK CFA, and maybe a subsampled lightness signal, giving something like a real LAB device. (This is possible - Leonardi did it already, 15 years ago!)

http://users.phg-online.de/tk/images/Leonardi%20Spectrascan.jpg

Here is a link, Google-translated from German, so it may be hard to read. And BTW, I erred: this was already in 1992.

http://translate.google.de/translat...categories=&archive_id=3072&from=761&keyword=

Greetings from Munich
Stefan
 

Stefan Steib

Active member
I don't know if I am the only one looking at this, but somehow it seems to me that many people in this traditional photo industry are behaving like the three monkeys: see nothing, hear nothing, say nothing.
Everything is already here; you only have to take a look. Working prototypes of an 8K video screen and camera - 33 Mpix, as video! Here are the links, available in full HD on YouTube:

http://www.youtube.com/watch?v=zX8Ux8vcexY&feature=watch_response_rev

http://www.youtube.com/watch?v=9U7e_quvkPQ&feature=related

This will not go away if it is ignored!
Some years from today you will be able to put this in your photo bag or even in your pocket - a 33 Mpix iPhone doing most of what we do today with so-called pro equipment.

Take a close look at how the image is captured in the camera: they use standard 35mm lenses (Zeiss and Sigma).

Now you know what these chips are used for!

Greetings from Munich
Stefan
 

EsbenHR

Member
Stefan Steib said:
I don't know if I am the only one looking at this, but somehow it seems to me that many people in this traditional photo industry are behaving like the three monkeys ... Working prototypes of an 8K video screen and camera - 33 Mpix, as video!
Well, as a traditional, convention-burdened ape with a hand in R&D of digital back processing, let me share an insight I think is quite universal among my colleagues and me: the Internet is a trendy fad that will go away. Just like stereophonic playback.

Yes, cool screen. I want one.

Did you see a pixel blowup of an actual image from that 33MP camera? Do you think it will be sharp?

Stefan Steib said:
This will not go away if it is ignored! Some years from today you will be able to put this in your photo bag or even in your pocket - a 33 Mpix iPhone doing most of what we do today with so-called pro equipment.
Yeah, physics is just so eighties. Who wants to mess with that stuff in this day and age!

On a more serious note, I think you tend to underestimate the underlying physics. Current color filters typically allow about 40% of visible light to pass, so you could potentially gain about 1.3 stops if you could get by without them (i.e. Foveon, microscopic diffraction gratings or whatever). The noise of the best current sensors is down to a couple of photons in low light (assuming you don't use active cooling).
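
A quick check of that 1.3-stop figure (the 40% transmission is my rough number from above):

[CODE]
# Stops gained by removing a color filter that passes ~40% of the light.
import math

transmission = 0.40
stops_gained = math.log2(1.0 / transmission)
print(f"{stops_gained:.2f} stops")  # ~1.32
[/CODE]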

After that, there is nowhere to go. You need to collect more photons, i.e. you need to include photons from a larger area.

There are, of course, other ways to do this than just putting on a bigger lens, MF-style. You can, for example, put a number of sensors side by side (also good for 3D, BTW). If you put in 4, you should be able to achieve the same per-pixel quality as the current iPhone, if the software is good enough. You could probably synthesize a shallow depth of field this way as well.
 

Stefan Steib

Active member
Esben

This was not meant as a personal attack, but did you take a look at the text?
This device was built by a customer's research lab (NHK, the Japanese television broadcaster), and the camera shown is about a third the size of the multi-sensor Ikegami camera. What thrills me is this sentence from the developer, Kohei Omura: "this is still a hand built trial device and we want to work with a manufacturer to finalize it and make a product for actual use".

So the actual customer (NHK) built the camera to motivate a company to produce it, because they want to buy it as a finished product...

WOW !

The Japanese think differently.
And I have to say I sympathize heavily!

Greetings from Munich
Stefan
 

Stefan Steib

Active member
One sentence about physics:

Yesterday I had a long talk with a friend of mine who works in the semiconductor industry. I asked him about a lot of things: steppers, CMOS production, wafer sizes and prices.

But the most intriguing piece of information, the one that sticks in my memory, is that the current feature size with light-driven lithography is 30 nm.
Some years ago every physics professor would have told you that it is impossible to reach these sizes with light, because of wavelength and diffraction. Now guess what happened? Some clever people found out how to go smaller than straightforward light theory allows: they use water/liquid-covered lenses for exposure, and light-driven lithography still has not come to a halt.

So much for physics.

greetings from Munich
Stefan

"The world's need for computers is about 2-3 for the whole planet." - IBM, 1947
 

EsbenHR

Member
Stefan Steib said:
One sentence about physics: ... Some clever people found out how to go smaller than straightforward light theory allows: they use water/liquid-covered lenses for exposure, and light-driven lithography still has not come to a halt.
The whole shebang is submerged in liquid, not just the lens. The speed of light is slower in the liquid, so the wavelengths are shortened proportionally.

You can pull a few more tricks as well, so the diffraction limit is not quite as hard a barrier as we are sometimes led to believe. The good news (for photographers) is that it is easier in sensing than in lithography.
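
Back-of-envelope, with assumed but typical numbers (193 nm ArF light, water immersion):

[CODE]
# Wavelength inside the immersion liquid is shortened by the refractive index.
lam_vacuum = 193.0   # ArF excimer laser, nm
n_water = 1.44       # assumed refractive index of water at 193 nm
print(f"effective wavelength: {lam_vacuum / n_water:.0f} nm")  # ~134 nm

# Rayleigh criterion for the smallest printable feature: R = k1 * lambda / NA,
# where immersion pushes NA above 1 (NA = n * sin(theta)).
k1, NA = 0.30, 1.35  # assumed aggressive process values
print(f"minimum feature: {k1 * lam_vacuum / NA:.0f} nm")       # ~43 nm
[/CODE]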

Unless you want to restrict yourself to taking images of fish and shipwrecks, I do not see how this will help in photography.
 

Stefan Steib

Active member
Hi Esben

If we suppose the chip they used for this camera is 24x36 (where might they have gotten that from...? Canon?????), then the pixel is about 4.5 microns. They used a Bayer scheme and interpolation, probably no different from the CCDs used in the IQs. The IQ180 has a 5.2 micron pixel size, which means this CMOS's pixel pitch is only a factor of 0.86 smaller than the current best MF back. So I say yes, maybe that's a good way to use this device, because it will probably have a light sensitivity of up to 50,000 ASA, and with that, the fish and shipwrecks may be visible on the dark floor of the sea, whereas this will not work with a current high-end CCD at 50 ASA.
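
A quick cross-check of the pitch, assuming the chip is 36 mm wide and the 8K frame is 7680 pixels across (the pixel count is my assumption, not from the videos):

[CODE]
# Pixel pitch if a 36 mm wide chip carries 7680 horizontal pixels (8K).
sensor_width_mm = 36.0
pixels_across = 7680
pitch_um = sensor_width_mm / pixels_across * 1000.0
print(f"pixel pitch: {pitch_um:.1f} um")            # ~4.7 um
print(f"vs IQ180 (5.2 um): {pitch_um / 5.2:.2f}x")  # ~0.90x
[/CODE]

Close enough to the 4.5 micron figure above.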
:cool:
Greetings from Munich
Stefan
 

EsbenHR

Member
Stefan Steib said:
If we suppose the chip they used for this camera is 24x36 ... the IQ180 has a 5.2 micron pixel size, which means this CMOS's pixel pitch is only a factor of 0.86 smaller than the current best MF back ... the fish and shipwrecks may be visible on the dark floor of the sea ...
Sure. I was only trying to argue that 70MPix full-frame and 33MPix iPhone cameras have some rather fundamental hurdles to get around.

But hey, what do I know. Someone is going to turn a cell-phone into a 100MPix imaging device with multiple lenses and use interferometry to break the diffraction barrier and make me look like an idiot. Or something like that.
 