The GetDPI Photography Forum


Phase One P45+ Life

Pemihan

Well-known member
But those voices are telling me to buy more equipment!

:chug:
Then for God's sake throw the tin foil in the trash and listen to the voices and repeat after me: buy more equipment, buy more equipment, buy more equipment... :lecture:
 

ErikKaffehr

Well-known member
Ray,

HST, that sounds like a really nice telephoto lens!

Best regards
Erik


I've worked with HST exposures that are so peppered with cosmic ray hits that it's hard to distinguish any of the real stars. This is one reason why very long exposures are subdivided into multiple shorter ones, and stacked with aggressive statistical thresholding.
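(As a rough sketch of what that statistical thresholding can look like - hypothetical NumPy code, not the actual HST pipeline - each pixel's samples across the aligned exposures are compared against their median, and outliers such as cosmic-ray hits are rejected before averaging:)

import numpy as np

def sigma_clipped_stack(frames, kappa=3.0):
    # frames: a sequence of co-registered 2-D exposures of identical shape
    cube = np.stack(frames, axis=0)                    # (n_frames, ny, nx)
    median = np.median(cube, axis=0)                   # robust per-pixel estimate
    mad = np.median(np.abs(cube - median), axis=0)     # median absolute deviation
    sigma = 1.4826 * mad                               # robust stand-in for the std deviation
    # Cosmic rays land on different pixels in different frames,
    # so they stand out against the per-pixel median.
    outliers = np.abs(cube - median) > kappa * np.maximum(sigma, 1e-9)
    clipped = np.ma.masked_array(cube, mask=outliers)
    return clipped.mean(axis=0).filled(median)         # fall back to the median if all samples rejected

# e.g. stacked = sigma_clipped_stack([exposure_1, exposure_2, exposure_3])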

Ray
 

ondebanks

Member
Ray,

HST, that sounds like a really nice telephoto lens!

Best regards
Erik
Hi Erik,

Yes, it's amazing what you can do with a 57600 mm focal length telephoto!

Here's something I made with it many years ago...supernova remnant CTB80, an RGB tricolour composite with R = ionised Sulphur, G = ionised Hydrogen, and B = Stromgren y continuum band. The white circle marks the location of a radio pulsar that we were trying to locate in visible light.



I get a kick out of the fact that the Hubble has a native focal ratio of f/24, and many of its instrument modes increase that number, to as high as f/288. You can already picture a few photography "gurus" throwing up their hands in horror - "You mustn't shoot at such small f-stops - you'll get terrible diffraction softness!" - or even better - "Of course, the reason they use such slow f-ratios is to increase the depth of field" :facesmack: :LOL:
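(For anyone who wants to check those numbers, a back-of-the-envelope sketch, assuming the 2.4 m primary mirror and green light around 550 nm:)

# Focal length follows from aperture x f-number; the Airy disc diameter
# on the detector scales as roughly 2.44 x wavelength x f-number.
aperture_mm = 2400.0
wavelength_mm = 550e-6          # 550 nm expressed in mm

for f_number in (24, 288):
    focal_length_mm = aperture_mm * f_number
    airy_diameter_um = 2.44 * wavelength_mm * f_number * 1000.0
    print(f"f/{f_number}: focal length {focal_length_mm:.0f} mm, Airy disc ~{airy_diameter_um:.0f} um")

# f/24  -> 57600 mm and an Airy disc of roughly 32 um
# f/288 -> 691200 mm and an Airy disc of roughly 386 um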

Ray
 

gerald.d

Well-known member
Hi Erik,

Yes, it's amazing what you can do with a 57600 mm focal length telephoto!

Here's something I made with it many years ago...supernova remnant CTB80, an RGB tricolour composite with R = ionised Sulphur, G = ionised Hydrogen, and B = Stromgren y continuum band. The white circle marks the location of a radio pulsar that we were trying to locate in visible light.



I get a kick out of the fact that the Hubble has a native focal ratio of f/24, and many of its instrument modes increase that number, to as high as f/288. You can already picture a few photography "gurus" throwing up their hands in horror - "You mustn't shoot at such small f-stops - you'll get terrible diffraction softness!" - or even better - "Of course, the reason they use such slow f-ratios is to increase the depth of field" :facesmack: :LOL:

Ray
Is it not diffraction that is clearly visible in that shot though? After all, every single one of those stars is a point source of light, no?
 

ondebanks

Member
Is it not diffraction that is clearly visible in that shot though? After all, every single one of those stars is a point source of light, no?
Yes, indeed. Every star is a point source, so every star imaged through an optical system is itself a point-spread function (PSF). The x-shaped spikes you see through the stars are caused by diffraction at the support vanes for the secondary mirror. And although not visible at this scale, the core of each star image (PSF) is essentially an Airy diffraction pattern, the holy grail of any optical design.

My point is that
(1) being diffraction limited is good - it cannot be improved upon; and
(2) what's more important than obsessing about a specific f/stop number is what your sampling of the PSF is. With chunky pixels, it's entirely appropriate to use f/[big number].
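(A rough sketch of that sampling argument, assuming ~550 nm light and taking "critical sampling" as roughly two pixels across the Airy-core FWHM of about 1.03 x wavelength x f-number:)

# What f-number does a given pixel pitch call for, if the diffraction PSF
# is to be sampled at roughly the Nyquist rate (~2 pixels per FWHM)?
WAVELENGTH_UM = 0.55

def critical_f_number(pixel_pitch_um):
    return 2.0 * pixel_pitch_um / (1.03 * WAVELENGTH_UM)

for pitch_um in (6.8, 15.0):      # roughly a P45+ pixel vs a chunky astro-CCD pixel
    print(f"{pitch_um} um pixels -> about f/{critical_f_number(pitch_um):.0f}")

# 6.8 um pixels -> about f/24; 15 um pixels -> about f/53
# i.e. the chunkier the pixels, the slower the f-ratio they can happily live with.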

Ray
 

gerald.d

Well-known member
Yes, indeed. Every star is a point source, so every star imaged through an optical system is itself a point-spread function (PSF). The x-shaped spikes you see through the stars are caused by diffraction at the support vanes for the secondary mirror. And although not visible at this scale, the core of each star image (PSF) is essentially an Airy diffraction pattern, the holy grail of any optical design.

My point is that
(1) being diffraction limited is good - it cannot be improved upon; and
(2) what's more important than obsessing about a specific f/stop number is what your sampling of the PSF is. With chunky pixels, it's entirely appropriate to use f/[big number].

Ray
Ok, you've lost me a bit here on the context of the discussion (not on the context of the science - I have a physics degree.)

You seemed to be implying earlier, with your sarcastic "mustn't shoot at such small f-stops" comment, that those who consider such things are somehow missing the bigger picture.

Regardless of "chunky pixels", there is no arguing whatsoever about the simple fact that because that image you are sharing was shot at such a small aperture, you are losing a huge quantity of information because of diffraction problems.

I'm not referring to the cross here - I'm referring to the fact that you have point sources of light that, due to diffraction issues that are exacerbated by the size of the chosen aperture, are masking data in the image.

Look at any of the "bright" stars in that image. Due to diffraction, they are destroying data that would (theoretically) otherwise be available.

Isn't that the reason why photography "gurus" take diffraction into consideration?

Kind regards,


Gerald.
 

ErikKaffehr

Well-known member
Hi,

The case is that NASA is not stopping down; they use a Barlow lens (tele extender) to increase the focal length. So the f-number goes up, since it is the focal length divided by the diameter of the mirror.

To achieve good spatial resolution on large pixels, they need a large image. The angular resolution of the telescope is limited by the diameter of the lens (mirror), but an extended focal length is needed to utilise that angular resolution with fat-pixel sensors. I guess that those pixels are large (200 microns?).

Now, why do they have such large pixels? Well, my guess is that they are trying to catch photons from far away, which are not very abundant. Increasing the area of a pixel increases the probability of detection.

Now let's assume that the pixel size is something like 200 microns. That is about 29 times the size of a P45+ pixel, but the area would be 29 x 29 = 841 times larger. So if a P45+ pixel would collect 10 photons, the 200 micron pixel would yield about 8410 counts. Now, the P45+ has a readout noise of about 10 electrons, so its SNR would be about 1, which is a barely usable signal. The sensor on Hubble may have much lower readout noise, as it sits in a very cold environment, something like 78 K.
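(Spelling that arithmetic out, using the same guessed numbers - a ~6.8 micron P45+ pixel, a 200 micron pixel, 10 photons on the small pixel, 10 electrons of read noise - and adding shot noise and read noise in quadrature:)

import math

linear_ratio = 200.0 / 6.8                # ~29x the pitch, so ~865x the area
area_ratio = linear_ratio ** 2
photons_small = 10.0
photons_big = photons_small * area_ratio  # ~8650 photons on the big pixel
read_noise_e = 10.0                       # assumed P45+ read noise

def snr(signal_e, read_noise_e):
    # Poisson shot noise plus read noise, added in quadrature
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

print(snr(photons_small, read_noise_e))   # ~0.95 -> barely usable, as described above
print(snr(photons_big, read_noise_e))     # ~92   -> comfortably photon-limited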

Best regards
Erik


Ok, you've lost me a bit here on the context of the discussion (not on the context of the science - I have a physics degree.)

You seemed to be implying earlier, with your sarcastic "mustn't shoot at such small f-stops" comment, that those who consider such things are somehow missing the bigger picture.

Regardless of "chunky pixels", there is no arguing whatsoever about the simple fact that because that image you are sharing was shot at such a small aperture, you are losing a huge quantity of information because of diffraction problems.

I'm not referring to the cross here - I'm referring to the fact that you have point sources of light that, due to diffraction issues that are exacerbated by the size of the chosen aperture, are masking data in the image.

Look at any of the "bright" stars in that image. Due to diffraction, they are destroying data that would (theoretically) otherwise be available.

Isn't that the reason why photography "gurus" take diffraction into consideration?

Kind regards,


Gerald.
 

ondebanks

Member
Ok, you've lost me a bit here on the context of the discussion (not on the context of the science - I have a physics degree.)
Sure, we're way OT, but we got here by the usual organic thread drift:
- OP asked about P45+ lifetime
- Graham commented on lifetime effects of sensor and LCD aging by radiation exposure
- I agreed with Graham, but added that HST sensors last well despite intense radiation bombardment
- Erik commented that HST must be a great long telephoto
- I agreed, and showed an example from my own HST work...which reminded me of a time when someone who should have known better criticized NASA for using such large f/numbers, on the basis of the spectre of diffraction.
- and so here we are, talking about space telescope point-spread functions and diffraction, when we should be talking about the lifetime of a particular digital back. :)

You seemed to be implying earlier, with your sarcastic "mustn't shoot at such small f-stops" comment, that those who consider such things are somehow missing the bigger picture.
No, I was just joking about the tendency among some "experts" to decry anything slower than about f/16 as a no-go area because of diffraction - Synn put it well, "the diffraction police". As I said, I was involved in a discussion, a long time ago, where the HST's large f-numbers needlessly raised eyebrows. They weren't missing the bigger picture - they were missing the context that detail (angular resolution) is determined by physical aperture and not by focal ratio, and that there are two ways to arrive at a slower focal ratio: take a fast lens and stop it down; or take a fast lens and optically amplify its focal length. The first way decreases the resolution that the system is capable of, and that's the only way they were thinking of; but the second way preserves the angular resolution (and the photon collecting area), and that's what happens with telescopes.
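(A quick sketch of the difference between those two routes, taking angular resolution as the Rayleigh limit of 1.22 x wavelength / aperture diameter at an assumed 550 nm; the lens numbers below are purely illustrative, not any real lens:)

import math

WAVELENGTH_M = 550e-9
RAD_TO_ARCSEC = 206265.0

def rayleigh_limit_arcsec(aperture_m):
    return 1.22 * WAVELENGTH_M / aperture_m * RAD_TO_ARCSEC

# Route 1: a 600 mm f/6 lens (100 mm pupil) stopped down to f/24 -> pupil shrinks to 25 mm.
print(rayleigh_limit_arcsec(0.100))   # ~1.4 arcsec wide open
print(rayleigh_limit_arcsec(0.025))   # ~5.5 arcsec stopped down: 4x less detail

# Route 2: keep the 100 mm pupil and stretch the focal length 4x to reach f/24.
print(rayleigh_limit_arcsec(0.100))   # still ~1.4 arcsec: resolution unchanged

# And the HST's 2.4 m mirror, at whatever focal ratio the instruments impose:
print(rayleigh_limit_arcsec(2.4))     # ~0.058 arcsec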

Regardless of "chunky pixels", there is no arguing whatsoever about the simple fact that because that image you are sharing was shot at such a small aperture, you are losing a huge quantity of information because of diffraction problems.
When you say "that image you are sharing was shot at such a small aperture", you are falling into the same misconception as the folks I was describing above - thinking with the photographer part of your brain rather than the physicist part. It wasn't shot at a small aperture - not unless you regard 2.4 metres as small. It was shot at a slow f/ratio, but that's a different thing. They could have designed the HST as a much faster optical system, while maintaining the 2.4m entrance pupil aperture constraint...it would resolve neither more nor less detail; it would just require smaller pixels to maintain reasonable PSF sampling, and smaller pixels would be quicker to saturate and generally have proportionally higher readout noise. So, they made it slow, and used large-pixel (15 micron) CCDs.

I'm not referring to the cross here - I'm referring to the fact that you have point sources of light that, due to diffraction issues that are exacerbated by the size of the chosen aperture, are masking data in the image.

Look at any of the "bright" stars in that image. Due to diffraction, they are destroying data that would (theoretically) otherwise be available.
You are absolutely correct that there is theoretically more data/resolution to be had. But it would take a larger diameter telescope to obtain it - and if you were to say, increase the aperture 2x, then the optical surface areas to be meticulously figured increase 4x, the mass and volume increase 8x, the launcher capacity and payload requirements increase by a similar amount, the construction and testing budgets increase probably 10x, and everything takes longer, so launch is delayed by years...and then at the end of it all, you find that even when you jump up and down on it, it doesn't fit in the Space Shuttle's payload bay :facesmack: This makes the phrase "exacerbated by the size of the chosen aperture" inappropriate in this context - it implies that there was an easy choice available to use a larger aperture.

Isn't that the reason why photography "gurus" take diffraction into consideration?
It is the reason, but I hope I've shown why it's wrong to apply photographic diffraction considerations to all imaging contexts.

Cheers,
Ray
 

ondebanks

Member
Hi,

The case is that NASA is not stopping down; they use a Barlow lens (tele extender) to increase the focal length. So the f-number goes up, since it is the focal length divided by the diameter of the mirror.

To achieve good spatial resolution on large pixels, they need a large image. The angular resolution of the telescope is limited by the diameter of the lens (mirror), but an extended focal length is needed to utilise that angular resolution with fat-pixel sensors. I guess that those pixels are large (200 microns?).

Now, why do they have such large pixels? Well, my guess is that they are trying to catch photons from far away, which are not very abundant. Increasing the area of a pixel increases the probability of detection.

Now let's assume that the pixel size is something like 200 microns. That is about 29 times the size of a P45+ pixel, but the area would be 29 x 29 = 841 times larger. So if a P45+ pixel would collect 10 photons, the 200 micron pixel would yield about 8410 counts. Now, the P45+ has a readout noise of about 10 electrons, so its SNR would be about 1, which is a barely usable signal. The sensor on Hubble may have much lower readout noise, as it sits in a very cold environment, something like 78 K.

Best regards
Erik
Thanks for chipping in, Erik. You are correct on most points. The pixel size is not as big as you thought, and the primary factor deciding pixel size is the PSF sampling rather than flux collection probability (after all, if an incoming photon misses one small pixel, it will hit the one beside it). But you are right that readout noise is lower than in MFD CCDs (5 electrons/pixel for the original workhorse WFPC2 camera, which is impressive for something made in the early 1990s).

Cheers,
Ray
 

ErikKaffehr

Well-known member
Hi,


What is the pixel size?

I was also a bit surprised to read that the original sensor was cooled by nitrogen, as it is quite cold out there. But then I realised that under near-vacuum conditions there would not be any cooling by convection.

Nice picture, by the way! Surprised to see it in colour, though!

Best regards
Erik


Thanks for chipping in, Erik. You are correct on most points. The pixel size is not as big as you thought, and the primary factor deciding pixel size is the PSF sampling rather than flux collection probability (after all, if an incoming photon misses one small pixel, it will hit the one beside it). But you are right that readout noise is lower than in MFD CCDs (5 electrons/pixel for the original workhorse WFPC2 camera, which is impressive for something made in the early 1990s).

Cheers,
Ray
 

ondebanks

Member
Hi,

What is the pixel size?
The pixel size is 15 microns for that camera.

I was also a bit surprised to read that the original sensor was cooled by nitrogen, as it is quite cold out there. But then I realised that under near-vacuum conditions there would not be any cooling by convection.
Forced cooling of the CCDs is still necessary in space, because HST spends half of each orbit in the full glare of unfiltered sunlight. Internal conduction and radiation would transmit some of the absorbed heat to the sensors.

And even if the average sensor temperature was still very, very cold, any uncontrolled rise and fall in temperature, however small, is undesirable from the point of view of maintaining a stable instrument calibration during science programmes. In other words, just cooling isn't enough - it must be regulated cooling to a set point of temperature. So the WFPC2 was maintained at -88 Celsius.

The visible light cameras (like the WFPC2, ACS, and WFC3) actually don't use cryogenic coolants like nitrogen or helium - they use thermoelectric coolers instead. This gives them an essentially unlimited lifetime. OTOH, infrared cameras (like the NICMOS) require deeper cooling - otherwise the camera detects its own thermal infrared signature as an interfering background! - and that means cryogenics. However, the helium boils off over time, which sets a limit to the usable lifetime of the camera - unless a servicing mission flies in to replace the dewar.


Nice picture, by the way! Surprised to see it in colour, though!

Best regards
Erik
Thanks! It's in colour because I made a tricolour composite of grey images through 3 different filters. An old technique that dates back to James Clerk Maxwell in 1861.

(And BTW, Ctein's rant about this, while interesting, misses the point: tricolour techniques are not at all confined to literally using red, green and blue filtration - any three spectral bandpasses can be used, from gamma rays down to radio waves).
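(For what it's worth, a minimal sketch of that kind of tricolour assembly - hypothetical NumPy code with made-up variable names, not the actual reduction pipeline:)

import numpy as np

def stretch(frame, lo_pct=1.0, hi_pct=99.0):
    # Rescale one monochrome filter frame to 0..1 between two percentiles.
    lo, hi = np.percentile(frame, [lo_pct, hi_pct])
    return np.clip((frame - lo) / (hi - lo), 0.0, 1.0)

def tricolour(red_band, green_band, blue_band):
    # Map any three spectral bandpasses onto the R, G and B display channels.
    return np.dstack([stretch(red_band), stretch(green_band), stretch(blue_band)])

# e.g. rgb = tricolour(sii_frame, hydrogen_frame, stromgren_y_frame)
# with ionised sulphur -> red, ionised hydrogen -> green, Stromgren y continuum -> blue,
# as in the CTB80 composite above.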

Cheers,
Ray
 

ErikKaffehr

Well-known member
The pixel size is 15 microns for that camera.

Forced cooling of the CCDs is still necessary in space, because HST spends half of each orbit in the full glare of unfiltered sunlight. Internal conduction and radiation would transmit some of the absorbed heat to the sensors.

And even if the average sensor temperature was still very, very cold, any uncontrolled rise and fall in temperature, however small, is undesirable from the point of view of maintaining a stable instrument calibration during science programmes. In other words, just cooling isn't enough - it must be regulated cooling to a set point of temperature. So the WFPC2 was maintained at -88 Celsius.

The visible light cameras (like the WFPC2, ACS, and WFC3) actually don't use cryogenic coolants like nitrogen or helium - they use thermoelectric coolers instead. This gives them an essentially unlimited lifetime. OTOH, infrared cameras (like the NICMOS) require deeper cooling - otherwise the camera detects its own thermal infrared signature as an interfering background! - and that means cryogenics. However, the helium boils off over time, which sets a limit to the usable lifetime of the camera - unless a servicing mission flies in to replace the dewar.
I read about nitrogen cooling on the web; nice to have up-to-date info. Temperature in near-vacuum conditions is quite an interesting subject. :)

Nice to hear about "unlimited lifetime". I guess that the HST is one of the most valuable resources available to astronomers.
Thanks! It's in colour because I made a tricolour composite of grey images through 3 different filters. An old technique that dates back to James Clerk Maxwell in 1861.
Well, my thinking was more that HST time is pretty scarce, I guess. NASA publishes some images for sheer beauty, but I guess that many observations are monochrome. So I guess that if nice multispectral images are to be shot, there needs to be a scientific need for them.


BTW, it would be nice if you could elaborate a bit more on the PSF/diffraction and pixel size issue.

Best regards
Erik
 