The GetDPI Photography Forum


More on Diffraction - Why is it There?

MGrayson

Subscriber and Workshop Member
This is a pure lecture post. If I were modern, I'd have a YouTube channel and put it there. But I'm old fashioned and like pictures. I want to look at what happens inside your camera at time steps of a few trillionths of a second. I mean, who wouldn't? The time it takes light to leave the rear element of the lens and smack into your sensor is about a ten billionth of a second. All the focusing and diffraction is done by the time the shutter has been open for another ten billionth of a second (fast shutter!), and the ensuing 1/1000 second exposure lets in a column of light 300 kilometers long. Those extra million nanoseconds do pretty much the same thing that the first nanosecond did, so we'll ignore them.

In the interest of visibility, I'm going to hugely increase the wavelength of the light. Instead of half a micron (5,000 Å), I'm going to use about 2.5mm, or 5,000 times as long - we're talking 120 GHz. Above WiFi frequencies, but not hugely. As you no doubt recall from our previous diffraction discussion, the longer the wavelength, the bigger the blob, so we'll hit diffraction limits at pretty big apertures.
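Sanity-checking those numbers (nothing deep, just the arithmetic — a sketch I've added, not part of the original simulation):

```python
c = 3.0e8            # speed of light, m/s

visible = 0.5e-6     # half a micron (5,000 angstroms), in metres
scaled = 2.5e-3      # the 2.5 mm stand-in wavelength

scale_factor = scaled / visible    # ~5,000x, as promised
freq_ghz = c / scaled / 1e9        # ~120 GHz
column_km = c * 1e-3 / 1e3         # light travelled in a 1/1000 s exposure: ~300 km
```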

What will be our subject? A point of light. A star. Exciting, huh? Light waves radiate from this star in concentric spheres, but by the time they get here, the spheres might as well be planes. Over the surface of your front element, that sphere will deviate from planar by a number with 29 zeroes after the decimal place. Flat. At this point, you might be asking yourself "Why do they stay flat?". Well, as anyone who has used a long lens or a telescope can tell you, they DON'T stay flat. Because air sucks! But why is it even remotely flat? How does the light know to DO that?

We think of light as going straight through transparent stuff like air and glass, maybe with some bending, but it isn't that simple. Light particles (Einstein was once offered an academic job DESPITE the fact that he believed in these ridiculous things called photons) travel about a millionth of an inch through air, and much less through glass, before they "interact" with the stuff they're going through. And every time they do, there's a chance of that light rushing off in some new direction. Thanks to Quantum Mechanics, we can say that the light is running off in ALL directions at once. So what keeps the waves lined up? Think about a dozen or so teenage boys standing in a line in a swimming pool. If one of them jumps up and down, waves will radiate from him in all directions. If all of them jump up and down in sync, waves will spread out from them in a line, at least away from the ends of the line of boys. Why? Because if you are facing them from 5 feet away and take a step to your right, the line of boys will look the same. At least the ones close enough to you to be causing waves. The whole line parallel to the boys will be waving up and down in sync. This will not be true if you are near the end of the line (dramatic foreshadowing music).

Of course, we can do the math, observe that the Wave Equation (amazingly, the equation for waves) is linear, and so sums of solutions are solutions, and ... but I was hoping that the image of rowdy teenagers would be more compelling.

So to figure out what is happening when light first enters your camera (remember the camera?) we need to know what one boy jumping up and down does to the water, figure out where all the boys are, and add up their waves. As anyone who has thrown a pebble out into a still pond knows, circular ripples radiate outwards. A bit of physics tells us how the heights of the ripples change as they spread, and that's what we use for our one boy jumping, or, equivalently, light hitting one point at the entrance to our camera.
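For the curious, the "one boy jumping" ripple can be sketched in a few lines of numpy. This is a toy illustration of my own, not the simulation code behind the pictures below; the 1/sqrt(r) fall-off is the 2-D (water-surface) version of how ripple height decays as the energy spreads over a growing circle.

```python
import numpy as np

# One Huygens point source: a circular ripple whose height falls off
# like 1/sqrt(r).  Toy numbers throughout - 2.5 mm "light", as in the post.

wavelength = 2.5e-3                  # metres
k = 2 * np.pi / wavelength           # wavenumber
c = 3.0e8                            # speed of light, m/s
omega = k * c                        # angular frequency

def ripple(x, y, t, x0=0.0, y0=0.0):
    """Wave height at (x, y), time t, from a point source at (x0, y0)."""
    r = np.hypot(x - x0, y - y0)
    r = np.maximum(r, wavelength / 10)   # don't blow up at the source itself
    return np.cos(k * r - omega * t) / np.sqrt(r)

# Sample the ripple across a 44 mm sensor, 20 mm downstream, at t = 100 ps:
x = np.linspace(-22e-3, 22e-3, 441)
h = ripple(x, 20e-3, 1e-10)
```

Summing many of these, one per point of the aperture, is all the later pictures amount to.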

This is going to be long, and the pictures are about to start... (end of part 1)
 

MGrayson

Subscriber and Workshop Member
The first and simplest case is where we forgot to put the lens on. We take the shot with a big hole in front of the sensor. What should happen? Waves of light flood in and expose every pixel evenly. This is an actual computer simulation, not just me drawing the answer. Light comes in the top and hits the sensor at the bottom. We're looking down on the space between the lens mount and the sensor. Flange distance is 20mm, Sensor width is 44mm.



As you can see, light is hitting the sensor everywhere evenly. Not a great Star image.

What happens if we close down the aperture?



A circle of light hits the sensor, as we'd expect, but there's some stuff going on out at the fringes. Those are the boys jumping up and down at the ends of the line. All we've done between the last image and this one is remove the boys from the outer parts of the line.

Great! As the aperture gets smaller, our circle gets smaller. Pinhole cameras here we come!
But as the aperture gets smaller still, the "off the end" stuff gets more important.


If we go even smaller, it gets much worse. Diffraction is just the word for what the missing boys in the line do to our waves.


I promised a few trillionths of a second at a time, but went to a full tenth of a nanosecond here, because nothing much happens other than what you see. When we introduce the lens, things will get radically different because the ends of the lines are closer to the sensor than the middle!
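The whole sequence above can be condensed into a few lines: put a row of Huygens sources (the "boys") across the aperture, add up their ripples coherently on the sensor, and watch the central blob grow as the hole shrinks. A sketch with made-up grid sizes, not the actual simulation code:

```python
import numpy as np

wavelength = 2.5e-3                  # 2.5 mm "light", as in the post
k = 2 * np.pi / wavelength

def intensity_on_sensor(aperture_width, flange=20e-3, sensor=44e-3, n_src=400):
    """Coherent sum of point-source ripples across a slit, on the sensor line."""
    xs = np.linspace(-aperture_width / 2, aperture_width / 2, n_src)  # sources
    xp = np.linspace(-sensor / 2, sensor / 2, 881)                    # pixels
    r = np.hypot(xp[:, None] - xs[None, :], flange)   # source-to-pixel paths
    field = np.sum(np.exp(1j * k * r) / np.sqrt(r), axis=1)
    intensity = np.abs(field) ** 2
    return xp, intensity / intensity.max()

def central_blob_width(aperture_width):
    """Full width at half maximum of the bright region on the sensor."""
    xp, intensity = intensity_on_sensor(aperture_width)
    bright = xp[intensity > 0.5]
    return bright.max() - bright.min()

# A 2 mm hole makes a much BIGGER blob than a 6 mm hole - diffraction.
```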

to be continued...
 

MGrayson

Subscriber and Workshop Member
When you focus on a star (oh god, that song...:eek:) the light waves have to warp so that they're going to ... focus. The planar sheet of waves must change into a spherical sheet (a circle in our diagrams) centered on the middle of the sensor. We'll start with an f/1 lens, because who doesn't do astronomy at f/1?
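In wave terms, "focusing" just means the lens delays the middle of the flat wavefront more than its edges, so every path to the focal point takes the same total time. A quick numerical way to see that, with the post's 20 mm flange distance standing in for the focal distance (my own illustrative numbers):

```python
import numpy as np

c = 3.0e8                              # speed of light, m/s
f = 20e-3                              # focal distance, matching the flange
x = np.linspace(-10e-3, 10e-3, 201)    # positions across the rear element (f/1)

path = np.hypot(x, f)                  # straight-line distance to the focus
delay = (path.max() - path) / c        # extra delay the lens adds at each x

total_time = path / c + delay          # travel time + lens delay, per ray
# Every ray now arrives in sync: a flat wave has become a converging one.
```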

Light is coming off the rear element of the lens and 11 trillionths of a second have passed. The waves start to advance! Yeah, the wave is being reflected backwards, too, but that part won't affect our region of interest. Just ignore everything above the frown. But you can see the circular arc of the wave focusing down as it should (computer simulation, remember, not me making it up. Really!)


If you look closely, you can see the circles from the ends of the aperture radiating inwards.

Another dozen picoseconds later, the waves are more than half-way to the sensor. ᕕ( ᐛ )ᕗ But now we see something unpleasant. The stuff from the ends - the edge of our aperture - will hit the sensor BEFORE the waves from the lens do. Still, our focusing wavefront looks to be on target.


First contact with the sensor. Less than 50 picoseconds in and our diffraction circles have hit first. If we had the world's fastest global shutter and stopped the exposure now, we'd see a large ring on the sensor.


FINALLY, our star image hits the sensor.


You can see the standard blob shape, but it's from our absurdly large wavelength and not so much a diffraction effect. This is an f/1 lens, after all!

And if we wait forever - another ten billionth of a second - we get the full interference pattern.


Another view so you can see the standard Airy Disk profile.


Note: The interference and Airy Disk are not due to the curvature of the wavefront. In fact, Airy's solution is for diffraction of the flat wavefront. I just thought the image of the diffraction effects preceding the image itself were surprising and beautiful.
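For reference, the Airy profile in that last image is I(u) = (2 J1(u)/u)^2, where J1 is the first-order Bessel function. A self-contained numpy sketch I've added (computing J1 by direct quadrature so nothing beyond numpy is needed):

```python
import numpy as np

def bessel_j1(x):
    """J1 via its integral form (1/pi) * int_0^pi cos(tau - x sin tau) dtau,
    evaluated with the midpoint rule."""
    n = 2000
    tau = (np.arange(n) + 0.5) * np.pi / n
    integrand = np.cos(tau[None, :] - np.outer(x, np.sin(tau)))
    return integrand.sum(axis=1) / n

def airy_intensity(u):
    """Airy disk radial profile (2 J1(u)/u)^2, with the u -> 0 limit of 1."""
    u = np.asarray(u, dtype=float)
    out = np.ones_like(u)
    nz = u != 0
    out[nz] = (2 * bessel_j1(u[nz]) / u[nz]) ** 2
    return out

u = np.linspace(0.0, 10.0, 1001)
profile = airy_intensity(u)
# The first dark ring sits near u = 3.8317, the first nonzero root of J1.
```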

What happens if we make the aperture smaller? Tune in next time....
 

MGrayson

Subscriber and Workshop Member
We go to f/8. What becomes of our star? A few picoseconds in and...


Running the film at high speed... Still focusing...


Hmm... Not focusing so much now, is it...


Splat. Frankly, we'd have done better with the sensor 10 mm closer to the lens!


And after the full ten billionth of a second: pitiful.
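A back-of-the-envelope check on why f/8 fails so badly here: the standard Airy formula puts the disk diameter (first dark ring to first dark ring) at about 2.44 x wavelength x f-number, and with 2.5 mm "light" that outgrows the sensor:

```python
# Airy disk diameter at the post's scaled-up wavelength.
wavelength = 2.5    # mm
for N in (1, 8):
    d = 2.44 * wavelength * N
    print(f"f/{N}: Airy disk ~ {d:.1f} mm across")
# f/1 gives ~6.1 mm; f/8 gives ~48.8 mm - wider than the 44 mm sensor. Splat.
```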


Well, that's all I wanted to say. I hope you enjoyed the first 100 picoseconds inside your camera.

Best,

Matt
 

MGrayson

Subscriber and Workshop Member
@MGrayson - I enjoy your lectures!
You have a knack for making unintuitive concepts intuitively easy to understand!

Thank you for taking the time to do this!

Anwar
Anwar,

Thank you. Shining light (pun unintended) on technical matters is my personal and professional passion. My first teaching job was at MIT, and the kids did a parody of my lecturing style - something along the lines of "and so we see that 2 + 3 = 5 ... (pause) ... But what's REALLY going on is ...." Apparently I haven't changed much in 40 years. :LOL:

Matt
 

cunim

Well-known member
I once had dinner with a lady named Einstein. She was, in fact, a direct descendant and told a story of AE's funeral during which a niece gave the eulogy. She said that Uncle Albert tried to play in a chamber music group at Princeton, but his timing was lousy. In photography, we have the opposite problem: our practical skills may be pretty good, but our understanding of the theoretical fundamentals tends to be lousy.

@MGrayson, thanks for taking the trouble to try helping us with that.
 

MGrayson

Subscriber and Workshop Member
You're most welcome! I am the first, though, to admit that a good eye is worth more than all the technical understanding in the world. Well, that overstates the case, but I'd rather have great photos than understand perfectly why they're bad. :ROFLMAO:

(I heard that story with one of the other violinists complaining "Albert, you can't count!" :unsure:)
 

tenmangu81

Well-known member
@Matt
Thanks a lot for your very intuitive post!! You are very good at explaining difficult concepts in very simple ways.
I guess the ordinates in your graphs are the dimensions of the sensor in mm. Am I right? Very nice description indeed.

I have a question, however. I would expect the diffraction figures to have a rotational symmetry matching the shape of the diaphragm. If the diaphragm has, for instance, 8 blades, you should get an Airy pattern with an 8-fold symmetry, i.e. every 45°. And this is what we get, for instance, from point lights shot at night: an 8-ray star.
But maybe I am completely wrong and I misunderstood your point....
 

tenmangu81

Well-known member
Ahhhh!! OK!! I understand now after having read more carefully. You are just simulating the propagation of a wave as a function of time, not a steady-state situation.
So many picoseconds have passed before I understood....
If I want to get the result of diffraction as an optical effect (defect) related to the aperture, I should consider not only a plane, as you do in your demonstration, but should introduce the shape of the hole (diaphragm) by rotating around a vertical axis in the plane of my (your) display. Right?
 

MGrayson

Subscriber and Workshop Member
Oh, yes. I'm taking a slice through the middle. The whole picture needs to be rotated around the y-axis. My purpose was to show how the evolution of the wavefront leads to the well-known diffraction pattern - a spot with fringes.

Take this image


Here's a volumetric shading of it in 3D. The focusing wavefront is the orange center. The blue haze are the diffraction waves. We're standing behind the sensor looking towards the lens and a bit down.


Here's the final image from that series showing the Airy pattern on the sensor plane.


These are pretty, and the movie would be cool, but I didn't think they would be as easy to interpret. Oh well, it's only electrons. I'll make the movie....

Matt
 

SeanThekayaker

New member
That is most amusing!
 

SeanThekayaker

New member
Agree with Cunim and the others. This was a fascinating lecture. Thank you for sharing and the forum is lucky to have you. The initial images/graphical representation also caused me to think about what happens when light hits our retinae. It also made me think about the wave forms of tsunamis from above. You also made me appreciate how much our primate brains are capable of astonishing levels of abstraction — dealing with such immeasurable quantities of time (assuming time exists and really can be quantified). Thank you and I hope that we get more lectures in the future.
 

P. Chong

Well-known member
Thanks Matt for the great lecture. Love it!

As the behaviour of the diffraction is known, would it be possible to develop a function to cancel it, like noise cancellation headphones?
 

MGrayson

Subscriber and Workshop Member
Unfortunately not. My recollection is that the transform method would involve zeros in the denominator. Let me dig into it a bit more, or someone more knowledgeable can speak up.

Matt
 

anwarp

Well-known member
Isn’t the diffraction correction in the Capture One lens tool trying to do that with some sort of deconvolution based sharpening?
 

MGrayson

Subscriber and Workshop Member
Warning: This answer has not been cleared by the Bureau of Gibberish. I think it's all true, but it's not meant to be understandable. Apologies in advance.

Deconvolution is dividing the Fourier transform of the image by the Fourier transform of the blur and then inverse-transforming the result. The problem (I think) is that the Airy disk transform has zeroes, so you can't divide by it. I imagine that you could take something close that avoids the zeroes, but that would introduce other artifacts.

Then there are the practical considerations. For instance, it is theoretically possible to undo Gaussian blur, but the process will greatly accentuate any noise. (This is equivalent to flowing the heat equation backwards in time, and that has no solutions for most initial conditions.) Blurring greatly reduces higher frequencies, so unblurring means accentuating them again. The Gaussian is its own Fourier transform (suitably scaled), so deconvoluting means dividing by Exp[-x^2], which is multiplying by Exp[x^2], and that grows FAST (constants suppressed for clarity).
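A toy demonstration of that blow-up: blur a 1-D signal with a Gaussian, add noise at the one-part-in-a-million level, then naively divide the blur back out in the Fourier domain. All parameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 256, 4.0

x = np.arange(n)
signal = np.where((x > 100) & (x < 156), 1.0, 0.0)   # a clean test target

k = np.fft.fftfreq(n) * 2 * np.pi
H = np.exp(-(k * sigma) ** 2 / 2)                    # Gaussian transfer function

blurred = np.fft.ifft(np.fft.fft(signal) * H).real
noisy = blurred + rng.normal(0, 1e-6, n)             # barely-there noise

recovered = np.fft.ifft(np.fft.fft(noisy) / H).real  # naive deconvolution

# Dividing by H multiplies high-frequency noise by Exp[+k^2 sigma^2 / 2],
# so the "recovered" signal is garbage many orders of magnitude too large.
err = np.abs(recovered - signal).max()
```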

Yeah, the Airy disk transform is zero outside a range, which gets larger as your aperture gets larger (no surprise). So you can't divide by it to de-convolute. If you look this up online, you'll find the words "Autocorrelation of the aperture", which is the overlap of the aperture with itself translated some distance. Once that distance is greater than the diameter of the aperture, the overlap is zero. Knowing how much to trust random people on the internet (present company excepted, of course), I also asked Mathematica nicely to perform the transform and got
Ungodly Mess * (1 + UnitStep[-2 \[Pi] - x] (-1 + UnitStep[x]) + UnitStep[x] (-2 + UnitStep[-2 \[Pi] + x]))
And that vanishes for |x| > 2*Pi, as it should.
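The same fact, checked numerically rather than symbolically: autocorrelate a slit aperture with itself and the overlap is exactly zero once the shift exceeds the slit width. Those frequencies aren't just attenuated, they're gone for good. A numpy sketch, not the Mathematica computation above:

```python
import numpy as np

a = 100                          # slit width, in samples
p = np.zeros(3 * a)
p[a:2 * a] = 1.0                 # the slit, with dark space on either side

otf = np.correlate(p, p, mode="full")   # overlap of the slit with a shifted copy
otf = otf / otf.max()                   # normalize: 1.0 at zero shift
shifts = np.arange(-(len(p) - 1), len(p))

# otf is a triangle: it falls linearly from 1 and is EXACTLY zero for
# |shift| >= a - nothing to divide by out there.
```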
 