The GetDPI Photography Forum


Multi Row Panoramas. Biggest issue is D.O.F.

dougpeterson

Workshop Member
That's a very helpful deep dive. My summary below adds nothing new, but might be more accessible for some...

- Flat stitching: The continuous image of the subject you want is already in there projected by the lens; stitching just allows you to see more of it than your sensor size typically allows. No pixels are harmed in the process.
- Nodal stitching: You're capturing distinct views of the subject that software uses to mathematically reconstruct a continuous image. Pixels are stretched and smushed as needed.

Both have pros and cons. For example, nodal stitching can be done with ANY camera/lens and can go as wide as you want, including 360º, while flat stitching requires specialized equipment. But flat stitching (and this is my subjective and biased opinion) feels more like organically crafting a photograph, while nodal stitching feels like some form of detached wizardry, summoning an image from a virtual reality using math. I don't fault anyone who feels differently.

*or four quadrants, or 3x2 or whatever tiling
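For anyone who likes seeing the geometry in code, here is a minimal pinhole-camera sketch (my own toy model, not anyone's actual software) of why flat stitching is a pure translation while nodal stitching needs warping:

```python
import math

def project(P, f=1.0):
    """Pinhole projection of world point P=(X, Y, Z) onto the image plane."""
    X, Y, Z = P
    return (f * X / Z, f * Y / Z)

def rotate_y(P, phi):
    """Rotate world point P about the vertical (Y) axis by phi radians."""
    X, Y, Z = P
    c, s = math.cos(phi), math.sin(phi)
    return (c * X + s * Z, Y, -s * X + c * Z)

# Two world points at different positions and depths.
A = (0.5, 0.2, 4.0)
B = (-0.3, 0.1, 2.0)

# Flat stitching: the sensor slides by `shift` behind a fixed lens.
# Every projected point moves by the SAME constant offset -> pure translation.
shift = 0.25
offset_A = project(A)[0] - (project(A)[0] - shift)  # = shift
offset_B = project(B)[0] - (project(B)[0] - shift)  # = shift

# Nodal stitching: the camera rotates. The apparent motion of each point
# now DEPENDS on where it sits in the scene -> software must warp pixels.
phi = math.radians(15)
rot_offset_A = project(rotate_y(A, phi))[0] - project(A)[0]
rot_offset_B = project(rotate_y(B, phi))[0] - project(B)[0]

print(offset_A == offset_B)                     # True: shift moves all points equally
print(abs(rot_offset_A - rot_offset_B) > 1e-6)  # True: rotation does not
```

The constant offset is why no pixels are "harmed" in a flat stitch; the varying offsets under rotation are exactly the stretching and smushing described above.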
 

dougpeterson

Workshop Member
Tilt-shift lenses absolutely can be used for flat stitching: when you shift left and right (or up and down), you're effectively mimicking the kind of parallel movement you'd get from a technical camera. And because the sensor stays stationary, you avoid introducing new vanishing points. Stitching those images is usually seamless if everything's aligned and the subject is relatively flat.

Rotating 180º around the sensor axis, though (if you mean flipping the whole camera from one side to the other), can work in some cases, but that's still a rotation, not a lateral translation. You're likely to introduce subtle perspective shifts unless everything in the scene is at a uniform distance. So yes, it can work, especially with longer focal lengths and distant scenes, but it's a bit of a gamble if you're aiming for architectural precision.

Ideally the lens stays in place and the sensor moves. In a tech camera like an XT that's typically the way things natively work. In a small-format camera with a TS lens you can accomplish a similar thing by mounting the lens to the tripod rather than the camera, if/when the lens natively supports that or if you make a custom mount.

Of course, as you mention, that matters more when things are close to the camera. It wouldn't matter at all in a shot of a distant high-rise or mountain chain where everything is hundreds of meters away or further, and it would matter a great deal when shooting an image of a kitchen with a counter in the foreground a couple feet from the camera.
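To put rough numbers on why distance matters, here is a back-of-envelope script (the 50mm lens and 20mm of lateral camera movement are assumed values, not from the thread):

```python
# Rough parallax estimate: translating the camera laterally by t moves a
# subject at distance d by roughly f * t / d on the sensor (pinhole model).
# The stitching error is the DIFFERENCE between foreground and background.
# Hypothetical numbers: 50 mm lens, 20 mm of lateral camera movement.
f_mm = 50.0
t_mm = 20.0

def parallax_mismatch_mm(d_near_m, d_far_m):
    """Relative image shift (mm on the sensor) between near and far subjects."""
    return f_mm * t_mm * (1.0 / (d_near_m * 1000) - 1.0 / (d_far_m * 1000))

kitchen = parallax_mismatch_mm(0.6, 3.0)     # counter 0.6 m, back wall 3 m
landscape = parallax_mismatch_mm(500, 5000)  # everything far away

print(f"kitchen:   {kitchen:.3f} mm")    # over a millimeter -> visible seams
print(f"landscape: {landscape:.6f} mm")  # a couple of microns -> negligible
```

The kitchen case produces a mismatch more than two orders of magnitude larger than the landscape case, which is the whole argument in one number.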
 

Pieter 12

Well-known member
dougpeterson said: "Ideally the lens stays in place and the sensor moves. ... Of course as you mention that matters more when things are close to the camera."
Then there is this lens, but the angle of view limits the width of the panorama:
1749488914745.png
 

tcdeveau

Well-known member
Just skimming this, but if you're shooting at f/11-f/16, could you be running into diffraction issues? A lot of lenses I've used in the past hit a point of diminishing returns beyond f/11. I've also found that where you focus is important.
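A rough way to check the diffraction concern (my own sketch: green light assumed, and the 3.76 micron pixel pitch is a hypothetical value typical of current high-resolution sensors):

```python
# Back-of-envelope diffraction check: the Airy disk diameter is
# approximately 2.44 * wavelength * f-number.
WAVELENGTH_UM = 0.55  # green light, middle of the visible spectrum

def airy_diameter_um(f_number):
    """Approximate Airy disk diameter in microns at a given f-number."""
    return 2.44 * WAVELENGTH_UM * f_number

# Compare against a hypothetical 3.76 micron pixel pitch.
pixel_pitch_um = 3.76
for N in (5.6, 8, 11, 16):
    d = airy_diameter_um(N)
    print(f"f/{N}: Airy disk {d:.1f} um ({d / pixel_pitch_um:.1f} pixels)")
```

By f/16 the blur disk spans several pixels on such a sensor, which is consistent with the diminishing returns mentioned above.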

I've done stitched panos both by rotational means (using a rail and finding the nodal point) and with the flat stitch method. Flat stitching with the STC was pretty effortless. Not everything was perfect (even with some tilt), but seeking perfection took some of the fun out of photography and never improved my images.
 

daz7

Active member
If buying a tech cam with a separate digital back is not an option, you can buy a cheap view camera of any kind, mount your camera to a sliding adapter on the rear standard with the lens on the front one, and use flat stitching anytime you want.
I am not sure how deeply the sensor sits in Fuji cameras; if it sits really deep, you may lose some of the sensor area, but then you gain on the total stitched area anyway.
 

MGrayson

Subscriber and Workshop Member
The flashlight metaphor makes all sorts of things simpler. Why are the stitched images in the nodal panorama weirdly shaped? Because a flashlight beam NOT aimed directly at a building will spread as it goes down the block. A beam shaped like your sensor will illuminate exactly the deformed quadrilateral you see in the stitching program.

For those unfortunate enough to remember conic sections: If your sensor were round, the flashlight beam would be a cone, and the off-center illuminated parts of the building facade (or mountain range) would be ellipses, widening to a parabola, and then a hyperbola as you aim further and further down the street/valley. The total possible image of an infinite building would lie between two hyperbolas (a smile rising above the top of the central frame and a frown extending from its bottom). This is the bat-wing shape you see before cropping the panorama to rectangular. If you're still with me, the space illuminated by a rotating flashlight is the volume between two vertical cones of darkness - one over your head, one under your feet (looking very much like the light cones in relativity). Those cones intersect the wall in those two hyperbolas.
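The hyperbola claim can be verified numerically. This is my own sketch of the setup (camera at the origin, flat wall at distance d, beam elevation half-angle a), not Matt's Mathematica code:

```python
import math

# Pan the beam by angle phi; the TOP edge ray of the beam has direction
# (cos a * cos phi, cos a * sin phi, sin a) and hits the wall at x = d.
d = 10.0
a = math.radians(20)

def top_edge_point(phi):
    """Where the upper edge of the beam lands on the wall at pan angle phi."""
    t = d / (math.cos(a) * math.cos(phi))  # ray parameter at the wall
    y = t * math.cos(a) * math.sin(phi)
    z = t * math.sin(a)
    return y, z

# Every such point satisfies z^2 / tan(a)^2 - y^2 = d^2: a hyperbola
# (the "smile" rising above the central frame).
for phi_deg in (-40, -15, 0, 25, 50):
    y, z = top_edge_point(math.radians(phi_deg))
    residual = z**2 / math.tan(a)**2 - y**2 - d**2
    print(f"phi={phi_deg:+3d}: residual = {residual:.2e}")
```

The residual is zero (up to floating-point noise) at every pan angle, confirming that the upper boundary of the swept beam really is one branch of a hyperbola.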

Look! Diagrams!

Here's the view from above showing the FoV of your lens at three different positions. Note the larger coverage as you aim to the sides, even though you think you're seeing a normal view. (Every other frame shown for unclutteredness.)


What a round flashlight illuminates as you swing it across the vista (those are supposed to be mountains in the distance).


What your panorama photos look like in the stitching software. You can sort-of imagine a photographer standing in the middle of all those windows arranged in an arc around them. This is a very wide-angled view. All those windows are the same size and shape!


Disclaimer: Don't be fooled because I used Mathematica to do all this. The actual shapes and their locations (except for the hyperbolas and the mountains, which were randomly generated) were done by hand. But trust me, they are morally correct.

Matt
 
Last edited:

Precision

Active member
A good view camera that features full rear standard movement solves most of these issues; the only real hassle is weight/bulk. A Sinar F or Norma is not terribly heavy and breaks down small. If you are using a CMOS-equipped digital back, skip the slider and just use a plate for the back on the rear standard, and use the rear-standard rise and shift to stitch as much as you like. The image circle of any lens intended for 4x5 film allows a lot of moving around of the back, while the lens stays in the same place.

It is sort of possible to use a camera body and adapter plate (I tried a Canon 5DS with a plate from justtogether.de, about the best option with the shortest additional flange). With that plate, any lens 90mm or longer is usable (wide-angle lenses cannot hit infinity focus because of the body depth). The issue is that with a body, the lens flange of the camera tends to vignette if you move very far off axis. A digital back doesn't have that issue, but if you go into more extreme shifts on the back, you should capture LCC frames for each frame as well, since the light is hitting the cells at more oblique angles. CCD backs are their own special problem because of the lack of usable live view; then you do need a slider (which you can also move around with the rear standard rather than moving the slider itself).

I tried a rented IQ4-150 on my olde Sinar Norma with a Super Angulon 90mm f8, and with back shifts I was able to do a 9-image stitch (3x3 matrix). The resulting image was huge, something like 1.1 gigapixels; at 300 dpi it was enormous. I'd have to go into the archives and find it, but IIRC something like 20'x30'. My computer sobbed inconsolably in the corner after the processing was done, I learned about the big file format for Photoshop, and I had a gigapixel image completely impractical for any commercial use.
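For anyone curious how a figure on that order arises, here is a hedged back-of-envelope (the 20% overlap between neighboring frames is my guess; the actual overlap wasn't stated):

```python
# Sanity-checking the gigapixel claim, assuming an IQ4 150 frame of
# 14204 x 10652 px and a guessed 20% overlap between neighboring frames.
frame_w, frame_h = 14204, 10652
overlap = 0.20  # assumption; not stated in the post

# A 3x3 matrix adds 2 extra frames in each direction, each contributing
# only its non-overlapping (1 - overlap) fraction of width/height.
stitched_w = frame_w * (1 + 2 * (1 - overlap))
stitched_h = frame_h * (1 + 2 * (1 - overlap))
total_mp = stitched_w * stitched_h / 1e6

print(f"{stitched_w:.0f} x {stitched_h:.0f} px = {total_mp:.0f} MP")
```

With those assumed numbers the stitch comes out to roughly a gigapixel, in line with the ~1.1 GP recollection above; more or less overlap moves the total accordingly.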
 

cunim

Well-known member
A telecentric lens does not have parallax error, and it is parallax that makes closer subjects (like my aircraft above) difficult to photograph. Because of parallax, ordinary lenses alter magnification with distance. That creates the geometric distortions that we see, and that is why it is so hard to make close-up panoramas.

I don't stitch, necessarily, to get a wider field of view. I stitch to achieve the perspective I want. That might be more normal or more compressed than it would be with a wide angle lens, so I swap in something longer and stitch. I also dislike the aberrations of wide angle lenses but that's another topic.

The good news is that a telecentric lens would let us create wide angle images with normal geometry. So why are these lenses limited to machine vision, metrology, microscopy, astronomy, etc? There are some minor issues, like fixed focus. The really bad news is that a telecentric lens needs to be as wide as the ray bundle being imaged. But if I could get such a lens the size of an aircraft, I wonder what the images would look like?
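A toy comparison of the two projection models makes the constant-magnification point concrete (all numbers here are assumed for illustration):

```python
# Perspective projection scales image size by 1/distance, while an
# object-space telecentric lens behaves like an orthographic projection
# with fixed magnification, independent of distance.
f = 50.0               # focal length in mm (perspective model)
m_tele = 0.1           # fixed magnification of the hypothetical telecentric lens
object_height = 100.0  # object height in mm

def perspective_size(dist_mm):
    """Image height of the object at a given distance, ordinary lens."""
    return f * object_height / dist_mm

def telecentric_size(dist_mm):
    """Image height with the telecentric lens: distance drops out entirely."""
    return m_tele * object_height

for dist in (500.0, 1000.0, 2000.0):
    print(f"{dist:>6.0f} mm: perspective {perspective_size(dist):5.1f} mm, "
          f"telecentric {telecentric_size(dist):4.1f} mm")
```

Halving the distance doubles the perspective image size but leaves the telecentric size untouched, which is exactly why such a lens would give "wide angle images with normal geometry" (at the cost of front optics as wide as the subject).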

Time for a kickstarter?
 

darr

Well-known member
MGrayson said: "The flashlight metaphor makes all sorts of things simpler. ..."

Outstanding diagrams, Matt. Your illustrations demystify the image-stitching workflow with impressive clarity for visual learners (like me). They offer an excellent breakdown of spatial alignment and overlap strategy. Much appreciated!
 

MGrayson

Subscriber and Workshop Member
darr said: "Outstanding diagrams, Matt. ..."
Thank you, Darr.

I added a bit to one of the diagrams to help distinguish what you see while capturing off-center frames vs. what they look like in the final panorama. I may be beating a dead horse, but I can't resist a good graphic. :ROFLMAO:

Matt
 

Pieter 12

Well-known member
MGrayson said: "The flashlight metaphor makes all sorts of things simpler. ..."
I guess what I find confusing about the diagrams is the perspective introduced by the rendering. In reality, all the rectangles would be the same size and whatever distortion is recorded is because of the relative distance of the subject to the sensor plane. Can that be illustrated? An example of actual frames maybe?
 

darr

Well-known member
Pieter 12 said: "I guess what I find confusing about the diagrams is the perspective introduced by the rendering. ..."


You're right that, in reality, all the frames are identical rectangles—what the diagrams show isn’t the sensor shape but how the field of view expands as the camera rotates. This is a conceptual way to explain how each image contributes to the final stitched panorama, especially with the perspective shift at the edges.

The distortion occurs during stitching, when software maps all those identical frames into a curved projection space (cylindrical or spherical), which leads to the visual warping shown. An example of an overlay of frames on a stitched pano would help clarify that, and the best way to grasp this is to get out and shoot. Try a basic pano, drop it into stitching software, and watch it come together. The experience teaches you way more than theory ever could, and you'll probably enjoy it, too.
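For the curious, the cylindrical remap described above can be sketched with the standard textbook formulas (this is a generic sketch, not any particular stitcher's implementation; the focal length in pixels is assumed):

```python
import math

# A pixel at (x, y) on a flat image with focal length f (in pixels)
# maps onto a cylinder of radius f at:
#   theta = atan(x / f),  h = y / sqrt(x^2 + f^2)
f = 1000.0  # focal length in pixels (assumed value)

def to_cylinder(x, y):
    """Map flat-image coordinates to cylindrical (angle, height)."""
    theta = math.atan2(x, f)
    h = y / math.hypot(x, f)
    return theta, h

# The same vertical span (y = 200 px) lands on LESS cylinder height when
# the pixel sits far off-axis: off-center content gets compressed.
_, h_center = to_cylinder(0, 200)
_, h_edge = to_cylinder(800, 200)
print(h_center > h_edge)  # True: edge content is squeezed toward the middle
```

This compression of off-axis content is the same effect the warped frames in the stitching software are showing.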
 

darr

Well-known member
I want to thank Matt again for the excellent explanation, well-crafted diagrams, and the other contributors in this thread. This thread is a gem for anyone trying to get their head around pano stitching. It’s clear, visual, and grounded in real-world experience.

To anyone still scratching their head over stitching: remember, diagrams and theory are great, but they won’t take the shot for you. At some point, you’ve got to get out there, press the shutter, and maybe curse a little when things don’t line up. Try the technique. See how your gear behaves. Tweak, fail, repeat. That’s how you learn, and it can be fun, especially when you can start to predict the results.

Too often, I see photographers dismiss good information without ever testing it in the field. This post is an excellent learning tool—use it as a launchpad, not a roadblock. Just my two cents from someone who’s been teaching photography for over 30 years and knows that working with gear in real situations is how solid image-making skills are built. Go shoot!
 

MGrayson

Subscriber and Workshop Member
Urk. OK. Here's the scene... This is a building made of green cubes (don't ask) with a black strip representing a road in front of it, and a large pink wall behind it, because... what building doesn't have a large pink wall behind it? The red alien creature represents the camera location with its three spindly legs. Oh, and I added a pink door so we could see the center line when we rotate the camera.


Here's the picture we WANT to take:



For reasons I don't understand, Mathematica exported this with all the inner cube edges. I just wanted some texture.

But here's what we get with our lens aimed dead center


So we aim left and right and get

and its mirror image


Notice how much more vertical real estate we cover when we aim to the sides. Since we WANT a final image with the building of uniform height, all that vertical stuff gets expanded. This is the rays from the camera spreading out as they move farther away.

Here is the actual panorama as assembled by LR. I added frame boundaries in PS.


Matt

And Darr is right. Go shoot! Theory is fine ... in theory.
 
Last edited:

tenmangu81

Well-known member
I am a former physicist (an experimentalist), and I agree one hundred percent with Darr. You must learn some theoretical and useful basics (thanks, Matt!), but don't underestimate practice. You learn photography with a camera in hand, and you won't really master your camera before having taken hundreds of pictures.
This thread is very instructive, thanks to all contributors. I've learnt a lot! And it's probably not over....
 

MGrayson

Subscriber and Workshop Member
I can't stop. Someone help me!

"Behind the scenes" shot of camera and three windows corresponding to the three shots in our pano. Distortion of the outer frames (not their content!) shows clearly.
Oh, and I hope you noticed that I replaced the stack of cubes with a thin layer of square windows and a solid green box behind them. :rolleyes:


View to the left - frame undistorted, content squished.


View on center - no distortion


View to the right - frame undistorted, content squished.


I'll stop. Any day now. I can stop whenever I want. (Kidding aside, I'm 6 1/2 years sober and understand addiction FAR too well. There are posts I made back then that I really wish I hadn't. I was right, of course, :ROFLMAO: but my manner was regrettable. Sincere apologies to everyone involved.)

Matt
 
Last edited: