The GetDPI Photography Forum


Multi-Row Panoramas. Biggest issue is D.O.F.

wallpaperviking

Active member
I have been trying out shooting some multi-row panoramas and I thought my biggest issue would be movement in between frames or maybe even software stitching issues due to parallax.

It turns out the biggest issue has been trying to get a sharp image from front to back.

I can see why so many of these images seem to be taken from quite a distance away, where D.O.F. is less of an issue.

I have also been playing around with "Brenizer" or "bokeh panorama" shots and have used the GFX 110mm on a 100S for this. I naively thought I could just use this lens for landscapes as well, stitching for a wider F.O.V., but the shallow depth of field of a slightly telephoto lens prevented this. The stitched images have been pretty normal scenes, not distant landscapes, but also not scenes with something really close in the foreground. I think I really underestimated this issue. Even a 9-image (3x3) stitch with a 70mm lens presents D.O.F. issues, and that only gives a moderate 40mm or so F.O.V.
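To put rough numbers on that last claim, here is a small sketch in Python. The 43.8mm sensor width, the simple pinhole model, and the ~30% frame overlap are my own illustrative assumptions; with heavier overlap the equivalent focal length comes out longer, closer to the 40mm figure above.

```python
import math

def fov_deg(sensor_mm: float, focal_mm: float) -> float:
    """Angular field of view (degrees) of a pinhole camera along one axis."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

def stitched_fov_deg(sensor_mm, focal_mm, frames, overlap=0.3):
    """Total FOV of `frames` rotated captures along one axis.
    Each extra frame adds (1 - overlap) of a single-frame FOV; this is
    a small-angle approximation that is fine for a sanity check."""
    single = fov_deg(sensor_mm, focal_mm)
    return single * (1 + (frames - 1) * (1 - overlap))

def equivalent_focal_mm(sensor_mm, total_fov):
    """Focal length that would give `total_fov` degrees on the same sensor."""
    return sensor_mm / (2 * math.tan(math.radians(total_fov) / 2))

# 3x3 stitch with a 70mm lens on a 43.8mm-wide sensor:
total = stitched_fov_deg(43.8, 70, frames=3, overlap=0.3)
print(round(total, 1), "deg ->", round(equivalent_focal_mm(43.8, total), 1), "mm equivalent")
```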

Does this ring true for others? Is there anything I am missing here, apart from focus stacking? Focus stacking is out, as the combination of stitching and stacking is just too much, with too much to go wrong.
It seems like a bit of a Catch-22: you need to stop down the lens to get more D.O.F., which leads to diffraction, which makes you wonder if stitching is even worth it. Basically, you get a larger image that is slightly less sharp than a smaller-megapixel image.
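The Catch-22 can be sketched numerically. This assumes green light at 550nm, the conventional 0.033mm circle of confusion for 44x33 sensors, and the textbook Airy-disk and hyperfocal formulas; all of these figures are my own illustrative assumptions, not anyone's measured data.

```python
# Rough numbers for the stop-down vs diffraction trade-off.
WAVELENGTH_MM = 0.00055   # 550 nm green light
COC_MM = 0.033            # conventional CoC for a 44x33 sensor

def airy_diameter_mm(n: float) -> float:
    """Diameter of the Airy disk (diffraction blur) at f-number n."""
    return 2.44 * WAVELENGTH_MM * n

def hyperfocal_mm(focal_mm: float, n: float, coc_mm: float = COC_MM) -> float:
    """Hyperfocal distance: focus here and everything from half this
    distance to infinity is 'acceptably' sharp."""
    return focal_mm ** 2 / (n * coc_mm) + focal_mm

for n in (5.6, 8, 11, 16, 22, 32):
    h_m = hyperfocal_mm(70, n) / 1000
    blur = airy_diameter_mm(n)
    note = " <- diffraction blur exceeds CoC" if blur > COC_MM else ""
    print(f"f/{n}: hyperfocal {h_m:5.1f} m, Airy disk {blur * 1000:4.1f} um{note}")
```

On these assumptions the diffraction blur only overtakes the standard circle of confusion around f/32, which is one argument for jng's point below that stopping down is less scary than it looks.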

How do people do stitched interiors this way? Is it always on the wider side, where obtaining sharpness from front to back is less of an issue?

Thanks, I would be grateful to hear from anybody who has had a similar experience and how you got around it...

Thanks :)
 

darr

Well-known member
I’ve never run into the depth of field issues you mentioned when stitching images. I usually use normal to telephoto lenses for this kind of work. Below are two stitched images, each made from three frames, and as you’ll see, I like to include foreground foliage in my landscapes. Neither image was focus stacked.

If you’re aiming for deep focus front to back, skip the bokeh and Brenizer-style approaches; they’re built for shallow depth of field, not depth. Instead, get to know your lenses. Run some field tests to figure out where each one really shines in terms of sharpness and depth. That’s the key to maximizing DOF in stitched work.



ALPA STC + SK 72 + CFV II 50c



Hasselblad 907x + 45p

Can you describe what your shooting checklist is when creating a stitched image?
 

MGrayson

Subscriber and Workshop Member
darr said:
I’ve never run into the depth of field issues you mentioned when stitching images. [...]

Can you describe what your shooting checklist is when creating a stitched image?
Darr,

Wow. I was going to go on about "effective apertures" and other nonsense, but "get to know your lenses" (instead of just their specs) is the best advice I've heard in a long time.

Matt
 

jng

Well-known member
The short answer: what @darr wrote.

The longer and potentially more expensive answer: as you have observed empirically, depth of field is shallower since your effective sensor size is larger. I understand the reluctance to stop down due to diffraction, but (1) since magnification to final image size (whether web or print) will be less than for a single shot taken with a wider lens, from a practical perspective this may be a non-issue, and (2) if you are using Capture One, diffraction correction does a pretty good job of mitigating the effects of diffraction without introducing undesirable artifacts. Another way to increase effective depth of field, bringing both foreground and background into focus, is to dial in a bit of tilt. I've employed both (stopping down + tilt) with satisfying results (for example). Since you are shooting with Fuji, Dante suggests you consider their 110mm tilt/shift lens.
:ROFLMAO:
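For a feel of how little tilt is needed, Merklinger's hinge rule puts the pivot line of the plane of sharp focus at J = f / sin(tilt) below the lens. A sketch with illustrative values (the 110mm focal length matches the lens discussed above; the tilt angles are arbitrary):

```python
import math

def hinge_distance_m(focal_mm: float, tilt_deg: float) -> float:
    """Merklinger's hinge rule: with the lens tilted by `tilt_deg`, the
    plane of sharp focus pivots about a line this far below the lens
    (for a downward tilt), parallel to the untilted lens plane."""
    return focal_mm / math.sin(math.radians(tilt_deg)) / 1000

# A 110mm lens with small tilts -- note how fast the hinge drops:
for tilt in (1, 2, 4, 8):
    print(f"{tilt} deg tilt -> hinge {hinge_distance_m(110, tilt):.2f} m below the lens")
```

A couple of degrees is already enough to lay the plane of focus roughly along the ground from a tripod-height camera.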

John
 

dougpeterson

Workshop Member
For large-depth-of-field stitching scenarios, if you have the option to switch to a tech camera and stitch by moving around inside the image circle, you'll find the entire world of stitching becomes much more enjoyable and much less finicky. An XT IQ4 150mp with a 32HR and a two-shot stitch gives you ~250mp of detail, and with the option to tilt/swing, the effective depth of field for most* scenes is great.

In my opinion any stitching past four frames moves away from photography and into a world of computer imaging that I personally find less satisfying and usually more work than it's worth.

*Tilt doesn't help if you have things that are far away, close and high, and close and low in the frame. It doesn't actually increase depth of field; it just lets you shape that depth of field to align with your subject. So, for example, for a landscape where things are low and close as well as far away it is VERY powerful, but for a kitchen with hanging cabinetry close to the camera it won't help (though in that case swing may help if you're at an angle to that content).
 

Shashin

Well-known member
How are you determining the DOF? Are you printing the image out and judging it from normal viewing distances, or are you doing this at 100% monitor view? If you are looking at 100%, then you do not have a real-world viewing condition and the DOF will appear to be less. DOF is a perceptual quality based on the viewer, not one inherent in the image.
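This point can be quantified: depth of field falls straight out of whatever circle of confusion your viewing condition implies. A sketch with my own illustrative numbers (standard thin-lens DOF approximation; the "pixel peeping" CoC of ~2 pixels is an arbitrary but common choice):

```python
# DOF depends on the circle of confusion you choose, and the CoC depends
# on how the image is viewed: classic print viewing vs 100% on a monitor.

def total_dof_mm(focal_mm, f_number, subject_mm, coc_mm):
    """Approximate total depth of field (far limit minus near limit),
    valid when the subject is well inside the hyperfocal distance."""
    h = focal_mm ** 2 / (f_number * coc_mm)   # hyperfocal (focal term dropped)
    near = h * subject_mm / (h + subject_mm)
    far = h * subject_mm / (h - subject_mm)
    return far - near

PRINT_COC = 0.033          # standard CoC for 44x33, normal viewing distance
PIXEL_PEEP_COC = 0.0075    # ~2 pixels on a 100MP 44x33 sensor (3.76 um pitch)

for label, coc in (("print viewing", PRINT_COC), ("100% pixel peeping", PIXEL_PEEP_COC)):
    dof = total_dof_mm(110, 8, 5000, coc)  # 110mm at f/8, subject at 5 m
    print(f"{label}: ~{dof / 1000:.2f} m of depth of field")
```

Same capture, roughly a four-fold difference in apparent depth of field, purely from the viewing condition.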
 

wallpaperviking

Active member
Thanks for everybody's replies, much appreciated! :)

Just to be clear, I am not using the Brenizer "technique" to shoot landscapes. I have been shooting a number of Brenizer portraits with the GF 110mm @ f2, and I thought I could get away with using the same lens at working landscape apertures (f11–f16) and stitching to get a wider F.O.V.

I guess it is a tricky one, as there are so many variables. I have been viewing the stitched images at 100% and have generally been disappointed. I guess it is just a matter of picking different scenes or starting with a wider lens.

@dougpeterson " In my opinion any stitching past four frames moves away from photography and into a world of computer imaging that I personally find less satisfying and usually more work than it's worth." This is a very valid point I think.. Too many stitches and/or technical components in terms of making an image, really does get in the way..

Also, your example produces a very wide F.O.V., and I would have better luck doing a pano stitch with such a wide lens to start with.


@jng Tilt does not work in this scenario, as I am not stitching within the image circle of the lens. I have tried stitching an image with tilt applied and the software struggled. It is very different from doing a flat stitch, like in your example.

Thanks, I appreciate your help and input! :)
 

Focusrite

Member
Trouble with stitching panoramas is largely what drove me towards medium format in the first place. I wanted 10,000 pixel-wide panos as a standard in case I wanted to print big in the future. Depth of field was a big tripping point for any lens over 50mm, and it simply became a huge hassle and usually not worth it. I never did figure out a simple workflow to deal with it, and nearly every pano became a chore to process.

So I upped the resolution and simply crop now for much of what I do for wide angle. And it is so much simpler! Otherwise I have a tilt/shift adapter for Pentax 645 lenses which gets me a neat 2:1 pano with two exposures on my Fujifilm GFX and the use of tilt which comes in handy for depth of field.

I still use my nodal mount occasionally, but for more specialised shots where the view is wider than what my lens can shoot or for astro/Milky Way panoramas where a longer fast prime can be advantageous.

My only suggestions would revolve around whether you are using a dedicated nodal mount for your panoramas and what software you might be using to stitch them. However, upgrading these will only make panos less of a chore.

I am a bit puzzled as to why a lens with tilt applied would create issues with stitching. I would have thought any perspective change with tilt applied would be uniform and repeated in every exposure and therefore not problematic with stitching? Finding the nodal point would be an interesting challenge, but unless you are doing multi-row panos I expect you could get it close enough.

One day I guess we'll get in all-in-one software that automatically focus stacks, exposure blends, stitches, and corrects parallax errors via Generative Fill all in one operation.
 

jng

Well-known member
I am a bit puzzled as to why a lens with tilt applied would create issues with stitching. I would have thought any perspective change with tilt applied would be uniform and repeated in every exposure and therefore not problematic with stitching? Finding the nodal point would be an interesting challenge, but unless you are doing multi-row panos I expect you could get it close enough.
This was my thinking as well, although I've never tried rotational stitching with tilt myself. That said, I do think flat stitching is a much simpler and cleaner solution, provided, of course, you have the equipment for doing so.

John
 

wallpaperviking

Active member
Focusrite said:
Troubles with stitching panoramas is largely what drove me towards medium format in the first place. [...]
Thanks, I have a Fuji GFX 100S and I am probably just going to stick to single-shot captures with it. I am using a dedicated Nodal Ninja multi-row setup and it works as advertised. I generally have no trouble with Autopano Giga (currently free) stitching images together. It is the D.O.F. that I am struggling with.

With regards to using tilt and then stitching, I could try this again, but I know I have done it at extreme angles and the software struggled. For one, the amount of tilt changes between shots, and so I imagine the no-parallax point does as well. I guess if everything was stopped down to f16 or thereabouts, then maybe it is not an issue. As Doug pointed out, this does not help in certain scenes, particularly with a longer lens like the 110mm. Also, if you are doing this as a multi-row pano, it is quite hard to get the tilt where you want it over a number of frames. It can be tricky enough in a single frame.. ;)

Thanks, I appreciate your reply.
 

wallpaperviking

Active member
This was my thinking as well, although I've never tried rotational stitching with tilt myself. That said, I do think flat stitching is a much simpler and cleaner solution, provided, of course, you have the equipment for doing so.

John
Flat stitching is much nicer. I just don't have this setup unfortunately..

Thanks :)
 

daz7

Active member
You will struggle with tilts if you use parallax stitching. When you turn the camera around, the lens position changes, and the tilted areas between the edges will not fit the previous frame. Take a look at the individual photos you are trying to stitch: there will be a visible difference between them. I really doubt that would be correctable by any software available, including AI. To use tilt to your advantage when stitching, you need to keep the lens in the same position at all times and only slide the sensor within the image circle. To do so, you will need either a tech cam or a view camera.
 

cunim

Well-known member
This is a 3-panel pano (16K pixels) with a tilted POF lying along the top of the fuselage. The engine cowls are sharp, but the landing gear is not.

Not much of a photo, but as a technical exercise it shows the type of image that tilted flat stitching works with. I can't remember what the lens was, but the geometry is a bit wide so I suspect it was my 40HR. So: a relatively thin object and a wide lens to help with DOF. It would be interesting to see some of your failures.

rearpano2.jpg
 

MGrayson

Subscriber and Workshop Member
Theoretically (that dirty word), the slanted wedge of stuff-in-focus would just rotate with the camera - just like the vertical slab in the untilted case. Sure, some things in focus in one image will not be in another, but that happens with any non-flat panorama (Exactly the "focus and recompose" problem.)

Of course, saying it *should* work doesn't imply much. The concrete example of @cunim above is worth more.

Matt
 

darr

Well-known member
Never done this, but would a sliding mount like this allow flat stitching for very little investment?

View attachment 221628
This setup is a great step toward more precise camera control, but for flat stitching, it’s missing one key feature: independent movement of the lens or sensor.

What you’re looking at here is a nodal rail (the sliding mount on top), mounted on an Acratech Panoramic Head—great gear, by the way (I have them on two of my tripods). While this helps align the lens’s entrance pupil over the pivot point, reducing parallax when rotating, it doesn’t offer the lateral shift you need to move the lens or sensor separately for perspective-correct flat stitching.

Flat stitching, as opposed to rotational panos, requires translating the lens or sensor parallel to the image plane, keeping perspective aligned across frames. This rig rotates the entire camera, which introduces parallax unless you’re photographing a scene where everything is far away (like a mountain range).

So yes, it’s great for rotational panoramas, especially when parallax is minimized. But if you're aiming for flat, perspective-locked stitches—like for architecture or high-resolution composites, you’ll need gear that can shift just the lens or sensor. Think: a rail system on a view camera, an ALPA or Cambo body, or a tilt-shift lens on a digital system.

As for the examples I posted above:
The first was flat-stitched using an ALPA STC, while the second was a rotational stitch with a Hasselblad 907x.
 

Pieter 12

Well-known member
darr said:
This setup is a great step toward more precise camera control, but for flat stitching, it’s missing one key feature: independent movement of the lens or sensor. [...]
I realized after I made the post that it would introduce multiple vanishing points, something that may or may not be noticeable or correctable. However, a tilt-shift lens shifted all the way in one direction, then the other, or the camera rotated 180º around the sensor axis, could produce a decent set of stitchable frames.
 

darr

Well-known member
I realized after I made the post that it would introduce multiple vanishing points, something that may or may not be noticeable of correctable. However, a tilt-shift lens shifted all the way in one direction, then the other or the camera rotated 180º around the sensor axis could produce a decent set of stitchable frames.
You're on the right track—yes, introducing multiple vanishing points is exactly the issue with rotating the whole camera instead of shifting the lens or sensor. Whether or not it's noticeable depends on the subject, but once you're working with lines, especially architecture or interiors, it can get messy fast.

Tilt-shift lenses absolutely can be used for flat stitching: when you shift left and right (or up and down), you’re effectively mimicking the kind of parallel movement you'd get from a technical camera. And because the sensor stays stationary, you avoid introducing new vanishing points. Stitching those images is usually seamless if everything's aligned and the subject is relatively flat.

Rotating 180º around the sensor axis, though (if you mean flipping the whole camera from one side to the other), can work in some cases, but that’s still a rotation, not a lateral translation. You’re likely to introduce subtle perspective shifts unless everything in the scene is at a uniform distance. So yes, it can work, especially with longer focal lengths and distant scenes, but it’s a bit of a gamble if you’re aiming for architectural precision.

Maybe Matt @MGrayson can add some diagrams to share here. It would be a great teaching aid for those exploring this thread. 😇
 

MGrayson

Subscriber and Workshop Member
darr said:
[...] Maybe Matt @MGrayson can add some diagrams to share here. It would be a great teaching aid for those exploring this thread. 😇
Oooof. This is a hard one. I mean visually. Mathematically, it's simple, but connecting the math to the camera, lens, and final image is a mess. I'm going to start with the words and add diagrams later.

What's our goal? To create a single large image that our camera (sensor + lens) *can't* capture, and do this by combining a bunch of smaller images that it *can* capture. There are two cases here that differ HUGELY as to why our camera can't do this in a single capture:

Our lens's image circle is much larger than the sensor. A bigger sensor *could* give us the desired final image, but *our* sensor is too small.
or
Our lens's image circle just barely covers the sensor and our lens isn't wide-angle enough.

The first case is what view/tech cameras and shift lenses provide. Those lenses have large image circles - the entire image is there already behind the lens, but the poor sensor is just too small to grab it. On an 8x10 camera, a 200mm lens is wide angle, and its image circle has to cover at least 8x10 film. But a 200mm lens is a 200mm lens. A crop the size of your sensor from the center of that 8x10 image will look EXACTLY the same as what your camera would capture with a 200mm lens.

Solution? Keep the lens right where it is and move the sensor around "sampling" this larger image. The captures are flat rectangular crops of a single larger flat virtual image, so they can be combined easily with Scotch tape or glue onto a large piece of paper. Hence the name "flat stitching". Note that the *lens* has to stay fixed and the *sensor* has to move around. If you fix the sensor and move the lens around, you're doing almost, but not exactly, the same thing. The large virtual image changes *slightly* each time the lens moves. Foreground objects will shift relative to background objects. The lens makes the image. The sensor just samples it.
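That last distinction (fixed lens vs. fixed sensor) can be demonstrated with a toy pinhole model. The scene geometry and the 10cm lens shift below are my own made-up numbers:

```python
def project(point, lens_x=0.0, focal=1.0):
    """Pinhole projection onto an image plane `focal` behind a lens at
    (lens_x, 0): returns the image-plane x coordinate."""
    x, z = point
    return focal * (x - lens_x) / z

near = (1.0, 2.0)    # foreground object: 2 m away
far = (1.0, 20.0)    # background object: 20 m away, same line of sight

# Same lens position: the separation between near and far is fixed, so
# any sensor crop of this one virtual image agrees with any other crop
# (a crop only selects pixels; it doesn't change their coordinates).
sep_before = project(near) - project(far)

# Move the LENS sideways by 10 cm: foreground shifts against background.
sep_after = project(near, lens_x=0.1) - project(far, lens_x=0.1)

print(f"separation with fixed lens:  {sep_before:.4f}")
print(f"separation after lens shift: {sep_after:.4f}  (parallax!)")
```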

In the second (more typical) case our only option is to point the camera around like a flashlight "illuminating" the desired scene. (That metaphor is clear, right? The flashlight - with the same spread as our lens's FoV - just sends the light in the opposite direction, so what it illuminates is exactly what the camera captures.) There are two problems here. First, now that the lens *must* move, how do we guarantee that the images could, even theoretically, combine to make our desired final image (foreground can't move against background). And second, *how* in God's name do we combine them afterwards?

If everything is sufficiently far away we can ignore the first problem as there *is* no foreground. Otherwise, we have to be very careful to rotate the camera about - well - the point that the camera "sees" through. For a pinhole camera, it's the pinhole. For a very simple lens, it's the aperture. For a real-life modern lens, it's usually called the nodal point (strictly speaking, the entrance pupil, but photographers usually call it the nodal point). It's where *you* see the aperture when you're looking into the lens from the front. Rotating around that point swings the aperture, but doesn't change the Point of View (it literally *is* the point of view). A nodal rail moves the lens back far enough so that it will rotate around this point when you rotate your tripod head. (For tilting up and down, this will be a problem unless you use special equipment, e.g., a gimbal mount - never mind that for now.)

Once we guarantee that all our sample images come from the same viewpoint, we have to stitch them together. Now we have the "changing vanishing point" problem. When you ask the computer to do this (and if the computer is in a good mood) you see weirdly distorted shapes that don't look at all like the rectangular images you thought you were seeing during capture. Remember our goal! We want to reconstruct the flat image that we would see if the lens's image circle were larger and we had a larger sensor. That flat image lies on a single plane. But now we're changing the plane of the sensor each time we rotate the camera. The weird shapes you see from the stitching programs are exactly what you get when you project a rectangle in one plane (the current sensor) onto a non-parallel second plane (the final image's plane). The shape is weird, but the content isn't. This is a piece of exactly what you want to see. It just might not be what you *thought* you were seeing - and the distortion affects resolution, sometimes drastically.
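For the mathematically inclined, the reprojection just described is a homography, and a small pure-Python sketch shows the rectangle-to-trapezoid effect. The 1000px focal length and 20° pan are arbitrary illustrative values of mine:

```python
import math

def reproject(u, v, pan_deg, focal_px=1000.0):
    """Map a pixel (u, v) from a camera panned by `pan_deg` about the
    vertical axis back onto the unrotated camera's image plane."""
    t = math.radians(pan_deg)
    # Pixel -> viewing direction in the panned camera's frame.
    x, y, z = u / focal_px, v / focal_px, 1.0
    # Rotate the direction into the reference camera's frame.
    xr = x * math.cos(t) + z * math.sin(t)
    zr = -x * math.sin(t) + z * math.cos(t)
    # Project back onto the reference image plane.
    return focal_px * xr / zr, focal_px * y / zr

# Corners of an 800x800 px square, seen through a camera panned 20 deg:
corners = [reproject(u, v, 20) for u, v in
           [(-400, -400), (400, -400), (400, 400), (-400, 400)]]

left_edge = abs(corners[3][1] - corners[0][1])    # length of left side
right_edge = abs(corners[2][1] - corners[1][1])   # length of right side
print(f"left edge {left_edge:.0f} px, right edge {right_edge:.0f} px")
```

The square comes back with one side noticeably longer than the other: exactly the "weird shape" the stitcher shows you.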

How does the computer do this? I'll save it for a later post - and after I've made some diagrams. Apologies, Darr.

Matt

Ok, I'll give away the secret. Flat stitching is easy because shifting is an "isometry" of the plane (literally "same-distance") - it moves things rigidly without changing their shape or size. So fitting the pieces together is a simple matter of alignment. Rotation about a point, however, is an isometry of a sphere centered around that point. Project all your images onto that sphere, then slide them around until they align (remember, the computer doesn't know which way you were aiming the camera, so it has to move things around to find out where they go in the final image). Sliding them around on the sphere does not change their size and shape and, more importantly, where two images overlap, they look identical - just like the flat stitch pieces did. Reread that sentence until it is part of your DNA*. So the computer perfectly aligns the spherical segments (they look like bulging rectangles) and then projects the resulting mess back onto the original plane. Tadaah! Stitched image.
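The isometry claim is easy to verify numerically. A toy sketch (the pixel coordinates and the 35° pan are arbitrary values of mine): project two pixels onto the viewing sphere and check that their angular separation survives a camera rotation unchanged.

```python
import math

def pixel_to_direction(u, v, focal_px=1000.0):
    """Unit vector on the viewing sphere for pixel (u, v)."""
    x, y, z = u / focal_px, v / focal_px, 1.0
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)

def rotate_y(d, deg):
    """Rotate a direction about the vertical axis (a camera pan)."""
    t = math.radians(deg)
    x, y, z = d
    return (x * math.cos(t) + z * math.sin(t), y,
            -x * math.sin(t) + z * math.cos(t))

def angle_between(a, b):
    dot = sum(p * q for p, q in zip(a, b))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

# Two features seen by the camera, projected onto the sphere:
a = pixel_to_direction(-300, 150)
b = pixel_to_direction(250, -100)
base = angle_between(a, b)

# Pan the camera 35 degrees: the directions rotate rigidly, and the
# angular separation -- the "size" on the sphere -- is unchanged.
panned = angle_between(rotate_y(a, 35), rotate_y(b, 35))
print(f"separation: {base:.4f} deg before, {panned:.4f} deg after pan")
```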

(And if you're only doing a one-row stitch, you can project onto a cylinder, as rotation about a fixed axis is also an isometry (rigid motion) of the cylinder. Stitching programs will often offer this choice. Cylinders can be projected onto a plane, or unrolled. The latter is what a 360-degree panorama camera, like the Noblex, does. These days they're called spinners.)
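The cylinder case can be sketched the same way (the scene point, focal length, and pan angles below are arbitrary values of mine): after unrolling, panning the camera becomes a pure horizontal shift, with the height coordinate untouched.

```python
import math

def project_to_pixel(point, pan_deg, focal_px=1000.0):
    """Pixel coordinates of a 3D point seen by a camera panned by pan_deg."""
    t = math.radians(pan_deg)
    x, y, z = point
    # Rotate the world into the panned camera's frame.
    xc = x * math.cos(t) - z * math.sin(t)
    zc = x * math.sin(t) + z * math.cos(t)
    return focal_px * xc / zc, focal_px * y / zc

def to_cylinder(u, v, focal_px=1000.0):
    """Unrolled cylindrical coordinates: horizontal angle (deg), height."""
    theta = math.degrees(math.atan2(u, focal_px))
    return theta, v / math.hypot(u, focal_px)

point = (2.0, 1.0, 10.0)    # one fixed scene point
for pan in (0, 10, 20):
    u, v = project_to_pixel(point, pan)
    theta, h = to_cylinder(u, v)
    print(f"pan {pan:2d}: pixel ({u:7.1f}, {v:6.1f}) -> cylinder ({theta:7.3f}, {h:.5f})")
```

The pixel coordinates jump around as the camera pans, but on the unrolled cylinder the angle shifts by exactly the pan and the height stays put, which is why one-row strips align by simple sliding.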

* It is not true of the original pictures you took because of, e.g., converging lines when you change the camera angle. After projection onto the sphere, all those differences go away. Yes, this is a miracle.
 