The GetDPI Photography Forum


Previewing stitched images on location. Options?

Hi,
I am looking to get an idea of what my stitched images will look like on location. In particular, I would like to play around with the Brenizer method and shoot some full-body portraits this way.

Outside of lugging around my MacBook Pro, I am just wondering what my options are. I have AutoStitch on my iPhone, but its results keep cutting off sections and I do not find it reliable enough.

Do the iPad versions of Capture One, Lightroom or Photoshop enable this?

Thanks in advance! :)
 

dchew

Well-known member
There are two aspects of this to address. First is the preview of what you want to capture. I think the simplest tool for that would be the Viewfinder App / Alpa eFinder App, which allows you to make custom-sized sensor / lens combinations. You could figure out what your stitched sensor size will be, load that as a custom camera format and add whatever telephoto lens you will use. This gives you a preview of the image for setup purposes.
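
As a rough illustration of that arithmetic (a flat-stitch, back-of-the-envelope approximation only: it assumes a grid of frames with a roughly constant overlap and ignores the projection geometry, so treat the result as a starting point):

Code:
def virtual_sensor(frame_w_mm, frame_h_mm, cols, rows, overlap=0.3):
    """Approximate 'stitched sensor' size in mm for a cols x rows grid,
    with neighbouring frames overlapping by the given fraction."""
    w = frame_w_mm * (1 + (cols - 1) * (1 - overlap))
    h = frame_h_mm * (1 + (rows - 1) * (1 - overlap))
    return w, h

# Example: a full-frame sensor in portrait orientation (24 x 36 mm),
# 3 columns x 2 rows with 30% overlap:
print(virtual_sensor(24, 36, cols=3, rows=2, overlap=0.3))
# -> (57.6, 61.2), i.e. a 57.6 x 61.2 mm virtual sensor

Enter that as the custom sensor size along with your actual lens focal length, and the app should frame the final stitch for you.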

Second is a reasonably quick result for review in the field after you've captured the images. You could probably do this in the iPad version of Capture One. I've used Capture One's stitching capabilities with mixed results, so I'm not sure if that will satisfy your needs. I don't use the iPad version of Capture One, though, so others will have to chime in on what it is like trying to stitch a bunch of images together in the field on the iPad, and whether the stitching function is even available there. I am pretty sure LR Mobile will not stitch images: you can do basic edits, but I do not think the Photomerge tool is available in the LR app on your iPhone/iPad.

So for setup / planning purposes, I think the eFinder would work fine. But for previewing the actual result with your captured images, I suspect you will be stuck with using a computer in the field. @jng does this all the time with an ancient MacBook Air, so not THAT much more involved than an iPad Pro in the field.

Dave
 
Hi Dave,
Thanks so much for the reply, much appreciated! I did not think about the Viewfinder App. I did have it years ago, but that was on a different phone, so I would have to purchase it again.

It is more the review that I am concerned about. My understanding is that none of the iPad versions (Photoshop, Lightroom and C1) allow stitching; obviously, I am very happy for somebody to tell me otherwise.

I did come across Affinity Photo 2 for the iPad, and it does have a stitching/pano feature, so I will download the free trial and see if it works for my purposes.

Thanks again, look forward to hearing from anybody else that has a different solution.


:)
 

buildbot

Well-known member
Unless you changed Google/Apple accounts, I believe both Android and iOS will let you redownload purchased apps. I can see a list of apps I purchased going back to 2010!
 

Godfrey

Well-known member
You made me curious ... I went on my iPad Pro 11" and downloaded the PhotoStitcher app. I made four exposures in my office using the Moment Pro Camera app with HEIF format output, then used PhotoStitcher to stitch them together. The result is a 9007x3239 pixel image; here's a downsized version:
[attached image: original-IMG_5775.jpeg]

I then 'cleaned it up' in SnapSeed and put a faux film border on it:

[attached image: cleaned-IMG_5775.jpeg]

Not bad results from a bit of free software! :)

G

Oh yes: Please pardon the messy office... :)
 
Good one! Could you possibly take a screenshot of which one it is? Multiple ones come up for me...

Does it allow you to import images? Or only those taken on your phone at the time?

I downloaded the trial of Affinity Photo 2 and so far it works great! :)

Would love a phone version though...

Thanks! :)
 

Godfrey

Well-known member
This is the one I downloaded:

It will drive the camera, but it is relatively unsophisticated at that. I used the Moment Pro Camera app to create HEIC exposures for stitching. I believe that both HEIC and JPEG originals work for stitching with this app, and it will stitch up to four images.

Here's another example: I used Pro Camera on my iPhone 11 Pro with the ultrawide camera chosen and made three exposures sitting at my desk, output to HEIF. I added those to PhotoStitcher ...
[attached image: original-IMG_9123.jpeg]

Cropped it right in PhotoStitcher to clean up the borders a little:

[attached image: cropped-IMG_9124.jpeg]

That's pretty good for a free app! :D

G
 

spassig

Member
@Godfrey
…I went on my iPad Pro 11" and downloaded the PhotoStitcher app…
Is this app available only for the iPad Pro 11?
I can't find it in the App Store on my iPad Air (3rd generation) yet.

Jochen
 

Godfrey

Well-known member
No, I've used the App Store to find it on my gen 1 iPad Pro 12.9", gen 1 iPad Pro 11", and iPhone 11 Pro. It runs on all of them just fine.
Search in the App Store for "photo stitcher", not "PhotoStitcher". It's probably just a name irregularity that you're running into, and the Search function is a little goofy. It usually shows up in the listing a few rows down. ;)

Here's a screenshot from the App Store on my iPad Pro 12.9:

[attached image: photostitcher_screen_shot.jpg]
G
 
Am revisiting this post after going back to stitching images with my Fuji GFX 100S.

Am wondering if there is any way of figuring out how many images to stitch (and in what orientation) to get a desired output size and equivalent focal length?

To complicate things even further, I would like to use the "click stops" available to me on the rotator heads I have. I am looking at the 5, 6, 7.5, 10 and 12 degree stops on both the vertical and horizontal axes.

I did see this one that looks very good, but it only provides the final output in a 3:2 aspect ratio.


Ideally, I am after my output in a 5:4 ratio.

Any ideas?

The only other way I can think of is to just do it by trial and error and take note of the results. Definitely possible, but am wondering if there is an easier, less time-consuming way.

Thanks! :)
 
Thanks Dave, I am just revisiting this, but something seems a bit off so far. When you say to "figure out what your stitched sensor size will be", where am I getting that information from? Sorry, I feel a bit silly for asking, but I cannot quite see that one.

I did have another query: I have an iPhone 11 Pro that has an ultrawide camera (0.5x), but when I look up lenses for my GFX, the 20mm, 23mm, 30mm and 32mm lenses all have "wide required" next to them.

When I first purchased this app, I think I had an iPhone 5 and purchased the Alpa 0.5x wide-angle converter lens, a screw-on attachment that is placed over the iPhone 5's standard lens.

Surely that is not required? What am I missing here?

Thanks :)
 

Hmmm, just realised that I think the wide angle is working on my Iphone. I just saw that it said "wide angle required" and assumed it would not work. Duh.. :)

With regard to figuring out a "virtual sensor size", can I just use the pixel dimensions from the final "image size" in Photoshop? Am a bit confused, as this is a stitch with a panoramic head, so am not quite sure how the stitching software figures it all out.

Thanks!
 

MGrayson

Subscriber and Workshop Member
Unless you are using flat stitching, there is no "sensor size". You are combining images under different projective (or in the case of cylindrical and spherical, nonlinear) transformations. In the extreme case of a panorama covering 180˚, the necessary flat-stitched sensor width would be infinite. In any event, there wouldn't be a well-defined map from captured pixels to final panorama pixels.
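
To put a formula on that claim (standard rectilinear projection geometry, not anything specific to one stitcher): the flat "virtual sensor" width w needed to cover a horizontal angle of view θ with focal length f is

w = 2 · f · tan(θ / 2)

so at θ = 90˚ you already need w = 2f, at 150˚ roughly 7.5f, and w goes to infinity as θ approaches 180˚.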

tl;dr - trial and error, AKA capture more image than you think you will need, seems the best alternative. Bear in mind that the projected stitched data will have a bowtie shape, with the least coverage being in the center, so make sure you get enough image in the middle. If you're going for a cylindrical or spherical projection, then I've no idea.

As to how stitching software figures it out: this is a bit tricky.

Unless the images come from a flat stitch, the software doesn't know which direction the camera was pointed for each capture and which point you want in the center of the final image. It gets around this problem by working internally with a spherical projection. Why? Because projection onto a sphere looks the same in any direction. What the software DOES need are the angles of the capture, determined by the sensor size and focal length of the lens. The shape of the projected image onto the sphere depends critically on these. A small angle from a long lens makes a close-to-rectangular projection. A wide angle lens makes a very bulged projection. The edges of the projections are exactly the "great circle routes" taken by long distance flights. (Modern computers being fast, the software can sometimes figure this out on the fly during the match-up-on-overlaps stage, coming up next...)
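
If you want to turn those capture angles into click-stop counts, the planning arithmetic is simple enough to script. A minimal sketch (it assumes a rectilinear lens and pure rotation about the no-parallax point; the sensor dimension is the GFX's short side, but the lens and target angle below are made-up examples):

Code:
import math

def fov_deg(sensor_mm, focal_mm):
    """Angle of view in degrees across one sensor dimension (rectilinear lens)."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

def frames_needed(total_deg, sensor_mm, focal_mm, step_deg):
    """Frames at a fixed click-stop step needed to cover total_deg."""
    fov = fov_deg(sensor_mm, focal_mm)
    if step_deg >= fov:
        raise ValueError("click-stop step exceeds frame FOV: no overlap")
    if total_deg <= fov:
        return 1
    return 1 + math.ceil((total_deg - fov) / step_deg)

# Example: 110mm lens, short side (32.9 mm) across the stitch direction,
# 10-degree click stops, aiming for about 60 degrees of total coverage:
print(fov_deg(32.9, 110))                # ~17 degrees per frame
print(frames_needed(60, 32.9, 110, 10))  # 6 frames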

Once the projected pieces are collected, all the software has to do is figure out how best to shuffle them around on the sphere to make a stitched image. This is "easily" done by looking at correlations of the images in all possible overlap regions. The "easily" is in quotes, as unless the lens correction is perfect and the lens was perfectly nodally aligned and there was no movement between shots, the images won't actually line up perfectly in the overlaps. Manual overlap programs let you pick matching points by hand. Computers have gotten better at it, but they still don't always get it perfect and, for example, verticals might not align perfectly in different parts of the final image.
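
Just to illustrate the correlation idea in one dimension (a toy sketch only; real stitchers match feature points in 2-D and solve for full transformations, but the "slide it until it lines up best" principle is the same):

Code:
import numpy as np

def best_shift(strip_a, strip_b):
    """Lag that maximizes cross-correlation of two mean-subtracted
    brightness profiles, i.e. the offset where they line up best."""
    a = strip_a - strip_a.mean()
    b = strip_b - strip_b.mean()
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

# The same "scene" sampled twice, 25 samples apart:
rng = np.random.default_rng(0)
scene = rng.standard_normal(300)
print(best_shift(scene[0:200], scene[25:225]))  # -> 25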

After all that, the sphere has to be mapped back to a flat image on your screen, and that's another transformation without a well-defined pixel count. The software makes a default choice and lets you scale, make vertical and horizontal projection adjustments (align verticals and/or horizontals) and crop as desired. Again, in a flat stitch, the choices can all default to translations, so pixel count has a well-defined value. But that isn't true for any other kind of stitch.
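
That said, for a cylindrical output there is at least a natural scale to default to, because arc length on the cylinder is focal length times angle. In pixels, width_px ≈ f_px · θ_total (θ in radians), where f_px is the focal length divided by the pixel pitch. For example, a 110mm lens on a 3.76 µm pitch sensor swung through 60˚ (about 1.05 radians) gives 110 / 0.00376 × 1.05 ≈ 30,700 pixels of width at native resolution (numbers for illustration only).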

Matt

A note on great circle routes: You may think "I know that the east/west routes seem to bulge in the middle, but what about north/south routes? They travel along longitude lines and look straight." But that's only because we think of looking at them from directly above. Look at, say, the center of the US. The longitudes in the Pacific and Atlantic will seem to bulge outwards in the middle because you're looking at them from the sides. It's the same as with the routes from NY to Tokyo that pass through Alaska. Viewed from space over Alaska, these routes seem straight. (Apologies for the US-centric geography)
 
Hi Matt,
Thanks so much for the amazingly in-depth response! :)

Yes, this is sorta what I thought, but I was not sure. Might have to do a visual estimate then: set the Viewfinder App slightly longer than my stitch will be and then crop later.

Thanks again, much appreciated!
 