The GetDPI Photography Forum


Fuji sub-µm technology will bring pixel shift and 400MP to the GFX100

MGrayson

Subscriber and Workshop Member
I finally read the report. There are a number of big "if"s there. Like, "we have to control the sensor with 10 times the accuracy needed for IBIS". That does not sound like a done deal.

OTOH, Frame Averaging would be a simple processor task. I bet the difficulties THERE are all patent related.

Matt
 

Paul2660

Well-known member
From Fuji Rumors:

[Fuji Rumors screenshot attachment]


Fuji first mentioned this a long time ago, when the GFX100 was still in development; then, when the camera was close to release, they decided to bring it to market without this feature.

Glad to see it's back on the roadmap, but per the video you mention and some other websites, there is no word on when it will be made available.

I have no need for 400MP output, but I would much rather have a feature like the frame averaging Phase One offers on the IQ4 150. If the pixel-shift mode enhances DR, that would be a benefit. I am sure it will require a tripod, and subject movement will be limited. It will also take a very good raw conversion. LR/ACR's conversion for the Pentax K-1 was terrible and never improved upon (Adobe's typical one-and-done approach). Since Phase One is now formally supporting the GFX cameras, I hope that Capture One will support the pixel-shift images (C1 never supported the Pentax K-1 pixel shift). Pentax's pixel shift was different in that it did not increase resolution 4x, but instead gave increased detail and much better signal-to-noise/DR, with vastly cleaner images.

Personally, I had hoped for an improvement in firmware for the AF on the camera, as I find it lacking in low-light/low-contrast situations, similar to the GFX 50S and other Fuji X-series cameras I have used.

Paul C
 

hcubell

Well-known member
Very interesting. If I understand correctly, it seems this will arrive in one of the future "kaizen" firmware updates.
That's what I like about Fujifilm: no need to buy a different, more expensive camera to get some great feature updates. At least for a while... :)

https://www.dpreview.com/news/7648446596/fujifilm-says-new-400mp-pixel-shift-mode-is-coming-to-its-gfx-100-camera-system
There are very significant differences in the way the various manufacturers implement pixel-shift technology, both in its actual utility in real-world usage outside a studio and in workflow. This is discussed in detail at Diglloyd's website. The Panasonic S1R apparently merges the multiple files in camera, using its own firmware, to create a single raw file, and apparently deals much better with things like movement from wind and water. It also eliminates the need to rely on a raw converter to assemble the images. In contrast, the Sony implementation is apparently poor, both in terms of workflow and output.
 

Paul2660

Well-known member
Hopefully Fuji does their homework and does not require a separate piece of software, as Sony does, to combine the files into a single DNG, but instead processes everything in camera. However, as this is an add-on to the existing camera processor, I fear it will unfortunately be done the way Sony does it.

It will be interesting to see how they do it.

Would be nice to also add a frame averaging feature like P1.

Paul C
 

mristuccia

Well-known member
Personally, and not being directly involved (I don't own a GFX100), I would give priority to pixel-shift rather than FA.
FA is easily achievable in post (just do a bunch of shots and merge in PS). Pixel-shift can't be easily done manually. Super-res techniques are based on random framing variations and statistics, and are not as precise and scientifically exact as pixel-shift is. Moreover, with pixel-shift one can sample all 3 base colours for each pixel and skip the demosaicing algorithm and its artefacts.
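
For anyone who wants to try the merge-in-post route, here is a minimal numpy sketch (file names and frame count are hypothetical, and the shots are assumed to be already converted to linear 16-bit TIFFs):

import numpy as np
import imageio.v3 as iio

# Average a burst of identically framed shots; shot noise drops
# roughly as sqrt(N) for N averaged frames.
stack = np.stack([iio.imread(f"frame_{i:02d}.tif").astype(np.float64)
                  for i in range(16)])
avg = stack.mean(axis=0)
iio.imwrite("averaged.tif", np.clip(avg, 0, 65535).astype(np.uint16))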
 

Greg Haag

Well-known member
I do not own the GFX, but I do own the S1R, and it does an excellent job with the in-camera pixel shift. On my Phase One back, if you gave me the option of getting pixel shift but losing frame averaging, I would want to keep frame averaging; but it probably depends on what needs you are trying to meet.
 

MGrayson

Subscriber and Workshop Member
FA is easily achievable in post (just do a bunch of shots and merge in PS).
That is true in theory. In practice, though: GFX100 files are 130MB, compressed. Suppose you want to do a 10 minute FA exposure with 1 second per frame. That's 600 frames, or 78 GB. And try loading that in PS. (Yes, you could average ten at a time and iterate, but it would be nice not to worry about the storage.) Even if you do 10 frames per shot, you end up using your storage ten times as fast. I'd rather it were done in-camera.
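
Spelling out that arithmetic:

frames = 10 * 60                      # 10-minute exposure at 1 frame per second
mb_per_frame = 130                    # compressed GFX100 raw, roughly
print(frames * mb_per_frame / 1000)   # 78.0 GB of storage for a single image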

The exception is astrophotography, where "lucky imaging" is a wonderful technique, and the frames to be averaged are selected in post.

Matt
 

mristuccia

Well-known member
That is true in theory. In practice, though: GFX100 files are 130MB, compressed. Suppose you want to do a 10 minute FA exposure with 1 second per frame. That's 600 frames, or 78 GB. And try loading that in PS. (Yes, you could average ten at a time and iterate, but it would be nice not to worry about the storage.) Even if you do 10 frames per shot, you end up using your storage ten times as fast. I'd rather it were done in-camera.

The exception is astrophotography, where "lucky imaging" is a wonderful technique, and the frames to be averaged are selected in post.

Matt
True that! In this case I would use an iterating macro to average one frame after the next. I think the firmware in the back does something similar; it certainly does not save all the images before averaging, otherwise the storage-space problem would exist there as well.
Still, pixel-shift simply cannot be done in post-production at all.
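
Such a macro could just keep a running average, which needs only a single accumulator in memory no matter how many frames are involved. A sketch (file names and frame count hypothetical):

import numpy as np
import imageio.v3 as iio

avg = None
for n in range(1, 601):   # e.g. 600 one-second frames
    frame = iio.imread(f"frame_{n:03d}.tif").astype(np.float64)
    # incremental mean: avg_n = avg_(n-1) + (x_n - avg_(n-1)) / n
    avg = frame if avg is None else avg + (frame - avg) / n
iio.imwrite("averaged.tif", np.clip(avg, 0, 65535).astype(np.uint16))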

But of course it is a matter of personal priorities, mostly depending on the specific photography job each one of us does.
Personally, I really hate thinking that 2/3 of the colour data in all our images is the result of an interpolation guess. :facesmack:
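
To make that 2/3 figure concrete, here is a toy sketch of the RGGB Bayer mosaic:

import numpy as np

# Each photosite records exactly one of R, G, B; the other two
# channels at that location must be interpolated from neighbours.
mask = np.empty((4, 4), dtype="<U1")
mask[0::2, 0::2] = "R"
mask[0::2, 1::2] = "G"
mask[1::2, 0::2] = "G"
mask[1::2, 1::2] = "B"
print(mask)
# A 4-shot pixel shift moves the sensor one photosite at a time around
# a square, so every location is sampled through R, G (twice) and B.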
 

Shashin

Well-known member
Well, this only doubles the resolution; it quadruples the data, which is a different thing. It is also one of the trade-offs of having more and more pixels: the file size increases at a faster rate than the resolution does.

Still, pixel shift is great technology, and if you have IBIS then you get to offer some neat features through firmware. It would also be nice if they could add the Pentax star-tracking functionality and true-color technology that does not need Bayer interpolation. But to be honest, when you have a 100MP sensor, what are you really adding to the final output (as opposed to the 100% monitor view)?
 

MGrayson

Subscriber and Workshop Member
G'day Shashin, could you help me understand this please? Moving from 100-400MP but only doubling the resolution?
It's a definition: resolution is how close together two points can be and still be distinguishable as two points. There's a technical criterion, but the point is that it's a distance, e.g., 3 microns, or inverse-distance, say, 40 lines/mm.

Megapixels fill an area, so increasing the resolution from 40 lines/mm to 80 lines/mm requires FOUR times as many pixels. They're more tightly packed both horizontally and vertically.

Going from 100MP to 400MP on the same sensor means doubling the number of rows and columns, and so doubling the resolution.

More recently, I've seen resolution put in terms of MP, as in, "the eye can resolve 576 MP". That is just a different definition of resolution, and it can get confusing which one is meant. It's like saying a cell phone has a 28mm lens. It's really a 4mm lens, but it's translated to FF "coordinates".
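
To put numbers on it, using the GFX100's published sensor geometry (11648 x 8736 photosites on a roughly 43.8 x 32.9 mm sensor):

width_mm = 43.8
cols_100 = 11648                # ~102MP sensor
cols_400 = 2 * cols_100         # pixel shift doubles rows AND columns

pitch_100 = 1000 * width_mm / cols_100   # ~3.76 micron pixel pitch
pitch_400 = 1000 * width_mm / cols_400   # ~1.88 microns
lpmm_100 = cols_100 / (2 * width_mm)     # ~133 line pairs/mm (Nyquist)
lpmm_400 = cols_400 / (2 * width_mm)     # ~266 lp/mm
print(pitch_100, pitch_400, lpmm_100, lpmm_400)

Four times the pixels, but only twice the lines per millimeter.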

--Matt
 

Pelorus

Member
Thanks, Matt and Will. I should engage my brain before my typing fingers. A very eloquent explanation, Matt.


It's a definition: resolution is how close together two points can be and still be distinguishable as two points. There's a technical criterion, but the point is that it's a distance, e.g., 3 microns, or inverse-distance, say, 40 lines/mm.

Megapixels fill an area, so increasing the resolution from 40 lines/mm to 80 lines/mm requires FOUR times as many pixels. They're more tightly packed both horizontally and vertically.

Going from 100MP to 400MP on the same sensor means doubling the number of rows and columns, and so doubling the resolution.

More recently, I've seen resolution put in terms of MP, as in, "the eye can resolve 576 MP". That is just a different definition of resolution, and it can get confusing which one is meant. It's like saying a cell phone has a 28mm lens. It's really a 4mm lens, but it's translated to FF "coordinates".

--Matt
 

gerald.d

Well-known member
Last time I checked, photographs are two-dimensional, not one.

Every camera manufacturer on the planet defines resolution as the number of pixels on the sensor, not the linear resolution or gap between said pixels.

I don't think it's necessary to provide hundreds of links to prove this.

100 to 400 is quadruple, not double, the resolution, in the parlance of 99.999% of the camera-buying public.

Yes, words have multiple meanings. But to claim that when almost every single person talks about "resolution" they are not counting pixels is a little silly.
 

gerald.d

Well-known member
I do not own the GFX, but I do own the S1R, and it does an excellent job with the in-camera pixel shift. On my Phase One back, if you gave me the option of getting pixel shift but losing frame averaging, I would want to keep frame averaging; but it probably depends on what needs you are trying to meet.
Greg - are you sure your S1R does a good job?

I get bad "combing" (not sure what else to call it) anywhere where there is high contrast - for example, black text on a white background.

I've actually given up using the pixel-shift on the S1R because of it.

Kind regards,


Gerald.
 

pegelli

Well-known member
I don't think it makes a lot of sense to express the resolution of sensors as a number of pixels and the resolution of lenses in lines per millimeter :facesmack:, but indeed words can have different meanings to different persons, so to each his own.
 

Greg Haag

Well-known member
Greg - are you sure your S1R does a good job?

I get bad "combing" (not sure what else to call it) anywhere where there is high contrast - for example, black text on a white background.

I've actually given up using the pixel-shift on the S1R because of it.

Kind regards,


Gerald.
Gerald,
I have been very pleased with the S1R in high-res mode. I cannot speak to the combing issue, as I have not noticed it; then again, I have never used the mode on an image containing text. Maybe it would show in larger prints; I have never printed an image from the S1R larger than 24x36.
Thanks,
Greg
 

Shashin

Well-known member
Last time I checked, photographs are two-dimensional, not one.

Every camera manufacturer on the planet defines resolution as the number of pixels on the sensor, not the linear resolution or gap between said pixels.

I don't think it's necessary to provide hundreds of links to prove this.

100 to 400 is quadruple, not double, the resolution, in the parlance of 99.999% of the camera-buying public.

Yes, words have multiple meanings. But to claim that when almost every single person talks about "resolution" they are not counting pixels is a little silly.
Actually, Japanese manufacturers do not, by industry standard, use the number of pixels to define resolution. I was working as a technical writer for a camera company when the standards for describing the technical specifications of a digital camera were set.

That the "buying" public believes resolution equals pixel count does not make it right. Most photographers think bokeh means narrow depth of field, which is simply wrong. I don't believe a technical field and discipline is best run by popular vote. Naturally, I don't assume that when people say "resolution" they are referring to the technical definition, just as I don't assume that when people use the word bokeh they are referring to the quality of the out-of-focus areas of the image. But I don't mind the conversation, because I think it is good to understand your discipline.

Now, you can simply define things in a personal way and be happy with that. However, you might find that the technical definitions have some real merit and practical consequences. If you actually have to meet a resolution requirement for a photographic system and you think pixel count equals resolving power, then you will come up short.

I guess I don't understand the downside of knowing what terms actually mean...
 

dougpeterson

Workshop Member
FA is easily achievable in post (just do a bunch of shots and merge in PS).
This is much easier to say before you have used the IQ4's Automated Frame Averaging.

Once you've done it in-camera, with a single click, resulting in a "normal" raw file you can treat like any other (that just happens to be unbelievably clean in the shadows and allows long exposure without an ND filter), it's hard to view the capture-a-bunch-of-raws-convert-to-tiffs-manually-keep-track-of-which-ones-belong-to-which-stack-then-load-into-huge-photoshop-document-and-average method as "easily achievable".

It's like learning to do calculus or standard deviations by longhand vs. pushing a button and having the answer. Maybe some will enjoy the extra work, but most people just want the end result, and in-camera Automated Frame Averaging is a far faster, easier, less distracting way of getting there.
 

dougpeterson

Workshop Member
Moreover, with pixel-shift one can sample all 3 base colours for each pixel and skip the demosaicing algorithm and its artefacts.
And introduce new artifacts unique to pixel shifting :).

(Bias alert: my company (Digital Transitions) chooses to sell cameras (Phase One) that do not do pixel shifting and chooses not to sell cameras (e.g. Sinar, Hasselblad, Fuji) that do. So I'm obviously biased. But I also have quite a lot of experience working with clients to evaluate these options. I've posted more of what I've learned from that experience here.)
 