The GetDPI Photography Forum


IQ180 vs Up-Res A7R2

daf

Member
The raw file that I downloaded from post #37 has to be erroneous. I wish that poster could provide some more insight.

Victor


Victor, as I said in the previous post, take a look at the image posted by chrismuc. Take your time to find the horizon line, and you'll understand that this shot is shifted!

(I think the vignetting at the top of the image comes from the image circle, and the vignetting at the bottom comes from the barrel/mechanism, because of the shift.)

This shot is probably shifted by 8mm ...
 

vjbelle

Well-known member
What you have observed does make sense. Maybe that is why Gerald came to the conclusion that the image was shifted... As I thought about it, I failed to take the barrel/mechanism vignetting into account and was looking for vignetting only on one side, not uniform vignetting. Maybe Gerald could chime in or show what happens when the lens is shifted 6-8mm. It's obvious that when the lens is on the front of an IQ180 there is no room for shifting without vignetting.

Victor
 

vjbelle

Well-known member
This is more of what I expected to see if the 17mm TSE were shifted. The lens with the smallest image circle that I own is my 35XL. I shifted it 18mm on my STC to show the vignetting... it is only on one side... I didn't expect to see an equalized vignette like the one shown by the file in post #37.

Victor
 


chrismuc

Member
Victor
You are talking about my IQ180 file. I said from the very beginning that it was taken with the TSE17 shifted 12mm upwards.

Actually, with some knowledge of perspective, one can see it in the picture.

IQ180+TSE17-12mm-shift.jpg

In an unshifted picture with horizontal camera orientation (meaning vertical lines in architecture stay vertical, not tilted), half of the picture must always be below and half above a virtual horizon line whose height is given by the elevation of the camera above the ground. I added this virtual line in red to the picture you are referring to in your post, and as you can see, that line is well below the center of the picture recorded by the sensor. The reason is that the lens was fully shifted upwards.
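That rule can be sanity-checked with trivial arithmetic. A minimal sketch, assuming (hypothetically) an IQ180 sensor height of about 40.4 mm and the stated 12 mm upward shift; the helper name is mine, not anything from the thread:

```python
# With the camera level, the horizon projects through the lens's optical axis.
# Shifting the lens upward moves that axis point downward in the recorded
# frame by the shift distance, so the horizon line lands below frame center.
# Assumptions: sensor height ~40.4 mm (IQ180), 12 mm upward shift.

def horizon_below_center_fraction(shift_mm, sensor_height_mm):
    """Fraction of the frame height by which the horizon sits below center."""
    return shift_mm / sensor_height_mm

frac = horizon_below_center_fraction(12.0, 40.4)
print(f"horizon sits {frac:.1%} of the frame height below center")
```

That puts the red line well into the lower third of the frame, which matches what Christoph describes seeing in the file.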


The vignetting on the top side is due to the image circle of the TSE 17 (about 67mm), which is insufficient for the shifted image.

The vignetting on the bottom side does NOT come from the image circle restriction, but from the back barrel of the lens, or rather the diameter of the Canon EF mount, which is too small (the EF mount was never meant to be used with medium format sensors plus shift).
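The image-circle half of this can be checked with a little geometry. A rough sketch, assuming (hypothetically) an IQ180 sensor of about 53.7 x 40.4 mm and the quoted ~67 mm circle; the function is illustrative, not from any real tool:

```python
import math

# The farthest sensor corner from the optical axis sets the image-circle
# diameter a lens must cover. Shifting the sensor relative to the axis by
# s mm moves that corner to (width/2, height/2 + s).
# Assumptions: IQ180 sensor ~53.7 x 40.4 mm, TSE 17 circle ~67 mm.

def required_circle_mm(width_mm, height_mm, shift_mm):
    """Image-circle diameter needed to cover a sensor shifted by shift_mm."""
    return 2.0 * math.hypot(width_mm / 2.0, height_mm / 2.0 + shift_mm)

print(f"unshifted: {required_circle_mm(53.7, 40.4, 0.0):.1f} mm")
print(f"12 mm shift: {required_circle_mm(53.7, 40.4, 12.0):.1f} mm")
```

Unshifted, the sensor diagonal already sits right at the ~67 mm circle; with a 12 mm shift the required diameter grows well past 80 mm, consistent with the corner vignetting described above.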

This was discussed previously in another thread, and other users (like Gerald) can confirm it.

Christoph
 

torger

Active member
I'd love to make more contributions with demonstrations and to dig further into how C1 does its colors, but I don't really have time for that. The only reason I've dug into it this far is so that my profiling project can produce profiles for it. This kind of work is time consuming, and now I'd rather put that time into improving my own color profiling engine.

Anyway, here's a short one on the color correctness of Capture One. I don't have an IQ180 shot of a subject whose colors are known to me, such as a color checker, so I can't compare for the IQ180 specifically. But for demonstration, here's an example from a P45+:

First Capture One's default profile result:

p45-c1.jpg

Then a calibrated result:

p45-calib.jpg

Both have the "standard film curve" applied. The calibrated profile is generated by an in-development DCamProf using its "neutral tone reproduction operator", whose purpose is to maintain color appearance over changed contrast (increased in this case). Feel free to compare to your own color checker.

My own observations of C1: dark blue is much too bright, the left orange is more yellow than the actual orange, and overall saturation is a little too low (with exceptions, for example in the foliage range). Those are the larger errors; there are smaller ones too.

The DCamProf result may be a tad high on saturation, but overall the color appearance match is far better, although not 100% perfect. Doing a side-by-side check with a calibrated monitor and a real color checker under sane light should make this obvious to anyone.

Now, there are reasons why C1 has made the colors like that. That dark blues are too bright is probably because they have reduced or even skipped lightness corrections, as those can in some circumstances negatively affect gradients; and while lightness errors are easy to detect in a side-by-side comparison, they are the least objectionable kind and hard to detect if you can't compare directly with the original.

While the lightness errors are probably largely a side effect of hardware limitations, the hue errors are most likely by design. I've noticed that skin tones are often made less yellow than reality, for example, probably because many think that's more flattering. More saturation in the foliage range may be a landscape adjustment. Why they have brought orange and yellow-green so close together I don't really know. It could be a side effect: sometimes you increase color separation in one color range, which means you must decrease it in another.

Note that I do not believe that the most accurate color means the "best" color, but I do believe that it may not be in the photographer's best interest to have the manufacturer design looks for them. Maybe the photographer should be more in control.

During my work with camera profiling I have noticed how easy it is to "fool" oneself and think "wow, this looks really accurate", for example when you shoot a subject in one room and look at the screen in the next. When you move on to side-by-side comparisons you notice that the bundled profiles are not very accurate at all. This does suggest, though, that color doesn't really need to be accurate for most applications. It has made me very skeptical about various claims of "accurate" color out of the box. We see what we want to see, and the eye/brain is very forgiving and good at filling in.

The P45+ is an old sensor with less overlap between its color filters than a more recent one like the IQ180. This makes it easier to render saturated colors with less noise (good, as the sensor is a bit noisy too), but harder to separate colors. With increased signal-to-noise ratio, color filters can now overlap more, allowing finer color separation, at the cost of more noise when rendering saturated colors (which isn't as harmful as before thanks to the better S/N ratio). Most of the differences you see are designed into the profile, though.
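The overlap/noise trade-off can be illustrated with a toy calculation. The two 3x3 matrices below are hypothetical mixing matrices, not real P45+ or IQ180 data; they only show the principle that strongly overlapping filters need a correction matrix with large coefficients, which amplifies per-channel noise:

```python
import numpy as np

# Converting camera RGB to a standard space applies the inverse of the
# filter-mixing matrix. The more the channels overlap (large off-diagonal
# entries), the larger the inverse's coefficients, and uncorrelated
# unit-variance channel noise is amplified by the row norms of that inverse.

def noise_gain(mix):
    inv = np.linalg.inv(mix)
    # Per-output-channel noise amplification for unit-variance input noise.
    return np.sqrt((inv ** 2).sum(axis=1))

narrow = np.array([[1.0, 0.1, 0.0],   # little overlap between channels
                   [0.1, 1.0, 0.1],
                   [0.0, 0.1, 1.0]])
wide = np.array([[1.0, 0.6, 0.2],     # heavy overlap between channels
                 [0.5, 1.0, 0.5],
                 [0.2, 0.6, 1.0]])

print("narrow filters:", noise_gain(narrow))
print("wide filters:  ", noise_gain(wide))
```

The heavily overlapping case shows clearly higher noise gain, matching the point that finer color separation from overlapping filters costs noise when rendering saturated colors.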

I'd love to see a similar comparison made with an IQ180. I think you would get a similar result: a perfectly alright, realistic look, but with a few quite obvious color errors that for various reasons have been designed into the profile. So forgive me: until I see such a demonstration I cannot really take anyone's word that the bundled profiles provide accurate color.
 

Shashin

Well-known member
Film curves will not result in accurate color. Not a great default to work with.

Posting a color image on the web is not a great way to have a discussion about color accuracy. Besides, how are you shooting this target? What are your viewing conditions? How are you measuring color?
 

torger

Active member
Film curves will not result in accurate color. Not a great default to work with.

Posting a color image on the web is not a great way to have a discussion about color accuracy. Besides, how are you shooting this target? What are your viewing conditions? How are you measuring color?
I assume the interested persons have calibrated screens and color-managed web browsers that can display sRGB images in the intended color space. I do. The color errors are large enough to detect with uncalibrated screens too; if you have a wide-gamut unmanaged screen the colors will be over-saturated, but that, for example, the orange and yellow-green are really close together should be quite easy to see anyway.

I chose a film curve (or more specifically a contrast S-curve) because very few use linear rendering for general-purpose photography; C1's rendering is not linear in any case, as the profile LUT itself applies a small curve. If we were discussing reproduction work we would be discussing C1's "Cultural Heritage Edition", whose accuracy should be good according to what I have heard (it has different profiles).

Photographers' interest in accurate colors doesn't end because of the curve, though. I generally try to use the term "color appearance", as "accuracy" makes people think of reproduction work, so sorry for using the forbidden word. Color appearance modeling is about taking colorimetric color and transforming it for a specific scene, reproduction device and viewing condition so that it appears the same as the original. The tone curve is actually a primitive color appearance model. Due to the Stevens and Hunt effects and other psychovisual effects, a linear rendering of, say, a sunny outdoor scene will appear flat when displayed with a linear curve, but with an S-curve the original appearance in terms of contrast can be restored. The challenge of color appearance remains, though.

Color science hasn't done much modern research on tone curves; instead it went straight to more complex scene/image appearance models, which we as photographers would call tone mapping. Fairchild and Johnson's iCAM was an early attempt. If you want to retain color appearance under a contrast curve there's thus not much established research to rely on, so you have to come up with your own model, and yes, it will be full of compromises, as you can't take local contrast or other spatial properties into account. It can however be made to work decently well in a broad range of cases. I've developed one for DCamProf with the help of my own and others' expert eyes, and I'm sure Phase One has its own in-house model. A basic starting point is that the more contrast you apply, the more you must increase the saturation, but the model is more complex than that.
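That basic starting point (more contrast means more saturation) can be sketched as a toy operator. This is an illustration of the principle only, with made-up parameters; it is not DCamProf's or Phase One's actual model:

```python
import numpy as np

# Toy tone reproduction operator: apply an S-shaped contrast curve to
# luminance only, then scale chroma by a damped power of the curve's local
# slope. Midtones that gain contrast (slope > 1) also gain saturation;
# compressed shadows and highlights (slope < 1) lose a little.

def s_curve_and_slope(y, strength=4.0):
    f = lambda x: 1.0 / (1.0 + np.exp(-strength * (x - 0.5)))
    scale = f(1.0) - f(0.0)                  # renormalize so 0 -> 0 and 1 -> 1
    out = (f(y) - f(0.0)) / scale
    slope = strength * f(y) * (1.0 - f(y)) / scale
    return out, slope

def apply_tone(y, chroma, strength=4.0, damp=0.5):
    y = np.asarray(y, dtype=float)
    out, slope = s_curve_and_slope(y, strength)
    # damp < 1 keeps the chroma compensation conservative.
    return out, np.asarray(chroma, dtype=float) * slope ** damp

y_out, c_out = apply_tone([0.2, 0.5, 0.8], [0.3, 0.3, 0.3])
print(y_out, c_out)
```

A real operator works in a proper appearance space and handles hue stability, gamut and spatial effects; this sketch only shows why saturation has to track the curve's slope at all.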

Anyway, all color science is based on psychovisual experiments, mainly color matching experiments: watch one color, compare it to another, say whether they match. So just do it yourself in this case. Hold a color checker in one hand under a print-viewing lamp, or in window daylight, and compare to what you see on screen, as you would with a print.

The accuracy of color will be subjective, yes, of course, like the whole field of color science is. I strongly doubt, though, that anyone would disagree that C1's dark blue is too bright, or that the yellow-green and orange are too close. The other errors are smaller and may need a more careful setup to evaluate thoroughly. As we have a contrast curve, you should let the eye adapt to the overall contrast and view the image as a whole; that is, don't mask things off so that you see a single color patch (which would then appear too saturated).

The purpose of the demonstration is to show that C1 doesn't render color with the original color appearance, as that is not its intention. If you think the example can't prove that even for the P45+, I can't really provide any further arguments to convince you on that point, and that's fair enough. At least I tried.
 

Quentin_Bargate

Well-known member
This sort of stuff only interests pixel peepers. For practical purposes, including demanding fine-art applications, the difference is irrelevant below very large wall-sized prints.

Moreover, the practical reality is usability in the field. I'm no longer prepared to lug the back-breaking weight of MF cameras around the place.

A few years ago we marvelled at 39MP P45 CCD backs. If we can achieve similar performance from a Sony A7RII, but with far less weight and far better high ISO, then as far as I am concerned it's time to kiss MF goodbye.
 

Shashin

Well-known member
The purpose of the demonstration is to show that C1 doesn't render color with the original color appearance, as that is not its intention. If you think the example can't prove that even for the P45+, I can't really provide any further arguments to convince you on that point, and that's fair enough. At least I tried.
Well, it is a film rendering, so I have no doubt you are not getting accurate color, and blaming C1 for bad color with a film profile is a bit like blaming Instagram filters for not producing accurate color. But you are not even showing that you can reproduce accurate color, so I am not sure of the point, except to say that you should profile your camera and shooting conditions. And stop using a film profile.

I am not sure why you are saying color scientists are not working on tone curves. Unless you control contrast you will never get accurate color, which is why the film profile works against you. There are decades of work on tone curves and color, which is why there was no real contrast control for color film images. Sure, you had some control with contrast masking or with dye-transfer printing, but it was limited. Basically, if you change the tone curve, you change the color and the relationships between colors. You cannot maintain accurate color and change contrast; linear response is great for color.

Color is the human response to light, so it has to be visual. But that does not mean you cannot measure it. And even with visual inspection, the viewing conditions are very specific: not just the color of the light (including a high CRI), but its intensity as well. Window daylight is a horrible condition to view under if you are judging color, as it is too variable. You need to be very specific about viewing conditions and shooting conditions to do this work. Leaving it solely to the visual system to provide the answers will not work very well.
 

torger

Active member
Well, it is a film rendering, so I have no doubt you are not getting accurate color, and blaming C1 for bad color with a film profile is a bit like blaming Instagram filters for not producing accurate color. But you are not even showing that you can reproduce accurate color, so I am not sure of the point, except to say that you should profile your camera and shooting conditions. And stop using a film profile.

I am not sure why you are saying color scientists are not working on tone curves. Unless you control contrast you will never get accurate color, which is why the film profile works against you. There are decades of work on tone curves and color, which is why there was no real contrast control for color film images. Sure, you had some control with contrast masking or with dye-transfer printing, but it was limited. Basically, if you change the tone curve, you change the color and the relationships between colors. You cannot maintain accurate color and change contrast; linear response is great for color.

Color is the human response to light, so it has to be visual. But that does not mean you cannot measure it. And even with visual inspection, the viewing conditions are very specific: not just the color of the light (including a high CRI), but its intensity as well. Window daylight is a horrible condition to view under if you are judging color, as it is too variable. You need to be very specific about viewing conditions and shooting conditions to do this work. Leaving it solely to the visual system to provide the answers will not work very well.
One needs to be able to distinguish between small errors and large errors. If we can't do that, we can't really have this type of discussion, as we'll talk past each other. An orange that becomes yellow is a large error. A slight change in saturation is a small error. If you think a tone curve must inevitably lead to color errors à la Instagram effect filters, then I think it will be difficult to discuss color in any meaningful way. I know the limits of color constancy and chromatic adaptation in human vision well enough to have a feel for what is a large error and what is a small one, what requires a stable, well-defined viewing condition, and what can do with less.

The reason I'm saying color scientists don't work with tone curves (or tone reproduction operators, to use the scientific term) is that most of the work dates back to before digital photography. That's not entirely true, though: in a paper from 2002, "K. Devlin, A review of tone reproduction techniques", you get a nice overview of some of the techniques, both spatial and non-spatial. But it's 13 years old, and what I find after that is "tone mapping", i.e. spatially varying tone reproduction operators. The algorithms we see in raw converters are in-house and proprietary, not standardized public algorithms. The CIE has no recommended tone reproduction operator. I think most work on tone curves is actually done in video, not stills. There are many raw converters today, and most have different methods of applying tone curves, as there is no standard method to just pick. Adobe uses their RGB-HSV hue-stabilized tone curve. Then, on top of the tone curve, raw converters usually apply subjective adjustments, which is what I see C1 has done and what I tried to demonstrate, though apparently the demonstration isn't working that well.

Concerning measuring color: what we get when we use spectrometers to produce tristimulus values is the spectrum integrated against the "standard observer", which is based on psychovisual color matching experiments made in the 1930s, if I remember correctly on some 15 trained individuals. The standard observer has been reviewed several times since then, but the 1931 standard observer remains the standard, as the CIE has a principle of not changing a standard for only a small improvement. That is, what we measure is how well something matches a psychovisual experiment. Chromatic adaptation transforms are the same: mathematical models fitted to psychovisual experimental data. We always end up with human eyes and judgments; so far no measurement probes have been put into the eye or brain to measure an actual signal.
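The "integrated against the standard observer" step looks like this in miniature. The Gaussian curves below are crude illustrative stand-ins for the color matching functions, not the real tabulated CIE 1931 values, so the numbers are only indicative:

```python
import numpy as np

# Tristimulus values come from integrating a spectral power distribution
# against the observer's color matching functions (CMFs).
# The CMFs here are rough Gaussian stand-ins with approximately the right
# peak positions, NOT the standard's tabulated data.

lam = np.arange(380.0, 781.0, 5.0)   # wavelength grid in nm
dlam = 5.0

def g(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

xbar = 1.06 * g(lam, 599.0, 38.0) + 0.36 * g(lam, 446.0, 19.0)
ybar = g(lam, 556.0, 47.0)
zbar = 1.78 * g(lam, 449.0, 23.0)

def to_xyz(spd):
    # Riemann-sum integration, normalized so a flat spectrum gives Y = 1.
    k = 1.0 / (ybar.sum() * dlam)
    return k * dlam * np.array([(spd * cmf).sum() for cmf in (xbar, ybar, zbar)])

print("equal-energy XYZ:", to_xyz(np.ones_like(lam)))
```

Swap in the real 1931 tables and a measured spectrum and this is essentially what a spectrometer's software does.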

Today we know that the clean split between a colorimetric tristimulus value ("eye signal") and "brain processing" is not really correct; things happen in the eye too that change which signals are sent to the brain. However, color science is a pragmatic field, so the split has been maintained and the models have instead been adapted to compensate. This is quite clear in the case of dark adaptation.

To evaluate the result of a tone reproduction operator we can't rely on experimental data or any standardized model derived from it, because none has been made. We do have models for various effects, most of which ended up in the CIECAM02 model (extensively used in DCamProf, by the way), but it still does not provide adequate tools for evaluating tone reproduction operators. For that, the most effective way is visual A/B comparison with the linear colorimetric rendering, letting the eye adapt for contrast, done with a number of trained individuals. This is what I do.

For the particular comparison a few posts back, I think the errors are large enough not to require a rigid test setup. Anyone interested in a more thorough test can do it themselves, though, and they will arrive at the same conclusion: C1 has adjusted color subjectively to provide a look; their intention is not to keep color appearance as true to the original as possible.

The original question was whether the IQ180 provides more accurate color in C1 than the A7RII because of better sensor hardware, as some seem to believe. By showing that C1 doesn't really aim to provide accurate colors (and also how much different profiles can change color), I'm trying to show that we can't assume color differences in C1 are due to hardware differences. I don't think the IQ180 sensor is really superior to the A7RII's concerning color, but I do think the color profiles for the IQ180 may be better designed; actually, I'd be surprised if they weren't. It would be a mistake for Phase One to make a better profile for a competitor.

When you pay tens of thousands of dollars for hardware you may think you're buying the hardware, but a very big part of the value sits in the raw conversion and profiles. Another pricing model, which I think would better mirror the value, would be to cut the IQ180 price to a quarter and put the remaining money into a special, very expensive C1 software license ;). If pricing were like that, I think photographers would start trying to get more control of color rendition and actually make their own profiles, which few do today. It's a chicken-and-egg problem, though, as there aren't really any good tools available. (My own project is a command-line-only tool, so it won't be broadly used, although it might become commercial with a GUI sometime in the future.)
 

torger

Active member
The idea of profiling for each condition is interesting, but it's just not going to happen until cameras can do it automatically on the spot. We're "stuck" with general-purpose profiles for some time still.

Linear response is great for reproduction photography, but that is about it. It's really bad at making a perceptually realistic rendering of a real scene: it allocates far too much of the scarce reproduction-media dynamic range to the highlights and makes the overall scene look unrealistically flat and dull. While individual hues will surely appear accurate, assuming a colorimetric profile, contrast will not.
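The "too much range to the highlights" point is simple arithmetic, and worth seeing in numbers. A small sketch (the function names are mine; the 8-bit and gamma-2.2 figures are just the standard textbook setup):

```python
# In a linear encoding, each photographic stop down from clipping gets half
# of the remaining code values: the brightest stop alone takes half the
# entire range, leaving the shadows only a handful of codes. A gamma-encoded
# (here 1/2.2) signal spreads codes far more evenly across the stops.

def linear_codes_per_stop(bits=8, stops=6):
    top = 2 ** bits
    return [int(top / 2 ** s - top / 2 ** (s + 1)) for s in range(stops)]

def gamma_top_stop_codes(bits=8, gamma=2.2):
    # Code values the top stop occupies with a 1/gamma transfer curve.
    return round(2 ** bits * (1.0 - 0.5 ** (1.0 / gamma)))

print("linear, per stop:", linear_codes_per_stop())  # [128, 64, 32, 16, 8, 4]
print("gamma 2.2, top stop:", gamma_top_stop_codes())
```

So linearly, six stops already consume almost the whole 8-bit range, which is exactly why a straight linear rendering leaves nothing sensible for midtones and shadows.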

So we apply contrast. Contrast will indeed change color appearance, even if we apply it in the luminance channel only. This can be compensated for, but there are no standardized models for doing so, and it's not that easy to implement either. In the proprietary world there's a mix of doing this compensation (in proprietary ways that vary from manufacturer to manufacturer) and making subjective adjustments to create a look.

Done right, a profile that applies contrast and compensates for the appearance effects will make a more realistic rendering of real scenes than a linear, purely colorimetric profile.

The secret to "good" camera color lies in how the relationship between contrast and color appearance is handled. Most camera profiling software can't do it, and most photographers can't either, which leaves the power over color with the manufacturer. This is a very strong "weapon" indeed, as it can be the decisive factor in choosing, say, an IQ180 over an A7RII.

With spatially varying appearance models, which much of the more recent color science research has focused on, the concept of a global contrast curve has been dropped. The idea there is that you start with a linear colorimetric registration of the scene, including absolute light levels, and then the eye-brain's response is simulated in an appearance model and translated to the output medium. Those models are not entirely stable, though, and I don't see the contrast-curve concept being dropped in the still photography world anytime soon. No raw converter on the market today is based on this concept, although you can do it to some extent in RawTherapee via its CIECAM02 mode.

Commercially it's in any case better to employ the old-school model with contrast curves and subjective looks, as it makes it easier to differentiate cameras, and even to lock people in. I've had requests from people who want to recreate the Leaf look on Phase One digital backs: they switched to P1 hardware due to business decisions, but are so used to working with Leaf's distinct look that they have a hard time getting the same result with Phase One, despite the exact same sensor hardware. I'm not surprised; it is hard.

Although it would be quite possible to make "look translators", ICC-to-DNG profile conversions and so on, it's not something I will spend my valuable time on. I'm much more interested in creating something new than in reverse engineering others. I've had to suffer through plenty of reverse engineering already.

I don't like lock-in and imposed looks, so I have used my own custom profiles for many years. However, I have never been fully pleased with what the profilers can do, which led me to write my own software for it.
 

torger

Active member
There are full spectrum color meters like the Sekonic C-700. Would that be useful?
I guess you mean that when you shoot you record the ambient light spectrum (scene illuminant) using this type of device, and then in the raw conversion step you could somehow provide the light spectrum and the profiles would adapt.

When you profile a camera, having the reflectance spectra of all the test patches and the illuminant spectrum is somewhat useful, as you can then calculate the exact tristimulus values of the standard observer and calibrate against them. If your light is reasonably within the D50-D65 range the profiling result will be almost the same anyway, so I'd say it's not that critical. I do record the spectrum myself, though, when I profile a camera.

If we designed a new raw conversion pipeline that knew the color filter responses of the camera (known as spectral sensitivity functions, SSFs, in the scientific literature), we could generate new profiles "on the fly" from the spectrum of the illuminant. Whether that would really provide relevant value I don't know. I doubt it would without a lot of further appearance modeling on top, as the lights that make color appearance really complicated to match are the ones where other things happen, like dark adaptation, partial chromatic adaptation and so on. It would be a very interesting research project.

It's worth noting, though, that all established raw conversion software is sort of stuck in "film mode". There are a lot of new ways to model and render color, but no one is trying them, and that's no surprise, as they would be extremely disruptive to the existing color conversion pipelines. If it ain't broke, don't fix it. No one is asking for it, and I'm actually not sure how much value it would provide. I'd love to experiment with it myself, though... but there's only so much time!
 

eleanorbrown

New member
I will also agree that comparing an upsized image file to one that has not been increased in pixel count is not a fair comparison. This comes from one who, for years, has been looking for something smaller and lighter that will satisfy me on quality as much as my Phase One backs have in the past, from the P25 on to my current P65+. I really don't need more than 40-50 megapixels these days, which is one of the reasons I have the A7RII. I just did a comparison of a wall of fishing equipment and ski stuff hanging in my garage using both my Otus 55 1.4 and my Sony/Zeiss 55 1.8: same lighting, same ISO at 50, same C1 settings, etc. I admit I'm not a pro at making controlled comparisons, but once one sees the Otus 1.4 against the Sony 55 1.8 it becomes obvious that to really compare against medium format digital one has to have an incredibly good 35mm lens, as in the Otus 55. My Phase One is not here in Colorado with me, so I can't make that comparison now. This is only to make the point that if one is going to compare medium format digital to the 35mm A7RII, lens quality does come into play, as does pixel count. Eleanor

Hi,

My take is really that once we upsize an image we are losing image quality.
Best regards
Erik
 

jerome_m

Member
I guess you mean that when you shoot you record the ambient light spectrum (scene illuminant) using this type of device, and then in the raw conversion step you could somehow provide the light spectrum and the profiles would adapt.
I don't mean anything in particular; I don't know how to use such a device. I just wondered whether the device is useful, so I asked.

It's worth noting though that all established raw conversion software is sort of stuck in "film mode", there's a lot of new ways to model and render color, but noone is trying it. And it's not a surprise as it would be extremely disruptive to the existing color conversion pipelines.
There is actually a lot of work being done on this in the movie industry at present. They have the following problem: they want to use LED lights, which need far less electricity. The LEDs look like daylight to the human eye, but not quite so to the cameras. This is very much a problem for make-up: the make-up artist chooses colours and they don't match when the cameras run...
 

torger

Active member
I don't mean anything in particular; I don't know how to use such a device. I just wondered whether the device is useful, so I asked.

There is actually a lot of work being done on this in the movie industry at present. They have the following problem: they want to use LED lights, which need far less electricity. The LEDs look like daylight to the human eye, but not quite so to the cameras. This is very much a problem for make-up: the make-up artist chooses colours and they don't match when the cameras run...
There's a lot of development going on in the movie industry software-wise; if one wants to learn something new, it seems to be a good place to be. The problem with the LEDs is that they're not full spectrum, so the differences between cameras and eyes are exaggerated. I have experimented too little with profiling under peaky spectra to know what the possibilities and challenges are. In theory, making a specific profile for the LED would make it work, but I'm not sure how "stable" such profiles are, that is, how well they would work across broad color ranges.
 

chrismuc

Member
Ok, back to the original topic, "too many pixels" vs. "way too many pixels" ;-)

I did comparison shots between the Sony A7RII (+ Metabones IV) and the Contax 645 + IQ180 at three roughly equivalent focal lengths.

I uploaded full-res JPEGs. I tried to match the colors roughly (using standard profiles) in ACR; sharpening about 60%/0.5 pixel, CA corrected, shadows lifted up to 50%.

1. A7RII + Zeiss CY 21f2.8 @ f8 vs. Contax 645/IQ180 + Contax 35f3.5 @ f11

CF003962-Contax645+IQ180+35f2.8@f11-1200.jpg

Sony A7RII
https://dl.dropboxusercontent.com/u/18437364/pictures/[email protected]

IQ180
https://dl.dropboxusercontent.com/u/18437364/pictures/[email protected]


2. A7RII + Sigma 50f1.4 Art @ f8 vs. Contax 645/IQ180 + Contax 80f2 @ f11

CF003963-Contax645+IQ180+80f2@f11-1200.jpg

Sony A7RII
https://dl.dropboxusercontent.com/u/18437364/pictures/[email protected]

IQ180
https://dl.dropboxusercontent.com/u/18437364/pictures/[email protected]

3. A7RII + Zeiss ZE 135f2 Apo @ f5.6 vs. Contax 645/IQ180 + Mamiya 200f2.8 Apo @ f8

CF003976Contax645+IQ180+Mamiya200f2.8@f8-1200.jpg

Sony A7RII
https://dl.dropboxusercontent.com/u/18437364/pictures/[email protected]

IQ180
https://dl.dropboxusercontent.com/u/18437364/pictures/[email protected]

Btw., the most impressive result for me is the simply stunning IQ of the Zeiss 135 Apo.

Enjoy, Christoph
 

Dan Santoso

New member
Thanks for the comparison. I only downloaded the last sample, but it looks like the IQ180 has better DR than the A7RII, unlike the previous statement here.
 