I understand the trade offs although I'm not sure necessarily that more pixels & DR are correlated - tonality yes certainly but DR? Maybe you can explain that one?
The diffraction limit of the D800 at f/8 is, for some kinds of work, a big negative point.
If a D900 came along in the future with 50MP and a diffraction limit of f/5.6, would anyone want more pixels in the 35mm format?
The future of digital photography is constrained by the optical limitations of high MP counts; going to bigger sensors is the future.
We saw this happen over the last 3 years:
- 35mm: 1.5x crop sensors to full-size sensors.
- Compacts: tiny 1/8" sensors to 1" or APS-C size.
- Digital MF: the 645 backs went from the P20's 36x36mm sensor to 1.1x crop and now almost full-size 645 sensors.
The "gold fever" of high MP will end very soon. The future of digital photography over the next 10 years will be innovation in the optics, in the way sensors capture light, or in dynamic focus in RAW software.
"Perfection is not attainable. But if we chase perfection, we can catch excellence."
Another clue is that on a per-pixel basis, the CCD in the P25+ actually has slightly better DR than that in the P45+. This is from Kodak's own datasheets and their consistent testing methods.
I'll answer the question backwards. Here's what would have to happen for me to move from a DSLR to a tech camera:
1. The price of the backs would have to come down 75%, or I'd have to have a whole lot more money.
2. Based on what I understand, the ergonomics would have to be much improved, especially live-view focusing (I come from a large format film background, and really appreciate a decent ground glass or its digital facsimile).
And of course, when and if these things come to pass, my decision would be made easier if small format hasn't closed the gap even more. The IQ2 series has already taken one of my requirements off the board: I need long exposures.
If 1 and 2 don't come along, I'll keep fingers crossed for high-end mirrorless cameras. The dslr really should be obsolete very soon. We're just waiting for digital viewfinder technology to be good / efficient enough. As soon as small format can get rid of the strong retrofocus lenses, there will no longer be fundamental differences between the formats. Only matters of degree.
Try Linhof's new ground glass, it's quite nice for the wides. When/if a decent live view becomes available (and affordable) I'll surely stop using the sliding back, though, despite liking the old-school feeling of using ground glass. There's a lot of extra weight in the sliding back, and poor light conditions are always a challenge.
I'm afraid that even when the mirror box is removed, which it will be (the mirror box design is dead, it just doesn't know it yet), we'll be stuck with strong retrofocus lenses because sensors have very poor response at low angles of incoming light. As far as I understand, the old fat-pixel CCDs are still the best in this regard, and all CMOS sensors are worse(?) than most medium format CCDs.
Symmetrical wides also vignette heavily and have small max apertures by design, which makes them less practical for live view.
The future will probably be much like what we see on compacts today: wide lenses will have heavy distortion that is then corrected digitally.
I do like the current tech camera system with symmetrical wides: lenses whose correction is well balanced against sensor resolution (not too much correction, not too little), so that no digital post-processing correction is required. I hope it will continue to exist in the future, but I think it will die. Rationally there's little value in having optical systems that deliver the "finished image" to the sensor; fixing aberrations and distortion digitally is smarter. I don't like it as much, though...
When all systems converge towards similar designs, I think the larger formats will have problems staying relevant in general photography. Today "being different" in various aspects is, I think, at least as important as the quality advantage.
#2 seems solvable. I'm surprised it is still an issue given the state of optical design ... or the potential for better digital transference to the ubiquitous smart phone.
I thought one of the IQ-II Backs now has long exposures. How long is "long"?
Small format mirrorless is tanking according to the makers themselves. Even though more traditional DSLRs are also down in sales, percentage-wise people are generally 1) choosing them over the newer tech by a good margin, or 2) choosing to keep their older DSLRs, which is a very telling factoid in itself. Add 1 and 2 together and you have a clear preference indication. It doesn't matter what we, the minority, think or do.
High-end photo gear is funded by general consumer sales, and an overall 48% drop in one year is a pretty strong signal that what is being done now is not working. What innovations will turn that around is anyone's guess. I'm sure the lights burn late into the night in Japan these days.
One business article I read suggested that larger sensors, not more pixels, are the only real way forward ... IF the makers can reverse the years of "pixel wars" marketing ingrained into the public's mind and make size the new criterion. That, and connectivity to how people use images these days.
If implemented, the above bodes well for MFD since sensor size is a given. Personally, I'd welcome a FF 645 modern sensor of 30 to 40 megapixels that would revisit the charms and character of the old-tech fat-pixel backs, but with fewer or none of their shortcomings.
I too would love a FF 645 sensor. Think of all the lenses we could choose from if "fat backs" were in production; sensors evolved, but not all lenses did.
Foveon sensors, IMO, will play a big part in the future of photography.
Then it gets worse: factor in the current state of image processing apps, and the cell phone is a very attractive option. My wife and daughter have gone to classes on cell phone processing -- forget Hipsta and Instagram, you can manipulate any photo and upload it to wherever via PhotoWizard.
As a family, we put out a calendar every year and give them to friends. Mostly it's places we've traveled to and each of us (5 total) contribute at least one of the images. Not surprisingly, I usually supply the extra image or two -- and this year one of those was taken with my iPhone. I now use my iPhone instead of a P&S, and have got to say that they print up GREAT at 8x10 calendar size...
I am now printing less and less, but am supplying digital images and output services more and more. I see a future with the need for big cameras and more pixels diminishing even further. Sorry...
Story #2. I had lunch with a fashion photographer several months back whose images are generally printed large. He always shot MF. He'd cull his images and present a set to his client. He said invariably a client will choose one of his less than technically perfect captures because of something else they like in it -- and they never seem to notice or simply cannot see the nits. So he decided to shoot with a DSLR on a shoot. His client loved the finished product and of course it was a ton easier on the photographer. His conclusion is his clients couldn't see the differences, or if they could see them, they didn't say anything or didn't care --- and thus he now shoots with a 36MP DSLR because it's so much more convenient for him.
Story #3. Marc and I met on one of the original Leica internet forums about 15 years ago (seriously!). At the time, digital was in its infancy -- and we all knew film was far superior. (And we both still love it, I am sure -- I don't shoot it anymore, don't know about Marc.) A short couple of years later, Marc was selling me his 4MP Canon 1D as he upgraded to a higher-MP Kodak, I believe. And it wasn't long before he was shooting a 16MP DB on MF. Anyway, the point is, at about this time we both started predicting the end of film. Back then we both figured most films would be gone by 2020, except maybe Tri-X (seriously). We were probably a little over-optimistic about film's lifespan. And now we may be similarly overly optimistic about HR digital's future...
The other problem is everyone wants something small. We see it daily here: folks going to mirrorless more often than anything else, fueled by great little Sony cams and such. Leica products as well, with Leica maybe pushing the size factor, but the quality is good enough and smaller than a DSLR to make it worth it. For pro use I have no choice, but as a hobbyist you can pick whatever you're comfortable with, and the trend is nothing bigger than an M cam. The other issue, and I've been seeing this for several years: clients just don't care as much as before. It's an accepted fact: quick, fast and cheap. It's sad, but most clients just don't really care beyond those 3 criteria. Frankly anymore the only time it's a pleasure to me is shooting for me and my needs. I will always shoot the best job I can for clients, but let's be real here, it's about the money. I sold out 40 years ago on that fact. I miss a tech cam but that is more personal to me than business related. I'm taking the day off to drive north, shoot, and cool off. The heat has got to my brain, I need some chilling. LOL
I'm walking out the door with not one Nikon lens, but a Leica, a Sigma and a Zeiss. Now that's a whole discussion on its own. I would rather walk out the door with a 36MP M10 with three lenses that are small, lightweight and good. Oh well, hitting the road; something to think about, or not.
I know one thing for sure: I think less about gear anymore and more about content. Honestly it's the only thing that will keep me above the bullshit in this business.
Photography is all about experimentation and without it you will never learn art.
www.guymancusophotography.com
I have both a Nikon D800e and a Hasselblad CFV-39, both about 36 Megapixels.
They're two different cameras.
That was true with film (which I still shoot in both formats) just as it is now with digital capability.
The Nikon is a street camera, for news, candids, "real life images", etc.
The Hasselblad is for slower, more disciplined shooting.
If I want serious resolution I shoot 8x10 film.
It'll blow the pants off of any digital camera.
I don't care what gear I have.
Things I sell: http://www.shutterstock.com/sets/413...html?rid=61105
I am sure there are sleepless nights in the camera industry right now. They may have to go back closer to the film-era business model of keeping a model in production for longer. And there are benefits to that. Not that people won't complain, but releasing models faster never stopped people from complaining either.
The last time I said anything like that, Don thought somebody hacked my account.... Honestly, though, in business it's always about content.
Hey, did anybody see that FB post from B&H? They've got the Leica M Monochrom in stock....
This statement usually comes from people whose personal methods differ so significantly from standard practice that they're unable to duplicate lab results, so they claim the lab is wrong.
Film manufacturers go through thousands of test exposures, using controlled conditions for both exposure and processing that greatly exceed the accuracy commonly encountered in the field.
How, pray tell, can laboratory measurements be somehow inaccurate, misleading, or wrong?
Dynamic range is a crude estimator of performance. It only tells you something about the sensor at two intensity points, the extremes of its operational range. It says nothing about what goes on in between - how much signal, how much noise, the relative contributions of different types of noise, the wavelength selectivity of the signal. It's one of those "never mind the quality - feel the width!" metrics.
One needs to plot the full noise model of the sensor to get a more complete picture, and even that is not everything: it should be repeated at different exposure times, temperatures, and ISO settings.
If I walk into an all-you-can-eat buffet with an empty stomach from fasting, and a determination to fill it to bursting point , those are the two endpoints of my stomach's "dynamic range". How I progress from empty to full can take many paths at the buffet; multiple bowls of porridge would do the trick; as would a fine-dining banquet of Michelin-starred delicacies. I know which path I would pick!
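To make the "two endpoints" point concrete, here's a toy sketch of the usual engineering definition of DR, using a simplified shot-noise-plus-read-noise model. The full-well and read-noise figures are hypothetical, not measured from any real sensor; the point is only that the single DR number says nothing about the SNR curve in between the endpoints:

```python
import math

def snr(signal_e, read_noise_e):
    """Signal-to-noise with shot noise (sqrt of signal in electrons)
    and read noise combined in quadrature. A simplified model."""
    noise = math.sqrt(signal_e + read_noise_e ** 2)
    return signal_e / noise

full_well = 60000   # electrons at saturation (hypothetical)
read_noise = 12     # electrons RMS (hypothetical)

# Engineering DR: ratio of saturation to the noise floor, in stops
dr_stops = math.log2(full_well / read_noise)
print(f"engineering DR ~ {dr_stops:.1f} stops")

# The metric is silent about everything between the two endpoints:
for stops_down in range(0, 13, 2):
    s = full_well / 2 ** stops_down
    print(f"{stops_down:2d} stops below saturation: SNR ~ {snr(s, read_noise):.0f}")
```

Two sensors with identical engineering DR can trace very different SNR curves between those endpoints, which is exactly the "full noise model" point above.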
2 Member(s) liked this post
+1. Everyone likes to quote the numbers, but few understand the significance. And like Ray, I also believe in standardized testing. But that only takes you so far, especially with applied photography for creative images.
"#2 seems solvable. I'm surprised it is still an issue given the state of optical design ... or the potential for better digital transference to the ubiquitous smart phone."

I agree 100%, and am waiting for it to happen.
"I thought one of the IQ-II Backs now has long exposures. How long is "long"?"

They say it can go up to an hour, I think. But I only need several minutes, and it seems to do this easily. This is the first big wish checked off my list.
"Small format mirrorless is tanking according to the makers themselves."

I'll take your word for this, but it doesn't really speak to my point. I think that, technologically, the SLR is going to be made obsolete by a class of mirrorless cameras that has yet to be introduced. If they can create a digital viewfinder that's as good as the best optical ones, there will be no downsides. Speculation, of course, but I just don't see the justification for continuing with this century-old, Rube Goldberg arrangement of mechanical mirror boxes and compromised lenses.
Moving past the SLR model would have the same implications for both small and medium formats.
There is a long background story on how this is accomplished, but if you're just looking for the spec - there it is.
Unfortunately for photographers, this definition only loosely correlates to dynamic range as defined by "how much shadow-to-highlight scene range can I capture in a way that will be pretty/natural/aesthetic".
Two camera systems can have identical dynamic ranges as determined by algorithm but very different dynamic range in terms of how much of the scene's highlights and shadows can be pleasantly reproduced in a final print, because of factors like:
- difference in the character of noise (gaussian, uniform, clumpy, color or monochromatic), which makes the noise more or less pleasant for a given person's preference ("film-like" vs "digital/artifacty" can be the descriptions ascribed to two images which have, technically, the same amount of noise as numerically measured)
- linearity of color (do the shadows bias towards a certain color; do all colors respond similarly as they fall into shadows?)
- tonal smoothness (is there any feeling of posterization or other abrupt transitions, or are the transitions from deep shadow to quarter tone smooth and pleasing?)
- roll-off into highlights (are near-blown tones rendered as a smooth decay into no-information zones, or do they create strange color/tone artifacts?)
It's not dissimilar to saying a rock concert, a screaming baby, and an engine roar can all have similar absolute loudness in decibels, but I think we'd all agree they differ in how pleasant they are to listen to.
Moreover, DXO and other metrics I've seen published (like the spec sheets from the sensor manufacturers) only tell you the range of raw data during the primary capture, without post-processing. The application of the dark frame (the loose equivalent of the iPhone 5's ability to use a second microphone to listen for ambient noise to increase the clarity of the signal) and the highly-catered debayering, detail extraction, and characteristic-noise suppression/shaping of the combination of a Phase One or Leaf raw file and Capture One is not taken into account. Nor are the cross-effects of what non-blown-channel reconstruction can do for subject matter which is blown in one channel but not another (see also: many a blue sky), for which linearity of color and purity of color response (dependent on, amongst other factors, the spectral transmission characteristics of the bayer pattern used) helps/hurts various cameras. I don't care (other than abstractly) what the 1s and 0s of the raw file are; I care what can be extracted and used in a pleasing way in the raw processing software. DXO would claim that an IQ180 has the same dynamic range whether you process it in C1v6 or C1v7, and they wouldn't be wrong in the strict sense (the back did not, in fact, change its response), but their answer would not be relevant to someone taking pictures and processing in both C1v6 and C1v7 (the user of v7 would find they could consistently use parts of the scene further into its highlights and shadows).
Finally, DXO tends to measure backs/cameras when they are first released (though not always; sometimes the test comes years after release). And anyone who has owned a P1 back or Leaf Credo back from the first day of release (my specific area of greatest experience; this may be true of other backs) knows that the noise/dynamic range has improved as Team Phase One continues to develop and improve the firmware that controls the sensor exposure, readout, and dark frame routines. This is not a big deal, but it's another example of how the question they are answering is not necessarily the question a photographer is asking.
My former life was as a programmer for a data analysis suite for lab replication and analysis of field vibration measurements correlated to acoustic recordings in the automotive industry, for the purpose of improving the experience of a driver/passenger vis-a-vis strange squeaks and rattles experienced on given road surfaces. So lab measurements and the mentality of variable isolation, numeric representations of real-world phenomena, and the scientific method are not foreign to me. But even in that job, all of our effort was to identify potential problematic areas/scenarios/conditions; the final analysis was always to put a person in an actual car, replicate the appropriate conditions, and then ask them "how annoying is that squeak from 1-10?" or "is squeak A or squeak B more annoying?" In any number of fields quantified lab measurements are of immense value, but they are very rarely the entire picture (pun intended).
The story is rarely as simple as a few numbers.
This is one of the primary reasons why we emphasize real-world evaluation (rentals, demos, raw file catalog) so heavily. If someone wants to know how much dynamic range a particular back has my first instinct is always to put said back in their hand and tell them to go shoot the pictures they normally would and see how the camera/files handle. Scientific? Not really, but in my experience it gives the customer the best understanding of what they should expect from the system once-purchased.
To Doug's point, I think it would be helpful if manufacturers and testers were more clear about their dynamic range methodology. It doesn't have to be too complex; we just need a standard that makes sense.
DR is always measured with some notion of acceptable noise. You can always add more gain to the shadows—the question is, at what point does the noise become objectionable?
People call DXOmark inaccurate. It isn't; they've just chosen a S/N standard that is much lower than what most photographers would find acceptable. As a result, they tell me my camera captures 14 stops of DR. In my experience, it captures 10 stops. Maybe 12, optimistically.
This is just a disagreement over standards. If you were using my camera for surveillance, you could indeed dig useable details from shadows in a 14 stop DR image. You could not get shadows that look nice in any conventional sense.
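The "disagreement over standards" is easy to show with the same kind of toy noise model: the only thing that changes between a "14 stop" and a "10 stop" answer is the SNR threshold you are willing to call acceptable. All figures below are hypothetical, and the model (shot noise plus read noise) is a simplification:

```python
import math

def dr_stops(full_well, read_noise, snr_threshold):
    """Count how many stops below saturation the signal can fall
    before SNR drops under the threshold (toy shot+read noise model).
    Steps down in tenths of a stop."""
    stops = 0.0
    signal = float(full_well)
    while signal / math.sqrt(signal + read_noise ** 2) >= snr_threshold:
        signal /= 2 ** 0.1
        stops += 0.1
    return stops

fw, rn = 60000, 12                      # hypothetical sensor
print(f"DR at SNR=1:  {dr_stops(fw, rn, 1):.1f} stops")   # 'engineering' DR
print(f"DR at SNR=10: {dr_stops(fw, rn, 10):.1f} stops")  # a stricter, more photographic cut
```

Same sensor, same data; roughly four stops of difference purely from where you draw the "objectionable" line.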
Let me just ask directly: do you believe that, at a given signal-to-noise ratio, all signal and all noise are equal?
As counter examples to that theory:
- More accurate (narrow band) color discrimination does not show in a s/n analysis, but aids greatly in highlight recovery.
- Color drift in shadows has minimal effect on s/n but makes shadow content (like the shadow side of dark green foliage) far less useable
- An image with clumpy shadow noise of a given s/n ratio will often require a larger insertion of post-processing grain to be pleasing than an image with gaussian noise of the same s/n ratio
- stuck pixels have a large impact on s/n (depending on the outlier-exclusion methods used) but have minimal impact on the final image if they are mapped out properly in post
Last edited by paulraphael; 20th August 2013 at 14:19.
Also, it is not only the number of stops of dynamic range that matters (given an equal standard of measurement) but also the exposure latitude, which is what I think matters to photographers: the ability to make corrections or exposure changes in an entire image or parts of an image, whether pushing up shadows or pulling down highlights. As was mentioned, it is a very hard thing to standardize.
Here is a table I found which shows a very important aspect: where in the exposure range that latitude is located. As you can see, film (color neg) still reigns supreme in highlight recovery potential, even though if you measure, in stops, the latitude of some of the digital cameras, they get very close to matching the total exposure latitude of film. One could say that digital rules the shadows and film the highlights. (* I don't know the source of the table or the methodology used to get the data.)
Yep, cell phones are murdering the P&S. DSLRs seem to be holding on a bit better because some people are used to a bit of creative effects from the various lenses ... but I'd guess that'll be short lived. Unfortunately, as of now, mirrorless isn't taking up the slack:
Mirrorless Cameras Lose Their Shine? NYC and USA Today report with Panasonic-Olympus analysis. | 43 Rumors
CIPA report on 2012 sales in Japan: Fewer Mirrorless cameras produced. | 43 Rumors
Unless some revenue stream replaces that from P&S and diminished DSLR sales (and, as of now, lackluster mirrorless), it'll be interesting to see how it'll affect prices of what is left.
The "tired" factor mentioned is a real possibility ... a notion also touched upon in this blog post, which is a bit over the top IMO, but an interesting take on current events:
The Visual Science Lab / Kirk Tuck: Has the bubble burst? Is that why camera sales in N. America are down by 43%?
The X factor here is some as of yet unknown technology totally changing the game .... and like the death of the P&S, it may come out of left field from other than a camera company.
As of now, I'm perfectly fine where I'm at, and haven't bought a new 35mm DSLR lens for almost four years, and three years for the S2 (except the upgrade of the S to CS). I cancelled the M240 because I think the images are homogenized white bread (IMO). I will NOT return to a manual lens on an AF DSLR to squeeze a bit more IQ out of a sensor with more resolution than I need 95% of the time.
Commercial studio work is all but history, I cut weddings to a handful a year, and only if they are high ticket. I'm working more with lighting, so the S2 with CS lenses is an important difference from other choices.
Content isn't a new focus for me, it has always been the focus. Getting off the gear train, the fussing with every little aspect to get the most from it, just makes keeping that focus easier to accomplish.
Now lighting gizmos ... that's a whole other subject ...
Help!!!! need clean up in aisle Leica . Someone just spilled the koolaid!!!!
And I kept waiting for Marc to post some examples that would make me
desire one...I have turned down four offers over the past few months for
an immediate body...in stock ready to ship.
Now if they will just put the shutter and improved RF in a M9P for me.....
I actually picked up a refurb Silver M8.2 with 2k shutter exposures to lessen my impulse to buy one......
It would be best to avoid the term "highlight recovery." In digital there's no such thing. A digital sensor captures fully separated detail up to the clipping point, and then past that ... nothing.
Recovery is an illusion presented by the raw converter. The default settings cut off some of the highlights and shadows. But those cutoff points are really just arbitrary, based on what the creator of the setting thinks will make a typical image look good.
Yes, I've coded some highlight "recovery" algorithms, and what you do is not recover; you reconstruct, i.e. guess values, and make something that (hopefully) looks good. "Highlight recovery" is a misleading term, but it sounds better than "highlight guessing".
Standard robust raw conversion as found inside cameras typically sets the clip point to where the first channel clips, which is typically green. So the in-camera JPEG will show white when the first channel clips; this means there's a lot of highlight information not shown, but all highlights you see are made from complete RGB information, with no risk of funky highlights. What Lightroom, Capture One etc. can do is show something as long as there is at least one channel that is not clipped, but this means you don't have full color information, so you make some guesses or simplifications. The simplest way is to desaturate highlights towards neutral gray, so you have structure in the highlight but no color. Since the highlight is generally surrounded by color, you often get an illusion that the highlight has color anyway.
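The desaturate-toward-neutral trick described above could be sketched roughly as follows. This is purely a toy illustration, not any real converter's algorithm; the function name and the blending weights are made up for the example:

```python
def reconstruct_highlight(r, g, b, clip=1.0):
    """Toy version of desaturating a partially clipped highlight toward
    neutral: keep structure from the surviving channels, fade the color.
    Illustrative only; real converters are far more sophisticated."""
    channels = [r, g, b]
    clipped = [c >= clip for c in channels]
    if not any(clipped):
        return (r, g, b)              # full information, nothing to guess
    if all(clipped):
        return (clip, clip, clip)     # no information at all: render white
    # Estimate brightness from the unclipped channels only
    survivors = [c for c, was in zip(channels, clipped) if not was]
    lum = sum(survivors) / len(survivors)
    # Blend toward that neutral value; more clipped channels, more desaturation
    k = sum(clipped) / 3.0
    return tuple(min(clip, c * (1 - k) + lum * k) for c in channels)

# Green clipped, red/blue intact: structure is kept, color pulled toward gray
print(reconstruct_highlight(0.7, 1.0, 0.6))
```

The essential point survives even in the toy: once a channel has clipped, the output is an educated blend, not recovered data.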
Most raw converters hide what the underlying raw actually contains and do some preprocessing before you've even touched the first slider. This makes the software more user-friendly and "film-like", but hides from the user how digital photography actually works. Many don't know about this and think they see the "raw truth", and from this come a lot of myths concerning highlight recovery, non-linear response, and other things that simply don't exist in digital.
(Oh well, non-linear response is coming, for cell phone cameras there's recently been a release of a sensor that compresses highlights at capture and thus gains dynamic range.)
In-camera histograms differ between manufacturers/models in how they choose to show clipping. Some leave some space above it, i.e. don't show the real clipping point. The histogram may be luminance-only, and thus show an RGB product rather than individual channel clipping. Learning how the histogram works on your own camera is worthwhile if you often face DR challenges and need to expose optimally. So far I have not come across a camera whose histogram gives you the full information (probably because such histograms would be less "user-friendly"), but even so you can come pretty close to optimal exposure if you just know how it works.
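The luminance-only histogram problem is easy to demonstrate with a toy patch of pixels. The pixel values are hypothetical (normalized so 1.0 is the clip point) and the luminance weights are the rough Rec. 709 coefficients; actual cameras differ:

```python
# Hypothetical 4-pixel patch, values normalized to the clip point
pixels = [(0.98, 1.00, 0.60), (0.95, 1.00, 0.55),
          (0.40, 0.45, 0.30), (0.97, 1.00, 0.58)]

def clip_report(pixels, clip=1.0):
    """Compare a per-channel clip check against a luminance-only check
    (rough Rec. 709 weights). Returns ([R, G, B] clip counts, lum count)."""
    per_channel = [sum(p[i] >= clip for p in pixels) for i in range(3)]
    lum_clipped = sum(
        0.2126 * r + 0.7152 * g + 0.0722 * b >= clip for r, g, b in pixels)
    return per_channel, lum_clipped

print(clip_report(pixels))
```

Here green is clipped in three of the four pixels, yet the luminance-style check reports no clipping at all, which is exactly why a luminance-only histogram can lull you into blowing a channel.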
When companies like Phase claim 13 stops of DR, 2-3 stops are bunched near the top and 2-3 near the bottom, i.e., in the dark, dark shadows and bright/specular highlights.
Monitors and printers cannot distinguish between, say, 251 and 255, or 0 and 5. When you move the sliders, these ranges get separated out. So the data is still there and can be pulled into a range where we can see it. The RAW data is still available in the file to do this, i.e., the RAW converter is not "making up" new data.
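The point about sliders pulling "invisible" separation into the displayable range can be sketched in a few lines. The linear values below are hypothetical; the 8-bit quantizer is only a stand-in for what a display pipeline does:

```python
def to_8bit(x):
    """Quantize a linear 0..1 value to the 0..255 range a display works with."""
    return round(max(0.0, min(1.0, x)) * 255)

# Two hypothetical deep-shadow raw values one stop apart (clip = 1.0).
# A high-bit raw file keeps them distinct; 8-bit display nearly merges them.
shadows = [0.004, 0.008]

print([to_8bit(v) for v in shadows])       # displayed: 1 and 2, nearly identical
print([to_8bit(v * 16) for v in shadows])  # after a +4 stop shadow lift: 16 and 33
```

The separation was in the raw data all along; the slider just rescales it into values the output device can actually distinguish.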
I'm sure those far more educated on the topic can comment on this. From a user perspective, I have tried underexposing/overexposing and compared to highlight/shadow recovery and it does work as in the results are comparable within a reasonable range of 1-2 stops. Of course, noise and the related loss of detail from the noise becomes an issue quickly.
Again, just my experience, I could be wrong.
Where we disagree somewhat is that I believe that RAW converters should be kept out of the assessment of camera sensors (or at least only introduced in a second, parallel stream of assessment which does not replace the primary one). RAW conversion software like C1 introduces a large degree of optimization-biasing towards particular designs, not to mention an impossibly huge parameter space. Not a level playing field.
Doug would say that it's the final photograph that matters, so who cares what the RAW converter did to achieve it? As he said above: "I don't care (other than abstractly) what the 1s and 0s of the raw file are; I care what can be extracted and used in a pleasing way in the raw processing software."
While that "the end justifies the means" type of argument is fine for comparing the aesthetics of photographs, it is invalid for comparing the engineering of camera sensors. We have to keep the two discussions separate.
But we also need more detailed and informative metrics which dig deeper into the available data, like photon transfer curve analysis, to be the norm when assessing the engineering.
When most people talk about camera sensor evaluations, they refer to DxOmark. Now I've long been astonished that DxO don't even touch on long exposure (dark noise) measurements. That too needs to be added to the engineering assessment mix.
Ray, I'm a total "who cares how it is done in the raw, it's what is delivered that counts." Now I say that from a shooter's seat, not a scientist's. All I really care about is what I can draw from the files in post. This is why I'm not the biggest fan of DXO ratings, as they really don't tell us what happens in post. Now it's a nice starting point and a decent way to compare things, but people need to realize it is NOT the ultimate truth. It may say the DR on a certain back is 10 stops, but with a raw converter, even at default levels, it can be 12, let's say. As I usually say, it's part of the puzzle of evaluating a sensor, like anything else, but it's not the final word, and given all the variables we may never have a final word. It's the age-old saying: I don't care how you got the shot as long as you got it. To me it's all about what I can draw out of the file to a final state for printing or delivering to a client.
At least this is the way I look at digital. There really is no "final" but what you consider your best effort at the end of the chain. DXO is in the middle of that chain, not the end.
Now if you take the engineering seat, then what I just said may not fit that criterion of evaluation. So from my seat, a shooter's, the end justifies the means. But I do understand what you are saying; my question is how much weight we put on it.
Maybe a good analogy is: does the end file look like what was on the LCD when you shot it? Interesting thought.
Okay, "guessing" was a bit rough to say, but closer to the truth than "recovery", because the data is clipped and you cannot really know what the value was. However, knowing how highlights usually behave, you can make an educated guess, at least if you have one or two of the three channels left unclipped. Most raw converters don't take the guessing too far, though; instead they only recover/reconstruct/guess as much as can be done with a high chance of looking okay, and blend the rest towards the whitepoint. It would be pretty safe to guess that the center of the sun should be yellow, but few raw converters "recover" that, instead letting it be white in the center.
It's quite hard to make 100% effective use of the top stop, as one channel typically clips a stop ahead of the others, depending on the color of the highlight and the RGB sensitivities (G is often considerably more sensitive, but on my Aptus 75 the channels are actually quite well balanced for daylight, which makes it a bit more efficient at utilizing its dynamic range).
On the bottom stops there's no issue of clipping channels, there it's only about noise. The channel clipping issue really complicates highlight handling.
And of course, the raw converter always needs to blend towards something, typically the whitepoint. Raw conversion that doesn't guess any values of clipped channels will still modify the highlight information so you get a nice blend towards the whitepoint, typically by some sort of gradual desaturation.
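To make the "educated guess" concrete, here is a minimal Python sketch of the kind of reconstruction described above. The function name, white-balance gains, and clip level are all illustrative assumptions, not any real converter's method: a clipped channel's true value is unknown but at least the clip point, so the sketch guesses its white-balanced value from the unclipped channels.

```python
import numpy as np

def reconstruct_highlight(raw_rgb, wb_gains, clip=1.0):
    """Guess clipped channels from the unclipped ones.

    raw_rgb:  normalized raw triplet (sensor clips at `clip`).
    wb_gains: per-channel white-balance gains for the scene illuminant.
    A clipped channel's true value is unknown but at least `clip`, so we
    guess its white-balanced value from the mean of the unclipped
    channels, never guessing below the channel's known floor.
    """
    raw = np.asarray(raw_rgb, dtype=float)
    gains = np.asarray(wb_gains, dtype=float)
    clipped = raw >= clip
    wb = raw * gains
    if not clipped.any() or clipped.all():
        return wb  # nothing to guess, or no information left at all
    guess = wb[~clipped].mean()
    # The floor of a clipped channel's white-balanced value is gain * clip.
    wb[clipped] = np.maximum(guess, gains[clipped] * clip)
    return wb

# G clipped in raw; its white-balanced value is guessed from R and B.
print(reconstruct_highlight([0.6, 1.0, 0.7], [2.0, 1.0, 1.5]))
```

Real converters then blend such guesses towards the whitepoint rather than trusting them fully, as described above.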
The only evaluation that makes sense for a photographer is one they do themselves. There is no objective test for that. Buy the camera, test it, and see if you like it.
Objective testing is very useful as it points to problems. It can never tell you how good something is, at least at an aesthetic level. DxOMark and DPReview give a comparison baseline. The problem is people don't know how to evaluate the information given.
But one thing I do not trust, and I do not mean any offense, is when a photographer takes a picture and then states a camera has x stops of DR. That is guessing, as they never take any measurements to know what the DR of the scene even is. And when you consider that the luminance range of an average daylight scene is 160:1, most scenes don't exceed the DR of many cameras. What I get from that statement is that the photographer is happy with the response and it has proven equal to or better than perhaps his current gear. What I don't know is how that translates to the conditions I work under. I also have to factor in the photographer's taste, as that will bias the test. So while I think personal reviews are good, they present more questions than they answer.
If DxO states a camera has a DR of 13.5 EV and my camera has 12.8 EV, I can reasonably expect the DR will be OK for me. That does not mean it will work like my camera--I don't know if the DR was gained in the shadows or the highlights. I don't know systematically what happens to the file. I am going to have to get the camera and learn how it sees--I never expect one camera to work like another. Cameras are like a box of chocolates...
Relating the single-metric DxOMark "DR" to real usage: if the noise is nice and random, as on all(?) MFDBs and most non-Canon DSLRs, the useful range starts about 3 stops above the engineering floor, i.e. a 13-stop camera provides about 10 stops of useful information. It's a bit a matter of taste, though, but engineering DR means that in the bottom stop the signal is equal to the noise, and that's, well, unusable. Going from saturation downwards, any decent camera, even a Canon, has fairly noise-free colors 7 stops down, i.e. if you push something in post within that range you should have no real problem regardless of what camera you use. Below 7 stops from saturation, real differences between cameras start to show, both in terms of noise and loss of color accuracy.
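The arithmetic above can be sketched in a few lines. The full-well and read-noise numbers below are purely illustrative, chosen so the engineering figure comes out at exactly 13 stops; the 3-stop margin is the rule of thumb from the post, not a standard constant:

```python
import math

def engineering_dr_stops(full_well_e, read_noise_e):
    """Engineering DR: saturation over the read-noise floor (SNR = 1 at the bottom)."""
    return math.log2(full_well_e / read_noise_e)

def useful_dr_stops(full_well_e, read_noise_e, margin_stops=3.0):
    """Rule of thumb from the post: the bottom ~3 stops of the
    engineering figure are too noisy to be practically useful."""
    return engineering_dr_stops(full_well_e, read_noise_e) - margin_stops

# Illustrative sensor: 60000 e- full well, read noise chosen for 13 stops exactly.
fw, rn = 60000, 60000 / 2**13
print(engineering_dr_stops(fw, rn))  # 13.0
print(useful_dr_stops(fw, rn))       # 10.0
```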
However, the discussion about absolute values in stops is hard to have, as there are no good commonly available tools for showing how far from saturation a particular area in a raw file is. Lightroom, Capture One etc. do so much pre-processing before the first slider is even touched that you cannot use those tools for correct file performance comparison. When I've done comparisons I've used the command-line tool dcraw and my own custom software to look at what is actually happening in the raw data before any conversions are made. When comparing two systems one also needs to know how to make a correct ETTR exposure; histograms can behave drastically differently between cameras, so you may heavily underexpose one file if you assume its histogram works the same as the other camera's.
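As a rough sketch of the measurement itself (not dcraw's interface, and not the poster's actual software), here is how one might compute how far a raw patch sits below saturation, given decoded raw values and known black and white levels. The 14-bit levels and the synthetic patch are hypothetical:

```python
import numpy as np

def stops_below_saturation(raw_region, white_level, black_level):
    """Distance (in stops/EV) of a region's mean raw signal below clipping."""
    signal = raw_region.astype(float) - black_level
    headroom = white_level - black_level
    return float(np.log2(headroom / signal.mean()))

# Synthetic 14-bit raw patch: black level 512, white level 16383,
# exposed 2 stops below clipping, with a little sensor-like noise.
rng = np.random.default_rng(0)
patch = 512 + (16383 - 512) / 4 + rng.normal(0, 20, size=(64, 64))
print(round(stops_below_saturation(patch, 16383, 512), 2))  # ~2.0
```

In practice the raw values would come from something like dcraw's document-mode output rather than a synthetic array.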
If we look at the best performers, i.e. the D800 and IQ180, the differences between those cameras in terms of dynamic range are so small they should be irrelevant for any real usage. My digital back, which uses an older-generation sensor, has noticeably less DR, but that still makes little difference for image making. I use grads in backlit scenes with my digital back, and I'd still use grads with a D800. To make a real practical difference, I think we need to wait for those non-linear-response sensors which compress highlights on-sensor, meaning we can use longer exposure times and actually capture more photons in the shadows (which is what I do today with grads). When that is available I'll probably ditch the grads.
How much DR performance affects you also depends on your post-processing style. I like a contrasty "slide film" look in my landscape photography, and then you don't push the darks too much. If one likes the painterly tonemapped look, however, one will push shadows more and notice differences between cameras more.
http://www.hakusancreation.com
Photography is all about experimentation and without it you will never learn art.
www.guymancusophotography.com
Amen. I wish cameras tailored their histograms and clip indicators to the actual raw file, at least when you're shooting raw! It's very frustrating to see such a detailed display of information and have to treat it as a dumb approximation.

In-camera histograms differ between manufacturers and models in how they choose to show clipping. Some leave space above, i.e. they don't show the real clipping point. The histogram may be luminance-only and thus show an RGB product rather than individual channel clipping. Learning how the histogram works on your own camera is worthwhile if you often face DR challenges and need to expose optimally. So far I have not come across a camera whose histogram gives you the full information (probably because such histograms would be less "user-friendly"), but even so you can come pretty close to optimal exposure if you just know how it works.
If I shoot a sky with white clouds and battle DR challenges, I know I can clip one or two channels, as the raw conversion can make a very natural and good-looking reconstruction from the remains; that's why I'd like to know how many channels are clipped.
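A raw-aware per-channel clip report, as wished for above, could look something like this minimal sketch. The RGGB Bayer pattern, white level, and toy mosaic are illustrative assumptions, not any camera's actual firmware logic:

```python
import numpy as np

def clipped_channel_report(mosaic, white_level, pattern=("R", "G", "G", "B")):
    """Fraction of clipped photosites per color channel in a 2x2 Bayer mosaic.

    A raw-aware clip indicator would report this per channel instead of
    a single luminance "blinkie". Pattern order: (0,0), (0,1), (1,0), (1,1).
    """
    report = {}
    for idx, name in enumerate(pattern):
        r, c = divmod(idx, 2)
        plane = mosaic[r::2, c::2]
        frac = float((plane >= white_level).mean())
        # Average over the sites of a color that appears twice (G in RGGB).
        report[name] = report.get(name, 0.0) + frac / pattern.count(name)
    return report

# Toy 14-bit mosaic: only the (0,0) "R" sites are clipped.
m = np.full((4, 4), 1000)
m[0::2, 0::2] = 16383
print(clipped_channel_report(m, 16383))  # R fully clipped, G and B clean
```

With a report like this a shooter could knowingly allow one channel to clip in the clouds while keeping the other two intact.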
What I do in practice today in difficult conditions is that I usually bracket a bit rather than thinking a long time about the (approximate) histogram, i.e. I make one "safe ETTR" exposure and then one or two with more exposure, and then I pick in post which one to use.
The new generation of sensor that uses every pixel for metering might change this.
I like your suggestion for the clip blinkies.
The D800 is a super capable camera indeed, but once you pair a MF back with the latest SKs or Rodies it's *really* hard to look back. My entire Nikon kit is up for sale now.
My advice would be to rent or borrow an MFDB, spend some time with it, possibly make some direct comparisons with a D800/E, and then decide whether you enjoy the results and the shooting experience (on a tech cam, AF body or else). Just don't invest huge amounts without trying first.
I think that most M users are much less touchy these days, because it's obvious that (notwithstanding some correctible WB problems) the camera really does perform very well - even the unsatisfiable Tim Ashley seems to be satisfied.
Marc was very vocally critical of the initial iterations of the S2 as well . . . .
Of course, it doesn't mean the M is a competitor for MF backs - any more than an MP was a competitor for an MF film camera back in the film days . . . or the D800E is a competitor for current digital MF. Big Sensors Have Advantages (just as small sensors do).
Personally, I think that the operational parameters of good modern cameras have more effect on our photography than the camera's image quality, and I think that's true of everyone for whom the image matters most . . . . .
. . . of course, if you're shooting architecture, fashion, glamour - then probably MF provides the best operational parameters, but if you aren't?
Just this guy you know