Niels Knudsen, known as the Image Quality Professor, is a digital pioneer who entered the digital photography business more than 25 years ago, specializing in digital camera technology and image processing at Capture One. Niels was the main driver behind the development of Capture One, the world’s first workflow-oriented RAW converter and image-editing software. In short, Niels and his team shaped the idea of “image quality” and post-processing as we know it today. Not only do we go deep into the history of high-quality RAW conversion, but we also discuss with Niels the ideas and technologies of today.
Here is an excerpt from this fascinating interview:
How has the idea of RAW conversion and post-processing changed over the years?
When we introduced our first one-shot camera around 1995, our methods for doing a high-quality RAW conversion were already outstanding compared to the competition. The biggest challenge was computing power. The Bayer conversion of the RAW file took most of the processing time, which at that time was almost four minutes for a 6M file. Noise reduction was nearly nonexistent, as the processing time alone was already such a challenge. Since then, computing power has increased dramatically, and the ability to use GPU parallel processing has allowed us to develop and use far better algorithms for Bayer interpolation and noise reduction, as well as complex tone and colour mapping.
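The Bayer interpolation Niels mentions can be illustrated with a minimal bilinear sketch. This is a toy example under assumed names and an assumed RGGB filter layout, not Capture One’s algorithm; it only shows why each photosite’s two missing colours must be interpolated from neighbours.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample a full-colour image through a hypothetical RGGB colour filter array."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red photosites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green photosites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green photosites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue photosites
    return mosaic

def green_at(mosaic, y, x):
    """Bilinear estimate of green at a red or blue photosite:
    the average of its four green neighbours."""
    return (mosaic[y - 1, x] + mosaic[y + 1, x]
            + mosaic[y, x - 1] + mosaic[y, x + 1]) / 4.0

rgb = np.full((6, 6, 3), 0.5)   # flat mid-grey test image
mosaic = bayer_mosaic(rgb)
print(green_at(mosaic, 2, 2))   # on a flat image the estimate is exact: 0.5
```

Real converters use far more sophisticated, edge-aware interpolation, which is part of why the computation was so expensive in the 1990s.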
Today you can do really amazing adjustments on your RAW file in Capture One.
What is the greatest challenge you face from a software development perspective when working with modern RAW files?
Noise reduction used to be a big challenge, but sensors are now much better, and with today’s computing power our new noise reduction algorithms work very well.
The size of many new sensors has increased. This is not a problem if you have all your files locally, but moving large RAW files through the cloud is still a challenge.
What are the top three contributors to image quality when capturing the image?
The number one contributor here is the image sensor in the camera and how it is being driven. For landscape photography, you want your file as clean as possible, so it is important that the sensor can deliver a super clean base ISO. The number of pixels is, of course, also a parameter, but I would rather have a little lower pixel count than a noisier base ISO.
Then comes the lens. For landscape photography, it is how your lens performs in the range from f/8 to f/14 that matters. It is important that you can find a focal point that gives you a sharp image from centre to corner using the full depth of field.
Third is your camera system’s ability to minimize shutter vibrations during exposure. If the camera cannot control the vibrations, then not even the biggest tripod can hold the camera still enough to keep vibrations below 1/10 of a pixel, and you will end up with soft images.
To what extent do camera hardware and signal processing influence image quality? What about temperature, heat, and similar issues?
Sensors are never perfect, so how you handle defects is very important – that’s where the processing comes in. Whether it takes place in the camera or later in the processing software isn’t a dealbreaker. Typically, you can use more complex algorithms for defect handling, etc., using the processing software running on a personal computer. With the latest sensors, new possibilities are available, such as frame averaging. Those options are available only when camera hardware and image processing work together.
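The frame averaging Niels mentions can be sketched numerically. This is a hypothetical simulation with made-up noise levels and frame count; it illustrates the general principle that averaging N independently noisy frames reduces random noise by roughly the square root of N.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: a flat grey scene captured 16 times, each frame
# corrupted by independent random read noise.
scene = np.full((64, 64), 100.0)
n_frames = 16
frames = [scene + rng.normal(0.0, 8.0, scene.shape) for _ in range(n_frames)]

single_noise = np.std(frames[0] - scene)
averaged_noise = np.std(np.mean(frames, axis=0) - scene)

print(round(single_noise, 1))    # close to 8.0
print(round(averaged_noise, 1))  # close to 8.0 / sqrt(16) = 2.0
```

This is why frame averaging only becomes practical when the camera hardware and the image processing cooperate: the frames must be captured and registered consistently before they can be combined.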
Sensor temperature and noise levels go hand in hand, so it is important to minimize power consumption for a given sensor so that it runs at the lowest possible temperature. All cameras heat up when running live view or recording movies, and this heat has a negative influence on image quality. When operating the sensor at base ISO, this extra noise isn’t something you notice, but if you challenge the sensor with long exposure times and high ISO settings, you will see the negative consequences of a warm sensor.
When they hear “image quality”, most people think of pixels, noise, or sharpness. What are the key variables from the point of view of RAW conversion or editing software?
The quality of the individual pixels is important. With super clean and sharp pixel details, you can simply enlarge your image more or print bigger. But high-quality individual pixels alone don’t do it all. Things that everyone sees, no matter how big you print or display your image, are the colours and the tone mapping.
In Capture One we have always believed that colours are super important and that they should look both as accurate as possible and pleasing. It is important to understand that when you are photographing the real world with 3D objects, the dynamic range and colours are much bigger than what can be shown on a print or a monitor, so both the colours and the dynamic range have to be compressed. This means that it is impossible to reproduce all colours and tones 100 percent correctly. Some colours and tones have to be compressed, and this has to be done in a way that our human eye believes looks right. We don’t buy that 98 percent of the colours in an image are perfect if the remaining 2 percent stand out looking strange.
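The compression of scene dynamic range into display range can be illustrated with a classic global tone-mapping curve, the Reinhard operator. It is used here only as a generic textbook example, not as Capture One’s actual mapping:

```python
import numpy as np

def reinhard(luminance):
    """Reinhard global operator: compresses unbounded scene luminance
    into the [0, 1) display range while preserving tonal order."""
    return luminance / (1.0 + luminance)

# Roughly four decades of scene luminance, far beyond what a monitor shows:
scene = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
display = reinhard(scene)
print(display)
```

Every value now fits in the display range, but the tonal relationships are squeezed, most strongly in the highlights. Choosing where and how to spend that compression so the result still looks right to the eye is exactly the judgment call described above.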
Which parameters are the most difficult to tune?
I believe that colours are the most difficult to tune. Different sensors see colours differently, and simply using a 100 percent automated method for colour tuning has never lived up to our expectations of colour quality. Two cameras can perfectly match colour patches on a test target but still render the images from real life differently. So some manual tuning is needed, and some compromises may have to be made to have a colour reproduction that looks both right and pleasing.
You will find this entire interview in the January edition of the Medium Format Magazine.