The GetDPI Photography Forum


DxOMark on G1 and G3

RichA

New member
I can vouch that the G3 is less sensitive than the G1. I noticed this right off the bat, and it makes sense since the pixels are smaller. But despite that, the G3 still has a significant advantage when it comes to noise at equalized ISO settings. I think it's about a 1/2 stop, ISO for ISO.
 

pellicle

New member
But despite that, the G3 still has a significant advantage when it comes to noise at equalized ISO settings. I think it's about a 1/2 stop, ISO for ISO.
I wonder if this is because having more pixels gives the signal-processing algorithms more 'points' to work with in the 'cleanup'.

what about in raw with no signal processing?
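
to put a rough number on that hunch, here's a toy numpy sketch ... the noise figures are made up and it has nothing to do with what Panasonic's engine actually does, but even the dumbest 'cleanup', averaging each 2x2 block of a noisy raw frame, halves the random noise ... and that works on plain raw data before any clever processing

    import numpy as np

    rng = np.random.default_rng(0)

    # made-up numbers, purely for illustration: a flat grey patch recorded
    # by lots of small, individually noisy pixels
    frame = 100.0 + rng.normal(0.0, 6.0, size=(4000, 4000))

    def bin_2x2(img):
        # average each 2x2 block -- the crudest 'cleanup' that extra pixels allow
        h, w = img.shape
        return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    print("per-pixel noise (std):", frame.std())          # ~6.0
    print("after 2x2 averaging:  ", bin_2x2(frame).std()) # ~3.0, i.e. halved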
 

tomh

New member
I wonder if this is because having more pixels gives the signal-processing algorithms more 'points' to work with in the 'cleanup'.
I think you have a pretty good point. But there is more.

The combination of more pixels and more complex processing must really demand a lot of calculations. You could not do this with an older, slower camera computer. The G3 inherits the faster GH2 processor.

Also, I would not be surprised to see battery life take a hit if the computer really goes to town on processing an image in-camera. I think the G3 has a smaller battery due to size constraints, so this might get interesting.
 

pellicle

New member
Hi


The combination of more pixels and more complex processing must really demand a lot of calculations. You could not do this with an older, slower camera computer. The G3 inherits the faster GH2 processor.
yes, now that you mention it, but I wonder how much of this could be done on the analog side with specialised signal-processing hardware ... rather than by brute force on the digital side with binary maths?

Still, looking at how things are going I see some pretty darn hairy stuff being done, and while people keep saying you can't get more from less (and I'd be one of them), they do keep sucking more out of the data than I'd have thought possible.

back some years ago I put this page together (before I started a blog), and if you look at the green-channel RAW it's impressive, the difference between the processed JPG and the RAW ...




go about halfway down the page and hover the mouse over to swap in and out ... check out the number on the apartment ... the camera made it legible, but I couldn't with any software I had at the time ...

and yes, I do think that RAW is processed to some extent before it's written to the RW2 file (after all, it was analog somewhere, right?)
 

tomh

New member
Hi

yes, now that you mention it, but I wonder how much of this could be done on the analog side with specialised signal-processing hardware ... rather than by brute force on the digital side with binary maths?

...

and yes, I do think that RAW is processed to some extent before it's written to the RW2 file (after all, it was analog somewhere, right?)
What I know comes from having gone through code for DCRAW, which is an open-source program that implements the processing to convert raw sensor data into a finished image. DCRAW gives you a good sense of what happens in any raw conversion. Here is what I learned. It applies to a Nikon raw file, but I suspect the same steps are present in Panasonic RW2 files.

When you shoot raw, sensor data is just copied into the raw file and data on camera settings, white balance, etc. is added to the front. But the raw file also gets a jpg version tucked in, which allows the camera to display the image and which becomes the thumbnail picture when you import the raw file into a computer. So the camera computer does have to run through very dense code to create the jpg thumbnail in a raw file. Add more pixels to an image and the camera computer gets very busy.
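
To make the 'jpg tucked in' part concrete, here is a rough Python sketch that pulls the embedded JPEG out of a raw file simply by scanning for the JPEG start and end markers. A real tool would walk the TIFF/EXIF directories inside the NEF or RW2 instead, and the file name is just a placeholder, so treat it as a sketch of the idea rather than a parser.

    # Rough sketch only: grab an embedded JPEG preview out of a raw file by
    # scanning for the JPEG start (0xFFD8) and end (0xFFD9) markers. A real
    # parser would read the TIFF/EXIF directories in the NEF/RW2 instead.
    def extract_embedded_jpeg(raw_path, out_path="preview.jpg"):
        with open(raw_path, "rb") as f:
            data = f.read()
        start = data.find(b"\xff\xd8\xff")        # first JPEG SOI marker
        end = data.find(b"\xff\xd9", start)       # matching EOI marker
        if start < 0 or end < 0:
            raise ValueError("no embedded JPEG found")
        with open(out_path, "wb") as f:
            f.write(data[start:end + 2])
        return out_path

    # extract_embedded_jpeg("DSC_0001.NEF")   # hypothetical file name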

When you shoot jpeg, the camera computer processes the same thumbnail image that the raw file would get, and it also processes a higher-definition jpeg image according to how you set up the camera for sharpness, color intensity, etc. Both the thumbnail and the high-def image are tucked into the final jpg file.

Creating the second, high def image is a LOT to ask of a camera computer and I suspect it takes every shortcut possible. These shortcuts end up losing fine detail, so what you saw is consistent with camera processing limits.

Having seen the DCRAW process, I resolved to shoot everything in raw and let my much bigger MacBook Pro do all the processing without taking any shortcuts. The laptop gets pretty hot when it processes the raw file, so you know a lot is going on.

Incidentally, the DCRAW program offers an option to use an even more complex formula for converting sensor data to a finished image. When I pick this option, each image takes ~5 minutes to process, but the final image is much better than the standard one. I only did it a couple of times, but the test shows that image quality can improve if you throw enough processing power at the conversion.
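
For anyone who wants to try it, the option I mean is dcraw's interpolation-quality switch. The flags below are from memory (run dcraw with no arguments for the authoritative list), but as I recall -q 0 is the fast shortcut interpolation and -q 3 is the slow, high-quality one, which makes for a nice before/after comparison.

    import subprocess

    # Flags from memory (check `dcraw` with no arguments for the real list):
    #   -q 3  slow, high-quality interpolation (-q 0 is the fast shortcut)
    #   -w    use the white balance the camera recorded
    #   -T    write a TIFF next to the input instead of a PPM
    def develop_raw(raw_path, quality=3):
        subprocess.run(["dcraw", "-v", "-w", "-q", str(quality), "-T", raw_path],
                       check=True)

    # develop_raw("DSC_0001.NEF")      # hypothetical file name
    # develop_raw("DSC_0001.NEF", 0)   # same file with the fast interpolation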

I am anxiously waiting for the G3 to be stocked. It looks like a pretty nice camera: light, easy to use, and with most of the image quality of a bigger DSLR.
 

pellicle

New member
What I know comes from having gone through code for DCRAW,
yep, nice program, I use it myself for making comparisons on a 'level playing field'

When you shoot raw, sensor data is just copied into the raw file and data on
this part we are uncertain about. For a start, the sensor data must undergo analog-to-digital conversion, and there is no reason why there could not be other signal processing at this point.

I guess you're aware of the compression applied to NEF files?

camera settings, white balance, etc. is added to the front. But the raw file also gets a jpg version tucked in, which allows the camera to display the image and which becomes the thumbnail picture when you import the raw file into a computer. So the camera computer does have to run through very dense code to create the jpg thumbnail in a raw file. Add more pixels to an image and the camera computer gets very busy.
yep, that's for sure
 

tomh

New member
...this part we are uncertain about. For a start, the sensor data must undergo analog-to-digital conversion, and there is no reason why there could not be other signal processing at this point.

I guess you're aware of the compression applied to NEF files?
I don't enable raw file compression, but yes, that would add to camera image processing. But there is a compression that happens after A/D conversion. Again, this is from reading DCRAW code, not from knowing exactly what the camera does.

The camera computer streams the analog light intensity values out of the sensor, sends the stream to the A/D converter, and then reads each digital sample into memory for processing. Each sample has a small time window in which all processing must complete. That processing applies a non-linear conversion formula, which takes a lot of time. A compromise is to use a linear approximation, but this has the effect of compressing the light intensity values.
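
Here is a toy numpy sketch of what I mean. The square-root curve is made up, just standing in for whatever conversion formula the camera really applies: evaluating the full curve at every sample is the expensive part, and a cheap straight-line approximation through a handful of points stays close in the highlights but squeezes nearby shadow values into fewer distinct output levels.

    import numpy as np

    raw = np.arange(4096)                      # 12-bit linear sensor values

    def curve(x):
        # made-up "proper" non-linear mapping from 12-bit input to 8-bit output
        return 255.0 * np.sqrt(x / 4095.0)

    exact = np.round(curve(raw))

    # the shortcut: evaluate the curve at only 16 points and join them with
    # straight lines -- far cheaper per sample, but it squeezes the shadows
    knots = np.linspace(0, 4095, 16)
    approx = np.round(np.interp(raw, knots, curve(knots)))

    shadows = raw < 64                         # the deepest shadow samples
    print("distinct shadow levels, full curve:", np.unique(exact[shadows]).size)
    print("distinct shadow levels, shortcut:  ", np.unique(approx[shadows]).size)
    print("worst-case error (output levels):  ", int(np.abs(exact - approx).max()))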

A faster computer could do the proper non-linear conversion. For sure, adding more sensor sites shortens the per sample processing window and makes compression more likely on slower computers. I am encouraged that the G3 inherits a faster computer from the GH2.
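
As a back-of-the-envelope check, with numbers I am making up purely for illustration (a single readout channel and a tenth of a second to stream the whole frame off the sensor), the per-sample window is already down in the nanoseconds and shrinks further as the pixel count grows:

    # Back-of-the-envelope only; the 1/10 s frame readout and the single
    # readout channel are made-up assumptions, not Panasonic specs.
    readout_time_s = 0.1
    for megapixels in (12, 16):                # roughly G1-class vs G3-class pixel counts
        samples = megapixels * 1_000_000
        window_ns = readout_time_s / samples * 1e9
        print(f"{megapixels} MP -> about {window_ns:.1f} ns per sample")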
 

Pat Donnelly

New member
Moore's Law rules!

The E-P3 is now marketed with two processors. Soon we will have 16???? All of this improves HDR and makes a smaller sensor very viable. How much smaller, eh?
 