In the last few months, I’ve been examining ways to get the highest signal-to-noise ratio (SNR) in digitally-captured images. I started with ETTR, then explored the issue of optimal in-camera ISO setting. Now I’d like to explore how to get your camera punching above its weight – producing images with the SNR of a larger sensor.
There’s no free lunch here. A major limitation of the technique I’m about to propose is that it won’t work unless your subject is motionless, and your camera is as well. That means a tripod, and that means that the title of this post is a bait and switch; sure, you’ll be able to put your camera in your pocket, but that tripod won’t come close to fitting in there.
What I’m proposing is averaging several exposures. This is not a new idea, but it’s not commonly performed by ordinary photographers. It’s not something that you’ll do routinely, but it’s a great tool to have on your belt should you be faced with a situation where you want the absolute best image quality you can get, and you don’t have a camera with a big sensor with you.
To review the relationship between sensor size and SNR, here’s the rule:
SNR is proportional to the linear dimension of the sensor.
Another way of looking at it is that SNR is inversely proportional to the lens multiplier (crop) factor.
We know something about SNR and averaged exposures: it’s proportional to the square root of the number of exposures averaged. You can give your four-thirds camera the SNR of a full-frame one by averaging four exposures, and it’ll beat the SNR of a medium format camera if you average 16 exposures.
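The square-root relationship is easy to check numerically. Here’s a quick simulation, a sketch of mine in Python with NumPy rather than anything from the post, that averages N synthetic noisy frames and measures the resulting SNR:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0  # arbitrary constant "scene" level

def measured_snr(n_frames, noise_sigma=10.0, n_pixels=100_000):
    # Each frame is the constant signal plus independent Gaussian noise.
    frames = signal + rng.normal(0.0, noise_sigma, size=(n_frames, n_pixels))
    avg = frames.mean(axis=0)  # average the frames pixel by pixel
    return avg.mean() / avg.std()

snr1 = measured_snr(1)
for n in (2, 4, 16):
    print(n, measured_snr(n) / snr1)  # should land close to sqrt(n)
```

With a noise model like this, averaging four frames roughly doubles the SNR and sixteen frames roughly quadruples it, which is the sensor-size equivalence claimed above.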
That’s the theory, anyway. I thought I’d give it a try. I took a Sony RX-1, which arguably doesn’t need any SNR help, and aimed it at this scene, which will be familiar to those of you who’ve been reading this blog for a while.
I needed to simulate the shadow area in a scene with a reasonably high dynamic range. I set the camera to underexpose from the ETTR exposure by four stops. I focused manually. At ISO 100, I made sixteen exposures. You know, the RX-1 continues to surprise and delight me in little ways: how rare it is to have a camera that can use a standard cable release these days, and how convenient it is when it happens. Here’s what the scene looked like four stops underexposed:
Then I set out to create a set of averaged images. You can do this in Photoshop, but it’s kind of painful: export the stack of images from Lightroom as layers, convert the stack to a smart object (Layer > Smart Object > Convert to Smart Object), and then choose Layer > Smart Object > Stack Mode > Mean. Frustrated with all the Photoshop steps, I wrote a little Matlab program to do it.
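The Matlab listing isn’t reproduced here, but the essence of it is only a few lines. Here’s a rough Python/NumPy sketch of the same approach (the function name and the commented-out file-reading line are my assumptions, not from the post): accumulate the frames in double-precision floating point and quantize back to 16-bit unsigned integers only at the very end.

```python
import numpy as np

def average_frames(frames):
    """Average a stack of uint16 frames in float64,
    converting back to uint16 only after the mean is taken."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for f in frames:
        acc += f.astype(np.float64)
    mean = acc / len(frames)
    return np.clip(np.rint(mean), 0, 65535).astype(np.uint16)

# Reading the TIFFs is left to your library of choice, e.g. (hypothetical):
# frames = [tifffile.imread(f"frame_{i:02d}.tif") for i in range(16)]
```

Doing the sum in float64 sidesteps both the quantization and the overflow you’d get by accumulating in 16-bit integers.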
The Matlab program has an advantage over the Photoshop Smart Object approach: the averaging is done in double-precision floating point, and only converted to 16-bit unsigned integer representation at the end. Since the averaging is done in a linear (gamma = 1) RGB color space, 16 bits is not enough precision for the averaging calculation, or even to represent a single image. I don’t know what precision Photoshop uses for its Mean function.
I converted the images from raw to TIFF in Lightroom, with all noise reduction turned off. I created averaged images from 2, 4, 8, and 16 exposures with Matlab. I brought them into Photoshop and stacked them as layers with one of the original images, put an Exposure adjustment layer on top, and set it to +4.0 EV. I cropped the image very tightly. Here are the resultant images, as a set of 2x JPEGs.
One image:
Two images averaged:
Four images averaged:
Eight images averaged:
16 images averaged:
The image is not very noisy to begin with. The two-image average is better, and the four-image average is better still, but not by much. I can’t see any improvement in the 8-image average over the four, and hardly any in the 16-image average. If I’d started with a four-thirds image, the differences would have been more striking because there would have been more noise to begin with.
Is this any better than compositing a bunch of different exposures in some HDR program? Maybe not, but it has one theoretical advantage: there is no tone-mapping step, with its possibilities for creating unnatural results.
Sloan says
I think it might be possible to get by without the tripod by using the align tool in hugin
http://hugin.sourceforge.net/docs/manual/Align_image_stack.html
I’ve had reasonable luck parallelizing it by keying all the subsequent images off of a single key image. That way everything aligns to one image and you get a stack of aligned TIFFs that you can push through your matlab script.
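For what it’s worth, align_image_stack’s -a flag sets the output prefix for the aligned TIFFs. A minimal sketch of driving it from Python (the filenames here are hypothetical) might look like:

```python
import subprocess

# Hypothetical input filenames; align_image_stack ships with Hugin.
files = [f"IMG_{i:02d}.tif" for i in range(16)]
cmd = ["align_image_stack", "-a", "aligned_", *files]
# Uncomment once Hugin's command-line tools are installed:
# subprocess.run(cmd, check=True)  # writes aligned_0000.tif, aligned_0001.tif, ...
```

The aligned TIFFs can then be fed straight into the averaging step.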