Because CCD and CMOS imaging chips are virtually linear devices within their useful range, and because the analog-to-digital converters that digitize their outputs are also linear, the signal-to-noise ratio is low in the darker regions of the tone curve (the left side of the histogram). By exposing as far to the right as we can without clipping any highlights, we achieve the highest signal-to-noise ratio possible in the final image. That means the least possible noise, visible artifacts, color error, and banding.
Here are some details:
On a pixel-for-pixel basis, sensor technology being equal, the signal-to-noise ratio (SNR) is proportional to the square root of the photosite area. Since area is proportional to the square of the pixel pitch, the SNR is proportional to the pitch itself.
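As a rough sketch of why this holds: in a shot-noise-limited model, the noise is the square root of the collected signal, so SNR = signal / sqrt(signal) = sqrt(signal), and signal grows with area. The photon flux and pitch values below are arbitrary illustrations, not measurements of any real sensor.

```python
import math

def shot_noise_snr(photons_per_um2, pitch_um):
    """SNR of one photosite, assuming shot noise only.

    Shot noise is Poisson, so noise = sqrt(signal) and
    SNR = signal / sqrt(signal) = sqrt(signal).
    Signal scales with area (pitch squared), so SNR
    scales linearly with pitch.
    """
    signal = photons_per_um2 * pitch_um ** 2  # photoelectrons collected
    return math.sqrt(signal)

# Doubling the pitch quadruples the area and doubles the SNR:
snr_small = shot_noise_snr(1000, 4.0)  # hypothetical 4 um pitch
snr_large = shot_noise_snr(1000, 8.0)  # hypothetical 8 um pitch
print(f"{snr_large / snr_small:.2f}")  # ratio of the two SNRs
```

Real sensors add read noise and dark current on top of shot noise, so this is only the asymptotic behavior at reasonable signal levels.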
It’s not really fair to compare noise in cameras with wildly varying resolution. Fortunately, we don’t have to. Consider two cameras using full frame sensors with the same technology, one 40 megapixels and the other 10 megapixels. The sensor with more pixels will have half the SNR of the sensor with fewer, if the measurements are performed under the same conditions. If we res the 40-megapixel image down to 10 megapixels, to a first approximation it will have the same SNR as the 10-megapixel camera. So, technology and output resolution held constant, the SNR of a camera is proportional to the linear dimensions (length, width, or diagonal – your choice, if the aspect ratio is the same) of its sensor.
So, if a full frame camera, measured under a standard set of conditions with its output file res’ed to a certain resolution, has an SNR of x, an APS-C camera will have an SNR of 0.7x. A micro four-thirds camera will have an SNR of half of x. A Leica D-Lux 4 will have an SNR of one quarter x. An iPhone will have an SNR of one-eighth of x. Going the other way, a medium format camera will have an SNR of somewhat less than 2x, and a 4×5 scanning back will have an SNR of a little less than 4x.
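Since SNR scales with the sensor's linear dimensions, the relative SNR is just the reciprocal of the crop factor. The crop factors below are approximate, illustrative values (the small-sensor ones especially are rough assumptions):

```python
# Relative SNR vs. full frame, using SNR proportional to linear
# sensor size, i.e. relative SNR = 1 / crop factor.
# Crop factors are approximate, for illustration only.
crop_factors = {
    "full frame": 1.0,
    "APS-C": 1.5,
    "micro four-thirds": 2.0,
    "small-sensor compact": 4.0,
    "phone camera": 8.0,
}

for name, crop in crop_factors.items():
    print(f"{name}: {1.0 / crop:.2f}x")
```

Running this reproduces the figures in the paragraph above: about 0.67x for APS-C, 0.5x for micro four-thirds, and so on down to roughly an eighth for a phone-sized sensor.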
Can we quantify the SNR effects of ETTR? Indeed we can. Let’s take an image that’s exposed perfectly to the right. No clipping or blown highlights, but information in the very top histogram bucket. Let’s pick a pixel group in that image, and measure its SNR. Let’s say it measures y. If we underexpose one stop from the perfect ETTR image, that pixel group will have an SNR of 0.7y. If we’re two stops under, the SNR is half of y. Four stops under, and it’s a quarter of y.
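The stop-by-stop numbers above follow directly from the shot-noise model: each stop of underexposure halves the captured photons, and SNR goes as the square root of the photon count, so each stop costs a factor of sqrt(2). A minimal sketch:

```python
import math

def relative_snr(stops_under):
    """Relative SNR after underexposing by `stops_under` stops
    from a perfect ETTR exposure (shot-noise-limited model).

    Each stop halves the photons; SNR goes as sqrt(photons),
    so each stop costs a factor of sqrt(2) ~ 0.71.
    """
    return math.sqrt(0.5 ** stops_under)

for stops in range(5):
    print(f"{stops} stops under: {relative_snr(stops):.2f}y")
```

This prints 1.00y, 0.71y, 0.50y, 0.35y, and 0.25y, matching the figures in the paragraph above.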
So, from a noise point of view, you can turn your full frame SLR into a micro four-thirds camera by underexposing by two stops, into a point-and-shoot by underexposing four stops, or into an iPhone by underexposing by six stops. Admittedly, I’m painting with a broad brush. The sensor technology in the iPhone is probably different from that in a D4 by more than just geometry. Still, it’s a useful way of looking at ETTR.
Next: Normal in-camera histograms
AS says
Isn’t there a ‘shoulder’ to the characteristic curve of the imaging pipeline? You state that the sensors are virtually linear within their useful range. In my mind, I define the ‘useful range’ as the linear part, and consider that the non-linear shoulder needs to be avoided to some extent. To get practical, the question of “how far to the right?” seems an open question, dependent on the characteristic curve, and requires another kind of experimentation. Shoulder compression of highlights into a decreased range of values has its own hazards, especially if one is interested in working creatively with highlight data. My own strategy is to ETTR while leaving a low and graceful tail in the part that I think is non-linear.
Jim says
I have not seen material shoulder compression on any camera I’ve tested, when examining the raw files. You can see it sometimes on raw sensors, but in my experience the camera designers don’t use that part of the photon transfer curve. You can, of course, get the shoulder back by pushing in Lightroom.
Jim
David Ritch says
The shoulder is a characteristic of film, not electronic sensors. In order to simulate the look of film, imaging pipelines introduce an S-shaped curve to the luminosity response. In Lightroom, the S-curve is already built into the standard profiles.
Stephen says
I’m wondering whether, given the improvement in dynamic range and ISO performance of recent sensor designs, this recommendation is less useful, at least in certain circumstances.
In the case of the Sony a7rII, we have a 42.5 mp sensor with impressive DR and s/n performance. It is possible to recover shadows in post w/out significantly increasing grain/noise (vs Canon sensors). While exposing to the right always (I presume) increases s/n performance, I also want to make sure that pictures are sharp. The highest-resolution sensors in particular make sharpness more challenging, especially at longer focal lengths. With high-pixel-density, highly sensitive sensors, using higher shutter speeds improves sharpness more than exposing to the right improves dynamic range.
Tristan Chambers says
All semiconductors have a curve as you reach the saturation point. For me this means, as you get to the right, you get analog compression of the luminance values which results in jagged values when they are digitized. I’ve been experimenting with exposing TO THE LEFT with very desirable results. I find that though there is more noise in the shadows, the midtones have a buttery smoothness that I can’t get by exposing to the right. I shoot primarily black and white, so this is very important to me. Noise is not a problem in my book, especially since it has a dithering effect that fools our perception of quantization. You can read more about noise and quantization here:
https://theory.uchicago.edu/~ejm/pix/20d/tests/noise/noise-p3.html
Jim says
You are correct. Photodiodes saturate. However, in all of the testing I’ve done in the past two or three years, I’ve found that camera designers set the clipping point of the ADCs so that the region before clipping is quite linear.
Here’s the result of doing a photon transfer curve analysis of a Nikon D4:
http://blog.kasson.com/?p=8077
And here’s a Sony a7II, in 12-bit and 13-bit modes:
http://blog.kasson.com/?p=8586
Here’s the D810 in 12 and 14 bit modes:
http://blog.kasson.com/?p=8770
Note that you can’t take the average level of the input light to ADC fullscale, because photon, or shot, noise would then force the ADC to clip the upper tail of the Poisson distribution.
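To see why, here is a small simulation. If the mean signal sits exactly at ADC full scale, shot noise pushes roughly half the samples above the clipping point; the exposure must be backed off by a few standard deviations (which grow as sqrt of the mean) to avoid that. The full-scale value below assumes a hypothetical 14-bit ADC, and the Gaussian is only an approximation to the Poisson distribution, valid at large means:

```python
import random

random.seed(1)
full_scale = 16383        # hypothetical 14-bit ADC full scale
mean_signal = full_scale  # average light level placed AT full scale

# Approximate shot (Poisson) noise with a Gaussian of
# standard deviation sqrt(mean), valid for large means.
samples = [random.gauss(mean_signal, mean_signal ** 0.5)
           for _ in range(10_000)]
clipped = sum(1 for s in samples if s > full_scale)

# Roughly half the samples land above full scale and get clipped.
print(f"{100 * clipped / len(samples):.0f}% of samples clipped")
```

Backing the mean off by, say, four standard deviations (about 512 counts here) would leave essentially no clipped samples while giving up only a small fraction of a stop.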
For a more psychovisual look at the effects of Bayer CFA dither than Emil’s excellent work, see these:
http://blog.kasson.com/?p=11857
http://blog.kasson.com/?p=11872
Jim
DavidH says
Do you have any thoughts on this? Based on that, it seems to me that the dynamic range is actually greater when underexposing on the A7, since you can pull a lot out of the shadows but not the highlights.
https://www.flickr.com/photos/birnenbaumgarten/10968320504/
StarEaterIII says
It is simple:
for Canonians: just ETTR;
for [So]Nikonians: ETTL!