A few people have questioned why I’m going to all this trouble to find out how to approximate the real raw histogram. Why not just accept the extra safety margin that the camera gives with a normal JPEG preview image?
I dealt with some of that in an earlier post. Here’s a salient excerpt:
…if a full frame camera, measured under a standard set of conditions with its output file res’ed to a certain resolution has an SNR [signal-to-noise ratio] of x, an APS-C camera will have an SNR of 0.7x. A micro four-thirds camera will have an SNR of half of x. A Leica D-Lux 4 will have an SNR of one quarter x. An iPhone will have an SNR of one-eighth of x. Going the other way, a medium format camera will have an SNR of somewhat less than 2x, and a 4×5 scanning back will have an SNR of a little less than 4x.
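Those format-to-format ratios can be put in one place. This is just a sketch that tabulates the multipliers quoted in the excerpt (for the "less than" cases I use the stated bound); the full-frame baseline `x = 100` is a made-up reference value, not a measurement:

```python
# Relative SNR by sensor format, using the multipliers quoted above
# (full frame = 1.0; "somewhat less than 2x" and "a little less than 4x"
# are represented by their stated bounds).
FORMAT_SNR = {
    "4x5 scanning back": 4.0,
    "medium format": 2.0,
    "full frame": 1.0,
    "APS-C": 0.7,
    "micro four-thirds": 0.5,
    "Leica D-Lux 4": 0.25,
    "iPhone": 0.125,
}

x = 100.0  # hypothetical full-frame SNR under the standard test conditions
for fmt, mult in FORMAT_SNR.items():
    print(f"{fmt}: {mult * x:g}")
```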
Can we quantify the SNR effects of ETTR? Indeed we can. Let’s take an image that’s exposed perfectly to the right. No clipping or blown highlights, but information in the very top histogram bucket. Let’s pick a pixel group in that image, and measure its SNR. Let’s say it measures y. If we underexpose one stop from the perfect ETTR image, that pixel group will have an SNR of half of y. If we’re two stops under, the SNR is a quarter of y. Three stops under, and it’s an eighth.
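That per-stop falloff is simple enough to sketch in a few lines. This assumes, as stated above, that SNR halves with each stop of underexposure from a perfect ETTR exposure; `y = 100` is a hypothetical reference SNR, not a measured one:

```python
# SNR falloff with underexposure, per the halving-per-stop relationship above.
def snr_at(stops_under: float, ettr_snr: float) -> float:
    """SNR after underexposing by `stops_under` stops from a perfect ETTR exposure."""
    return ettr_snr / (2 ** stops_under)

y = 100.0  # hypothetical SNR of the perfectly-ETTR'd pixel group
for stops in range(4):
    print(f"{stops} stop(s) under: SNR = {snr_at(stops, y):g}")
```

Note that three stops under gives 1/8 of the ETTR value, which is the same factor that separates the full frame camera from the iPhone in the excerpt above.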
So, from a noise point of view, you can turn your full frame SLR into an iPhone by underexposing by three stops.
Think of it another way. Why pay for and lug around a camera with a big sensor if you’re not going to get the results that it can deliver?