While I was in New Zealand, we took a helicopter ride from Queenstown to Milford and back. I had an outside seat, and made some aerials of the beautiful mountains and valleys. I couldn’t use the M9 for two reasons. First, the buffer was too small. The second reason involved exposure. The only way to get an accurate exposure with the Leica is to take a picture, look at the histogram, adjust exposure, and make another picture. When scooting rapidly over scenery that’s partially covered by shifting clouds, there’s no time for this stratagem – you’d have moved away from the site of the first picture by the time you were ready to take the second.
Instead, I used the NEX-7, which has a feature that’s perfect for this kind of photography. You can tell the camera to put the histogram in the corner of the finder, so you can monitor your exposure as you’re taking pictures. I set the camera to shutter priority, set the shutter speed to 1/2000, set the ISO so I’d have an average f-stop of f/5.6, and used my thumb on the exposure compensation dial to keep the histogram pretty far to the right, but not so far that I lost detail in the highlights.
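The settings above amount to simple exposure arithmetic. Here's a rough sketch of how the pieces relate – the function names and the scene brightness are my own illustration, not anything from the camera:

```python
import math

def exposure_value(f_number, shutter_s):
    """EV for a given aperture and shutter speed (ISO-independent form)."""
    return math.log2(f_number ** 2 / shutter_s)

def iso_for_target(scene_ev100, f_number, shutter_s):
    """ISO at which f_number and shutter_s correctly expose a scene
    metered at scene_ev100 (its EV at ISO 100).

    Each stop of ISO buys a stop of aperture or shutter speed:
    EV(settings) = scene_ev100 + log2(ISO / 100).
    """
    needed_ev = exposure_value(f_number, shutter_s)
    return 100 * 2 ** (needed_ev - scene_ev100)

# Bright mountain scenery meters around EV 14-15 at ISO 100,
# so 1/2000 at an average of f/5.6 needs an ISO in this neighborhood:
iso = iso_for_target(scene_ev100=14.5, f_number=5.6, shutter_s=1 / 2000)
print(round(iso))  # somewhere around ISO 270
```

The exposure compensation dial then shifts the result by fractions of a stop without disturbing the shutter speed, which is why it's the right control for riding the histogram.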
It worked like a champ.
Rather than making me properly grateful to the engineers at Sony, the experience gave me an acute longing for a camera exposure mode where the camera performed the expose-to-the-right (ETTR) algorithm, rather than making me do it. The hardware is all there. It would be just a Small Matter of Programming (SMOP, to those with IT experience) to produce a firmware image that added this exposure method to those already in the camera.
If this type of exposure determination became common in cameras, getting a first cut at the right set of exposure controls in Lightroom could be automated. Think of the way that white balance works with raw files today. The camera doesn’t change the pixels based upon the white balance set by the user or the one that it automatically picks. Instead, it populates metadata fields with information for Lightroom or Camera Raw to use to adjust the white balance of the image. Once in LR, you can either accept the white balance in the metadata or override it with whatever suits your fancy.
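The white balance flow can be sketched like this – the field names are made up for illustration, and real raw formats differ in the details:

```python
# Illustrative sketch of the raw white-balance flow described above.
# Field names are invented; actual raw formats (DNG, ARW) are more involved.

def camera_writes_raw(sensor_pixels, as_shot_wb):
    """The camera leaves the pixels alone and records WB in metadata."""
    return {"pixels": sensor_pixels, "metadata": {"as_shot_wb": as_shot_wb}}

def converter_renders(raw_file, override_wb=None):
    """The raw converter applies the as-shot WB unless the user overrides it."""
    r_gain, b_gain = override_wb or raw_file["metadata"]["as_shot_wb"]
    return [(r * r_gain, g, b * b_gain) for r, g, b in raw_file["pixels"]]

raw = camera_writes_raw([(0.25, 0.5, 0.5)], as_shot_wb=(2.0, 1.5))
print(converter_renders(raw))                          # as-shot rendering
print(converter_renders(raw, override_wb=(1.5, 2.0)))  # user's override wins
```

The point is that the pixels in the file never change; only the recipe for interpreting them does.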
Similarly, the camera could set the exposure to whatever kept the highlights just off the right edge of the histogram, and at the same time compute – and write into the metadata – the correction necessary to produce the tonal values in the image that would have resulted from a standard exposure algorithm. Lightroom could look at those values and apply them by default, so you wouldn’t see the washed-out image that you usually get with ETTR. If you didn’t like the default, you could override it just like you do with white balance today.
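One way such a mode could work is sketched below. Everything here is my guess at an implementation – the names, the clipping threshold, and the metadata field are all invented, not anything from Sony's firmware:

```python
import math

def ettr_headroom(histogram, clip_fraction=0.001):
    """Stops the exposure can be pushed right before clipping highlights.

    `histogram` is a list of pixel counts over linear sensor levels, with
    the last bin at full scale. We find the brightest level that still
    holds a meaningful share of pixels and return its headroom in stops.
    """
    total = sum(histogram)
    seen = 0
    for level in range(len(histogram) - 1, -1, -1):
        seen += histogram[level]
        if seen >= clip_fraction * total:
            break
    brightest = (level + 1) / len(histogram)  # fraction of full scale
    return math.log2(1.0 / brightest)         # stops of headroom

def ettr_capture(histogram):
    """Shift the exposure right and record the pullback in metadata."""
    push = ettr_headroom(histogram)
    return {
        "exposure_comp_applied": push,              # camera exposes brighter
        "metadata": {"develop_correction": -push},  # raw converter pulls back
    }

# A scene whose brightest meaningful pixels sit at half of full scale
# has one stop of headroom, so the suggested develop correction is -1:
hist = [100] * 128 + [0] * 128
shot = ettr_capture(hist)
print(shot["metadata"]["develop_correction"])  # -1.0
```

The raw converter would then apply `develop_correction` by default, exactly the way it applies as-shot white balance, so the image opens looking normally exposed rather than washed out.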
Why don’t cameras work this way? DSLRs don’t because, live view aside, they don’t know what the histogram will be until after the exposure is made: the mirror is in the way. Low-end point-and-shoot cameras don’t because their users typically don’t shoot raw. However, I can’t think of any reason for high-end cameras with electronic viewfinders not to operate this way, except inertia.
Come on, Sony software writers! How about a beta version of the NEX-7 firmware with an ETTR mode? I’ll be a tester.
It’s 2020, and as far as I know we still don’t have ETTR. Camera makers would rather go out of business lamenting smartphone competition than implement something this simple.