Yesterday I posted about automatic ETTR and how it could improve the lives of photographers making raw files with electronic-viewfinder cameras. I said that it wouldn’t be appropriate for point-and-shoot cameras not making raw files. Upon consideration, I realize that I was wrong. The technique could enhance the quality of JPEGs from all cameras. You could argue that the ones with the smallest sensors need the most help. See here for an explanation.
Say the camera’s auto-exposure system computed the exposure based on ETTR. It could also calculate the exposure using one of its standard algorithms, and set that aside. It could make the exposure with the ETTR-derived aperture, ISO, and shutter speed. Then it could use the difference between that exposure and the conventional exposure to process the JPEG image so that the middle grays land where they should be. The result would be lower noise than making the exposure the conventional way, for all but the highest-contrast subjects.
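A minimal sketch of that pipeline, using toy numbers: the scene values, the baseline exposure, and the 1/3-stop safety margin are all illustrative assumptions, not anything a real camera firmware exposes.

```python
import numpy as np

# Toy linear scene values as the conventional meter would expose them
# (0..1 scale, 1.0 = sensor clipping): shadow, middle gray, brightest highlight.
scene = np.array([0.02, 0.18, 0.25])

conventional_stops = 0.0  # the standard algorithm's exposure (baseline)

# ETTR: push exposure until the brightest highlight sits just below clipping,
# keeping a small safety margin (1/3 stop here, an arbitrary choice).
headroom_stops = np.log2(1.0 / scene.max())
ettr_stops = conventional_stops + headroom_stops - 1 / 3

# Capture with the ETTR exposure: every photosite collects more light,
# which is what improves the shot-noise-limited signal-to-noise ratio.
gain = 2 ** (ettr_stops - conventional_stops)
captured = np.clip(scene * gain, 0.0, 1.0)

# JPEG processing: divide the same factor back out, so middle gray lands
# where the conventional metering would have put it.
restored = captured / gain
```

For this scene the highlight leaves two stops of headroom, so the camera would expose about 1.67 stops hotter than the conventional meter, then pull the tones back down digitally; the midtones end up where they started, but captured with more photons.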
The process would be completely transparent to the user, except that she’d get a higher-quality image.
Why don’t cameras work this way? Maybe some of them do; without looking at the metadata carefully, we’d never know.
Bryn Forbes says
I think some cameras do this in their non-standard modes. I recently had the opportunity to play with Jack Davis’ Olympus TG-1, a camera that has a magic mode called “drama” that is essentially a real-time (2-4 fps) tonemapping mode (you see it on the view screen). We tried comparing a 5D Mark III raw file of humpback whales under the water on auto vs. the JPEG coming out of the TG-1. Even with contrast and clarity at 200 we weren’t getting similar results. While I’m sure a lot of the difference comes from the tonemapping effect, I think they are optimizing the exposure in order to be able to maximize the effect in software.