In a Leica forum, a poster advanced the thesis that pushing in post increases contrast, and posted some images made with varying f-stops as evidence. I was worried about the effect of flare, which varies with f-stop in many lenses. And — you know me — I wanted something quantitative. I thought I’d do some testing. I took this target:
and pointed an M9 at it. I made an ETTR image at ISO 640. Then I made five more images, each one stop more underexposed than the previous one. I left the f-stop the same, and made the exposure changes by successively increasing the shutter speed. I started at 1/10 sec, so there wouldn’t be any double-exposure noise reduction taking place.
I brought all the images into Lightroom 5.2, white-balanced for Daylight, and applied an Exposure (PV2012) boost equal to the amount of underexposure: the ETTR image got no boost, the 1-stop-under image got a 1-stop boost, and so on up to a 5-stop boost for the 5-stop-under image. (I thought the max Exposure push in Lr was 4 stops — did they change that with Lr 5.2?)
I exported the images as layers into Photoshop. I set the eyedropper to a 101×101-pixel average and measured the CIELab values of the odd-numbered small patches in the Sekonic target. That gave me 12 patches, with six values per patch — one per exposure. In a perfect world, all six values for each patch would be the same. They weren’t, but they were close.
Here are luminance curves for all the patch values plotted against the average L* for that patch:
And here are the errors — defined here as the difference between the L* values for a given patch and the average value for that patch:
Except for the five-stop push, which seems to read somewhat high in the low tones, there are no systematic errors, and all the errors appear small. Don’t forget that Photoshop only reports integer L* values, so each reading carries up to half a unit of quantization error.
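For anyone who wants to replicate the arithmetic, the error calculation above — average the six readings for each patch, then subtract that average from each reading — can be sketched in a few lines of numpy. The L* values here are made up for illustration (a nominal gray ramp plus simulated measurement noise, rounded to integers the way Photoshop reports them), not my actual measurements:

```python
import numpy as np

# Illustrative data only: rows are the six exposures (ETTR, then
# 1-5 stops under, each pushed back in Lightroom); columns are the
# twelve measured patches. Real values would come from the eyedropper.
rng = np.random.default_rng(0)
true_L = np.linspace(95, 5, 12)                 # nominal patch lightness ramp
L_star = np.round(true_L + rng.normal(0, 0.5, size=(6, 12)))  # integer reads

# Average the six exposures for each patch.
patch_mean = L_star.mean(axis=0)                # shape (12,)

# Error, as defined in the text: each reading minus its patch average.
errors = L_star - patch_mean                    # shape (6, 12)

print(np.abs(errors).max())
```

By construction the errors for each patch sum to zero, so what the plot shows is purely the spread among the six exposures, not any absolute accuracy of the target.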