I haven’t posted anything on the sharpness testing for the past few days. It’s not because I’ve been busy with other things. On the contrary, I’ve been working way too hard on a problem I’ve discovered.
It all started when I looked critically at the results I was getting, and realized that, in some respects, they didn’t make sense. In particular, consider the aperture series in the last post. The results at the widest f-stops were too good to be true. I tried to convince myself that it was a focus shift issue, but something in the back of my mind kept telling me that probably wasn’t right.
I grew to suspect that my lighting compensation scheme wasn’t working. In particular, I suspected that I was overcompensating for light falloff, so that the wide aperture pictures, which suffered from vignetting, received unrealistic boosts in mean values and therefore in standard deviations. I decided to test it. I made two strobe-lit exposures under the same lighting conditions, differing only in f-stop: one at f/11 and one at f/16. I figured there’d be some small diffraction differences, but that would be a second-order effect compared to what I was looking for. I brought both images into Lightroom and exported them as TIFFs. When I read them into Matlab and converted them to a linear representation, I noticed that the mean values weren’t a factor of two apart, as you’d expect. The ratio was 1.78. Not only that, the ratio of the standard deviations of the images, as measured in Matlab, was 1.18. What was going on? Maybe the exposure was off, but the ratio of the means ought to equal the ratio of the standard deviations no matter what the difference in exposure was.
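The invariance I was relying on is just linearity: a one-stop exposure difference in linear space multiplies every pixel value by the same factor, so the mean and the standard deviation are both multiplied by that factor, and the two ratios have to agree. A quick numerical sketch of this (in Python rather than my Matlab code, with made-up pixel data):

```python
import numpy as np

rng = np.random.default_rng(0)

# A fake linear "image"; any distribution of pixel values will do.
img = rng.uniform(0.1, 0.9, size=(100, 100))

# One stop less exposure in linear space is a pure scale by 0.5.
img_half = 0.5 * img

mean_ratio = img.mean() / img_half.mean()
std_ratio = img.std() / img_half.std()

# Both ratios come out as exactly 2, whatever the underlying data.
print(mean_ratio, std_ratio)  # → 2.0 2.0
```

That's why mean ratios of 1.78 against standard deviation ratios of 1.18 couldn't be explained by an exposure error alone; something nonlinear had to be happening.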
I redid the test, with the same results. Then I brought the f/11 image into Lightroom and dialed in -1 EV of Exposure adjustment. Matlab reported the same odd ratios for that pair as for the pair that actually had two different exposures.
Well, then there had to be a programming error in my Matlab code, right? I went over it with a fine-tooth comb and tested it six ways from Sunday. It looked good, but I didn’t trust it anymore.
I fired up Rawdigger, and brought in the two images that were exposed a stop apart. The means were a factor of two apart, as were the standard deviations.
So the raw data looked as I would expect it to look, but the TIFFs from Lightroom looked wonky. I took the two raw files through Iridient Developer. I got different answers than with Lightroom – a ratio of means of 1.65, and a ratio of standard deviations of 1.33 – but they were still wrong and still not anywhere near the same. I tried stripping all the processing I could find out of Iridient Developer. I even used the raw channel mixer to base all interpolations only on the two raw green channels (which prevents cross-channel contamination). Every time I made a change, the mean and standard deviation ratios would change, but they never got to anywhere near the right values.
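My working theory for the wonky numbers: any tone curve that isn't a straight line through the origin destroys the scale invariance, and undoing a standard gamma afterwards doesn't recover it, because the gamma isn't the only nonlinearity the converter applied. A hedged illustration (Python, with an invented s-curve standing in for whatever Lightroom and Iridient actually do):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.uniform(0.1, 0.9, size=(100, 100))
img_half = 0.5 * img  # one stop down, still perfectly linear

def develop(x):
    # Invented stand-in for a converter's processing: a smoothstep
    # s-curve followed by a display gamma of 1/2.2.
    curved = 3 * x**2 - 2 * x**3
    return curved ** (1 / 2.2)

def naive_linearize(y):
    # Undo only the gamma, as my Matlab code was doing.  Even though
    # this inverts the gamma exactly, the (unknown) s-curve stays
    # baked into the data.
    return y ** 2.2

a = naive_linearize(develop(img))
b = naive_linearize(develop(img_half))

mean_ratio = a.mean() / b.mean()
std_ratio = a.std() / b.std()

# Neither ratio comes back as 2, and they no longer agree with each
# other -- exactly the symptom I was seeing.
print(mean_ratio, std_ratio)
```

The actual curves are different, so the numbers here don't match my measured 1.65 and 1.33; the point is only that a hidden nonlinearity produces this kind of disagreement.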
I exported one of the green channels of both images from Rawdigger as TIFFs. When I read them into Matlab, the means and standard deviations were right.
Was something in the demosaicing screwing up the values? I didn’t think so, but I needed a raw converter that didn’t do any hidden processing.
I used dcraw to convert the two images, invoking it from the command line with the arcane incantation “dcraw -v -4 -w -j -T -o1 _D437350.NEF”, and looked at the resulting sRGB TIFFs in Matlab. They were right.
I asked around on LuLa, but no one has been able to explain what’s going on with Lightroom. Therefore, I’m now exporting the test images from Lightroom as renamed versions of the original raw files, and I’ve rewritten my Matlab code to call dcraw for conversions.
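For anyone who wants to script the same conversion, here’s a minimal sketch (in Python rather than Matlab; the helper names are my own, and the flag glosses are my reading of dcraw’s usage text):

```python
import subprocess

def build_dcraw_cmd(raw_path):
    # Assemble the same dcraw invocation used above:
    #   -v    verbose messages
    #   -4    16-bit linear output (no gamma, no auto-brightening)
    #   -w    use the camera's recorded white balance
    #   -j    don't stretch or rotate raw pixels
    #   -T    write a TIFF instead of a PPM
    #   -o1   sRGB output color space
    return ["dcraw", "-v", "-4", "-w", "-j", "-T", "-o1", raw_path]

def convert(raw_path):
    # dcraw writes the TIFF alongside the raw file; this just runs it.
    subprocess.run(build_dcraw_cmd(raw_path), check=True)

print(" ".join(build_dcraw_cmd("_D437350.NEF")))
```

The key switch for this kind of measurement is -4: it keeps the output linear, so mean and standard deviation ratios behave the way the physics says they should.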
All this makes a difference in the results. More soon.