I haven’t posted anything on the sharpness testing for the past few days. It’s not because I’ve been busy with other things. On the contrary, I’ve been working way too hard on a problem I’ve discovered.
It all started when I looked critically at the results I was getting, and realized that, in some respects, they didn’t make sense. In particular, consider the aperture series in the last post. The results at the widest f-stops were too good to be true. I tried to convince myself that it was a focus shift issue, but something in the back of my mind kept telling me that probably wasn’t right.
I grew to suspect that my lighting compensation scheme wasn’t working. In particular, I suspected that I was overcompensating for light falloff, so that the wide aperture pictures, which suffered from vignetting, received unrealistic boosts in mean values and therefore in standard deviations. I decided to test it. I made two strobe-lit exposures under the same lighting conditions, differing only in f-stop. One was f/11 and one was f/16. I figured there’d be some small diffraction differences, but that would be a second-order effect compared to what I was looking for. I brought both images into Lightroom and exported them as TIFFs. When I read them into Matlab and converted them to a linear representation, I noticed that the mean values weren’t a factor of two apart, as you’d expect. The ratio was 1.78. Not only that, the ratio of the standard deviations of the images, as measured in Matlab, was 1.18. What was going on? Maybe the exposure was off, but the ratio of the means ought to be the same as the ratio of the standard deviations no matter what the difference in exposure was.
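For the record, here’s a minimal Matlab sketch of the kind of check I’m describing. The file names are placeholders, and the gamma-2.2 decode is just my assumption about how the exported TIFFs are encoded:

% Read the two TIFFs exported from Lightroom (file names are placeholders)
imA = double(imread('f11.tif'));    % f/11 exposure
imB = double(imread('f16.tif'));    % f/16 exposure, nominally one stop darker

% Convert to a linear representation; this assumes 16-bit TIFFs with a simple
% gamma-2.2 encoding, which is only an approximation of what the converter does
linA = (imA / 65535) .^ 2.2;
linB = (imB / 65535) .^ 2.2;

% If the only difference is one stop of exposure, both ratios should be about 2
meanRatio = mean(linA(:)) / mean(linB(:))
stdRatio  = std(linA(:))  / std(linB(:))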
I redid the test, with the same results. Then I brought the f/11 image into Lightroom and dialed in -1 EV of Exposure adjustment. Matlab read those two images the same way as the two images that actually had two different exposures.
Well, then there had to be a programming error in my Matlab code, right? I went over it with a fine-tooth comb and tested it six ways from Sunday. It looked good, but I didn’t trust it anymore.
I fired up RawDigger and brought in the two images that were exposed a stop apart. The means were a factor of two apart, as were the standard deviations.
So the raw data looked as I would expect it to look, but the TIFFs from Lightroom looked wonky. I took the two raw files through Iridient Developer. I got different answers than with Lightroom – a means ratio of 1.65, and a standard deviations ratio of 1.33 – but they were still wrong and still not anywhere near the same as each other. I tried stripping all the processing I could find out of Iridient Developer. I even used the raw channel mixer to base all interpolations only on the two raw green channels (which prevents cross-channel contamination). Every time I made a change, the means and standard deviations ratios would change, but they never got anywhere near the right values.
I exported one of the green channels of each image from RawDigger as a TIFF. When I read them into Matlab, the means and standard deviations were right.
Was something in the demosaicing screwing up the values? I didn’t think so, but I needed a raw converter that didn’t do any hidden processing.
I used dcraw to convert the two images, invoking it from the command line with the arcane command “dcraw -v -4 -w -j -T -o1 _D437350.NEF”, and looked at the sRGB TIFFs in Matlab. They were right.
I asked around on LuLa, but no one has explained what’s going on with Lightroom. Therefore, I’m now exporting the test images from Lightroom as renamed versions of the original raw files, and I’ve rewritten my Matlab code to call DCRAW for conversions.
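For those who want to follow along, a rough sketch of the Matlab-to-dcraw hookup looks something like this; the file name is a placeholder, and the flags are the same ones as in the command line above:

% Convert a renamed raw file with dcraw, then read the resulting TIFF
% (file name is a placeholder; dcraw writes its output with a .tiff extension)
rawFile = '_D437350.NEF';
system(['dcraw -v -4 -w -j -T -o1 ' rawFile]);
img = double(imread(strrep(rawFile, '.NEF', '.tiff')));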
All this makes a difference in the results. More soon.
Iliah Borg says
What is going on is most probably a default (hidden) S-shaped tone curve, which is not exactly gamma 2.2. If so, pairs of images taken a stop apart will render a different ratio between them depending on the exposure of the first: say, 1/64 power to 1/32 should show a different ratio from 1/32 to 1/16.
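For a toy illustration of this in Matlab (the S-shaped curve below is made up; it just stands in for whatever curve the converter actually applies):

% A one-stop pair at two different exposure levels
lowPair  = [1/64, 1/32];                 % darker pair, one stop apart
highPair = [1/4, 1/2];                   % brighter pair, one stop apart

gammaEnc = @(x) x .^ (1/2.2);            % pure gamma-2.2 encoding
sCurve   = @(x) 3*gammaEnc(x).^2 - 2*gammaEnc(x).^3;  % made-up S-shaped tone curve
decode   = @(y) y .^ 2.2;                % decoding as if it were pure gamma 2.2

% With pure gamma encoding, both pairs decode back to exactly 2:1
ratioGammaLow  = decode(gammaEnc(lowPair(2)))  / decode(gammaEnc(lowPair(1)))
ratioGammaHigh = decode(gammaEnc(highPair(2))) / decode(gammaEnc(highPair(1)))

% With the S-curve, the decoded ratios are neither 2 nor equal to each other
ratioSLow  = decode(sCurve(lowPair(2)))  / decode(sCurve(lowPair(1)))
ratioSHigh = decode(sCurve(highPair(2))) / decode(sCurve(highPair(1)))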
Jim says
You’re right. Thanks to others, I now know how to get close to photometric results from both Lightroom PV 2010 and Iridient Developer. I will post the recipes soon.
Jim
Iliah Borg says
Frankly, I would stick to tools for such experiments, not to full-blown raw converters. Those converters tend to do something under the hood, and it takes time to figure out; also changes between versions may interfere.
But of course it is interesting to compare results from the tools to the results from the raw converters.
Jim says
Iliah, you’re right that there’s a lot going on under the hood of the commercial products that makes it hard to sort things out sometimes. The reason I was trying to use the common raw converters was to make it easier for others to duplicate my results. I realize that that’s somewhat misguided in this case, since the viewer would need some kind of programming skills to duplicate my Matlab work.
There was a benefit for me to using DCRAW; I’d never used it before, and now it’s part of my bag of tricks.
Thanks for your support on this and other projects,
jim
Iliah Borg says
IMHO, interfacing Matlab to libraw and libgphoto may offer some additional convenience for this kind of experimentation. As you are already calling dcraw from Matlab, maybe the first step is to add gphoto2 to control the cameras remotely.
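A very rough sketch of what that might look like from Matlab, assuming the gphoto2 command-line tool is installed and the camera is supported (the file name is a placeholder):

% Capture an image remotely with gphoto2, then convert it with dcraw
% (file name is a placeholder)
system('gphoto2 --capture-image-and-download --filename shot.nef');
system('dcraw -v -4 -w -j -T -o1 shot.nef');
img = double(imread('shot.tiff'));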