While working with the firehouse pictures, I’ve come to the conclusion that some of them – maybe even most of them – aren’t as sharp as I’d like them to be, even with flat subjects. I’m not sure what the problem is, or even that there’s only one problem, but my current suspects are:
- Mirror slap
- Shutter vibration
- Focusing accuracy
I don’t think that overall lens quality is the issue, since I’m seeing the effects with two lenses of good reputation (I haven’t tested either). Both are macro lenses, and are being used away from the ends of their focusing range, so that’s probably not the problem.
I’ve come to the conclusion that I need to do some testing. I need to create a target, make images of it under controlled circumstances that are similar to those I find in the firehouse, and analyze the results. There are enough variables here that I’m worried that a simple visual analysis, while necessary to tie the target images back to the firehouse ones, won’t be sufficient. I want numbers. With numbers, I’ll be able to sort out statistical variations, which affect all the variables except diffraction. I’ll also be able to find the places where the differences really matter.
Here’s a method that might work.
- Create a target image, to be printed out at 13.33×20 inches (4800×7200 pixels at 360 ppi, or 9600×14400 pixels at 720 ppi), which matches the aspect ratio of the full-frame cameras I’m using on this project, and is representative of the size of the subject area that I’m including in the firehouse pictures. I’m thinking that the target should probably be binary, that is, have only pure black and pure white values.
- Photograph the target, making changes in shutter speed, live view use, mirror lockup, lens choice, aperture, focusing method, etc.
- Bring the images into Lightroom and give them the same basic raw development I’m giving the actual photographs.
- Export them from Lightroom as (probably monochromatic) TIFFs. Bring the images into Matlab, subtract out low frequency data that could be caused by lens coverage issues or uneven lighting, and compute the statistics of what’s left.
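The last step above can be sketched in code. This is shown in Python/NumPy rather than Matlab (the operations translate directly), and the moving-average low-pass, its width, and the edge trim are all placeholder choices — any filter wide enough to pass only lighting falloff and lens-coverage shading would do:

```python
import numpy as np

def box_blur(img, k):
    """Separable k-wide moving average (k odd); zero-padded at the borders."""
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda v: np.convolve(v, kern, mode='same'), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, kern, mode='same'), 0, out)

def residual_stats(img, k=101):
    """Subtract the low-frequency component, return stats of what's left.

    img: 2-D float array from the exported grayscale TIFF.
    k:   low-pass width in pixels -- wide enough to absorb target detail
         while preserving slow shading (a placeholder value, to be tuned).
    """
    img = np.asarray(img, dtype=np.float64)
    resid = img - box_blur(img, k)       # high-frequency residual
    core = resid[k:-k, k:-k]             # drop zero-padding artifacts at the edges
    return {"std": core.std(), "mean": core.mean(),
            "min": core.min(), "max": core.max()}
```

The residual mean should come out near zero; the residual standard deviation is the proposed sharpness number.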
I would expect the standard deviation to be the most useful measure of unsharpness. To the degree that the image is perfectly sharp, all the pixels will have one of two values, and the standard deviation will be at its maximum. Unsharpness from any of the causes above will push pixels toward intermediate values, thus lowering the standard deviation.
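To put a number on that: a 50/50 binary image with values 0 and 1 has a standard deviation of 0.5, the largest possible for data confined to [0, 1], and any blurring pulls pixels toward the local mean and drags the number down. A quick simulation (Python/NumPy, with a moving-average blur standing in for optical unsharpness — the pitch and kernel sizes are arbitrary):

```python
import numpy as np

def box_blur(img, k):
    """Separable k-wide moving average (k odd), a crude stand-in for unsharpness."""
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda v: np.convolve(v, kern, mode='same'), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, kern, mode='same'), 0, out)

# Period-16 binary checkerboard: half zeros, half ones, so std is exactly 0.5.
n, p = 256, 16
i, j = np.indices((n, n))
target = ((i // p + j // p) % 2).astype(float)

# std starts at the binary maximum of 0.5 and falls as the blur grows.
stds = [box_blur(target, k)[32:-32, 32:-32].std() for k in (1, 3, 7, 15)]
print([round(s, 3) for s in stds])
```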
Sound like a plan? It does to me.
Now, what should the test image look like? One possibility is a simple checkerboard. There are a couple of problems with that. First, the pattern will beat with the Bayer array in the camera, producing false color and moiré effects. Those may or may not adversely affect the statistics, so that’s not a fatal flaw. Also, we need to remember that any target capable of telling whether the camera is producing the sharpest image possible will have high enough spatial frequencies to show Bayer sampling error and aliasing, at least in those cameras without optical low-pass filters. The big problem with a checkerboard is that it won’t work well over a large range of sharpness. If the resolution of the camera in the test configuration is well below the pitch of the checkerboard, the standard deviation will be very low, and small changes in sharpness won’t result in appreciable differences in the numbers. Conversely, if the resolution of the camera in the test configuration is well above the pitch of the checkerboard, the standard deviation will be very high, and small changes in sharpness won’t result in appreciable differences in the numbers.
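That saturation at both ends is easy to demonstrate numerically. In the simulation below (Python/NumPy; the pitches and blur widths are arbitrary choices), the fine-pitch board collapses to a near-zero standard deviation once the blur exceeds its pitch and then barely responds to further blurring, while the coarse-pitch board sits at the 0.5 maximum until the blur becomes substantial:

```python
import numpy as np

def box_blur(img, k):
    """Separable k-wide moving average (k odd), standing in for unsharpness."""
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda v: np.convolve(v, kern, mode='same'), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, kern, mode='same'), 0, out)

def checkerboard(n, pitch):
    i, j = np.indices((n, n))
    return ((i // pitch + j // pitch) % 2).astype(float)

results = {}
for pitch in (4, 32):                       # fine and coarse checkerboards
    results[pitch] = [box_blur(checkerboard(256, pitch), k)[40:-40, 40:-40].std()
                      for k in (1, 9, 31)]  # increasing simulated unsharpness
    print(pitch, [round(s, 3) for s in results[pitch]])
```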
Maybe a target with checkerboard patches at several different pitches, with the statistics averaged over all the patches? That would be better, but the results might be sensitive to alignment. That could be dealt with by making sure that the entire target is in the frame when the test images are created.
A stochastic target is another possibility. That should reduce visible moiré. Whether it would affect the measurements is less clear.
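Generating a stochastic binary target at the 360 ppi size from the list above (4800×7200 pixels) is trivial. Here’s a sketch in Python/NumPy; the 4-pixel dot size is an arbitrary choice to keep the finest detail comfortably above the printer’s addressable dot, and writing the array out as a TIFF (with PIL or imageio, say) is left out:

```python
import numpy as np

rng = np.random.default_rng(2024)   # fixed seed so the target is reproducible
dots = rng.random((1200, 1800)) < 0.5                     # 50/50 black/white dots
# Each random dot becomes a 4x4 pixel block in the printed file.
target = np.repeat(np.repeat(dots, 4, axis=0), 4, axis=1).astype(np.uint8) * 255
```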
I think I’ll try the multi-pitch checkerboard and see what happens.
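A multi-pitch target at the 360 ppi size could be built in a few lines. This sketch (Python/NumPy) lays out one horizontal band per pitch; the particular pitches and the band layout are arbitrary choices, not a finished design:

```python
import numpy as np

def checkerboard(shape, pitch):
    i, j = np.indices(shape)
    return ((i // pitch + j // pitch) % 2).astype(np.uint8)

pitches = [64, 32, 16, 8, 4]   # coarse to fine, in printed pixels (arbitrary)
band = 960                     # rows per band; 5 bands -> 4800 rows at 360 ppi
target = np.vstack([checkerboard((band, 7200), p) for p in pitches]) * 255
```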