When I was working at Hewlett-Packard in the early 70s, I remember walking through optical labs and noticing the benches the engineers used to set up experiments. The most striking thing was the top: a huge slab of four- or six-inch-thick dark-gray granite, pockmarked with holes on a regular grid. The bench had many qualities important to the experimenters — resistance to thermal expansion, long-term dimensional stability, rigidity, flatness, hardness — but one relates to photography: vibration control.
I thought of the optical bench when I was considering the problem of stabilizing a camera during the exposure. That line of thinking was, as those who have been following this blog know, triggered by my problems getting sharp images with the Sony a7R, and, to a lesser extent, with the Nikon D800E. The vibration problems solved by the optical bench and the tripod/head/QR clamp/QR plate are related, but far from identical. The lasers, mirrors, lenses, and instrumentation on an optical bench are not usually the predominant source of vibratory motion. A lot of the vibration that needs to be dealt with comes from the environment in which the bench finds itself. The main source of vibration a photographer in a windless environment has to deal with is within the camera. In fact, a common photographic assumption, which may or may not be true, is that whatever surface the tripod rests on does not move with respect to the subject.
You could say that the distances involved are far different. You’d be right, but maybe not by as much as you think. People working with optical benches think in terms of fractions of the wavelength of the light they’re using. Consider that the pixel pitch of the a7R or the D800E is 6 or 7 times the wavelength of red light. When Lloyd Chambers talks about 1/5 of a pixel of movement, he’s talking about a little over one wavelength.
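As a quick back-of-the-envelope check — assuming a pitch of about 4.9 micrometers (36 mm across 7360 pixels, which fits these cameras) and taking 0.75 micrometers as a deep red wavelength, both figures my assumptions rather than anything from this post:

```python
# Rough check of pixel pitch vs. the wavelength of red light.
# Assumed values: 36 mm sensor width over 7360 horizontal pixels
# (a7R/D800E class), and 0.75 um for deep red light.
pixel_pitch_um = 36_000 / 7360              # ~4.9 um
red_wavelength_um = 0.75

print(f"pitch = {pixel_pitch_um / red_wavelength_um:.1f} wavelengths")  # ~6.5
fifth_pixel_um = pixel_pitch_um / 5
print(f"1/5 pixel = {fifth_pixel_um:.2f} um, "
      f"or {fifth_pixel_um / red_wavelength_um:.1f} wavelengths")       # ~1.3
```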
Regardless of the differences between the job of an optical bench and a camera support system, thinking about how an optical bench does its job might be useful background to any experimentation with camera vibration control.
I did a little web research, and found that things have changed a lot in forty years. Granite is no longer the preferred surface material, having been replaced by sandwiches with stainless steel “bread” and honeycomb “filling”. Think of an airplane skin, or a Hexcel ski. The tables are mounted on “legs” that absorb vibration. There is a lot of information on environmental vibration and table design on the web. I commend two papers to you: a reasonably quick read by someone who works for a table manufacturer and a deeper dive that’s still comprehensible to people who aren’t mechanical engineers.
Here’s what’s similar between the jobs of the optical bench and the camera support.
Resonance control. The table, and the camera support, should not introduce its own resonances.
Damping. The table, and the camera support, should damp any vibration introduced by the equipment mounted on it, and it should do that with the minimum motion. If the table, or the camera support, does resonate at frequencies excited by the mounted equipment, it should damp those resonances as well.
And here’s what’s different:
Isolation. The optical table needs to be isolated from its environment. If you think about it, this isolation can’t – and needn’t – take place at all frequencies. If the surface on which the optical table rests rises over a period of several days, there is no passive mechanism that can compensate for that, nor is there any need to. Extending this thinking to higher frequencies, the isolation of the table needs to exclude frequencies for which the table surface is not rigid.
This differentiation of the effect of various frequencies extends to the tripod-mounted camera in the following way. Think of the subject of the photograph and the tripod-mounted camera as being mounted to the same base. Any vibration of the base should raise or lower the subject and the camera identically, and should not change the direction in which the camera is pointing with respect to the subject.
Some math makes for some sobering conclusions.
It’s not so bad if we only look at displacement. Assume that the speed of sound in the surface on which the camera and the subject rest is 2500 ft/sec, and that the frequency of the vibration is 10 Hz. That makes the wavelength 250 feet, so if the camera and the subject are within 5 feet of each other, the phase angle between them is (5/250)*360, or 7.2 degrees. The maximum differential displacement of the camera and the subject is therefore the peak amplitude of the vibration times the sine of 7.2 degrees, or about 1/8 of the peak amplitude. Typical building vibrational amplitudes run from 0.01 inches down to 10 micro-inches. The vibration’s effect at the sensor is scaled down by the magnification: if the image on the sensor is a tenth the size of the actual subject, then the maximum displacement at the sensor runs from about 100 micro-inches down to about 100 nano-inches. Converting to metric, that’s 2.5 micrometers down to 2.5 nanometers. Thus the worst-case error is about half the sensor pitch of the D800E or a7R. If your exposure is a quarter of the vibration’s period or longer — 1/40 second with our 10 Hz example — you may see all of the vibrational image shifting calculated above. If it’s half the period or longer — 1/20 second in our example — you’re likely to see twice those numbers.
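Here’s that arithmetic in code form — a minimal sketch using the numbers above, with nothing added beyond the inch-to-micrometer conversion (25,400 micrometers per inch):

```python
import math

# The displacement calculation above: 2500 ft/s propagation speed,
# 10 Hz vibration, camera and subject 5 ft apart, 1:10 magnification.
speed_ft_s = 2500.0
freq_hz = 10.0
separation_ft = 5.0
magnification = 0.1

wavelength_ft = speed_ft_s / freq_hz                 # 250 ft
phase_deg = separation_ft / wavelength_ft * 360.0    # 7.2 degrees
fraction = math.sin(math.radians(phase_deg))         # ~1/8

# Typical building vibration amplitudes, per the text.
for amplitude_in in (0.01, 10e-6):
    subject_in = amplitude_in * fraction             # differential displacement
    sensor_um = subject_in * magnification * 25_400  # inches -> micrometers
    print(f"{amplitude_in:g} in amplitude -> {sensor_um:.3g} um at the sensor")
# Unrounded, the big case gives ~3.2 um; the text rounds the
# 125 micro-inch figure to "about 100" before converting.
```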
However, the tripod legs don’t rest on the same point on the floor, and moving one leg up while another moves down will cause the camera to point at a different place on the subject. Let’s assume the same 10 Hz vibration as above, and assume the camera stays pointed parallel to the floor underneath it. The worst-case upward or downward tilt is then the maximum slope of the floor wave: 2 times pi times the peak amplitude of the vibration divided by its wavelength, in radians. This means that our 0.01-inch-amplitude, 250-foot-wavelength — which is 3000 inches — vibration has a worst-case tilt of plus and minus 21 micro-radians, or about 0.0012 degrees. With the subject 5 feet — 60 inches — away, this translates to plus and minus 0.0013 inches on the subject, and a tenth of that, 130 micro-inches or about 3.3 micrometers, on the sensor. Double this to get the peak-to-peak variation. This is more than a pixel on the two 36-megapixel cameras we’re discussing, and larger than the displacement error. As above, you can cut these numbers in half with a shutter speed faster than 1/40 second. [I’d appreciate it if any interested people would check my math.]
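And the same in code — a sketch of the tilt model, treating the floor as a traveling sine wave whose maximum slope is 2πA/λ radians:

```python
import math

# Tilt of a camera riding the local slope of a floor wave
# y = A * sin(2*pi*x/wavelength); maximum slope is 2*pi*A/wavelength.
amplitude_in = 0.01
wavelength_in = 3000.0        # 250 ft
subject_dist_in = 60.0        # 5 ft
magnification = 0.1

tilt_rad = 2 * math.pi * amplitude_in / wavelength_in  # ~21 micro-radians
subject_in = tilt_rad * subject_dist_in                # ~0.0013 in
sensor_um = subject_in * magnification * 25_400        # ~3.3 um, ~6.6 um p-p

print(f"tilt: +/- {tilt_rad * 1e6:.0f} micro-radians "
      f"({math.degrees(tilt_rad):.4f} degrees)")
print(f"shift: {subject_in:.4f} in at subject, {sensor_um:.1f} um at sensor")
```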
Another way to come at this is to look at the vibration criteria that engineers have developed to figure out how little vibration you need to make certain kinds of measurements. Here’s a good paper on the subject. Take a look at the table at the top of page three, and note that it says that, for details of 3 micrometers — a little smaller than our sensor pitch — you need to keep vibration velocities under 1000 micro-inches per second.
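To relate a velocity criterion like that to the displacement numbers above: for sinusoidal motion, peak displacement is peak velocity divided by 2πf. A quick sketch, using our 10 Hz example frequency (my choice, not the paper’s):

```python
import math

# Convert a sinusoidal velocity criterion to peak displacement:
# x_peak = v_peak / (2 * pi * f).
v_peak_uin_s = 1000.0   # 1000 micro-inches/second, from the paper's table
freq_hz = 10.0          # our example frequency

x_peak_uin = v_peak_uin_s / (2 * math.pi * freq_hz)
print(f"peak displacement at {freq_hz:g} Hz: {x_peak_uin:.0f} micro-inches "
      f"({x_peak_uin * 0.0254:.2f} micrometers)")   # ~16 micro-inches, ~0.4 um
```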
The take-home lesson here is that, now that our cameras have such great resolving power, there are fundamental vibrational limitations that may prevent our using all the capabilities they have. We may usually operate in environments that have well under a hundredth of an inch of peak vibration. On the other hand, distant trains or subways, motor vehicle traffic, swaying buildings, etc. can generate even higher amplitudes than that.
Now that we’ve discussed the things we can’t control even with a 2-ton tripod and a milled-from-a-solid-block-of-diamond head, let’s move on to what we can, and how they’re similar in an optical bench and a camera support. Those things are resonances and damping.
I think I’ll let that wait until tomorrow.