The vibration issues with the a7R’s shutter have caused me to spend more time thinking about how to test for shutter-induced vibration. The same techniques can be used to test for mirror-slap vibration in SLRs, but that’s not usually too much of an issue, for two reasons:
- mirror-slap vibration is larger, and therefore easier to see in test images
- every SLR worth its salt has at least one, and often several, ways to let the vibrations caused by the mirror rising to die down before the shutter opens
I started on the shutter slap testing when I was trying to get my firehouse pictures sharper. My first approach was to make photographs of a test target, which I then analyzed for sharpness with the aid of a computer program that I wrote. This scheme enjoyed modest success, but it had a problem that I was unable to solve: I couldn’t separate sharpness variations caused by shutter speed from those caused by f-stop at fast shutter speeds. To perform such a test, I would need a light source that could vary its output level over a broad range without changing its spectrum. I used the modeling lights on my studio flash units for some measurements, and they worked fairly well, but the color temperature fell as I dialed down the light output, and that made diffraction-influenced images blurrier, since red light is more easily diffracted than blue. In addition, the lights don’t get very bright.
Ideally, there would be a continuum of tests: some with the electronic flash duration determining the exposure time, and some where the exposure time is only a function of the camera’s shutter, with the two arranged so that the results are identical when the camera’s shutter speed is set to the flash duration. However, I’d need a very bright continuous light source to do that. How bright? With 1000 watt-second flashes with durations of 1/500 second, I’d need a 500,000 watt light source. If I could find one, and if I could afford it, it would probably set my target on fire.
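The arithmetic behind that scary number is just power equals energy divided by time. A minimal sketch (the flash energy and duration are the figures from the paragraph above):

```python
def equivalent_continuous_watts(flash_watt_seconds, flash_duration_s):
    """Continuous power (watts) that delivers the flash's energy in its duration."""
    return flash_watt_seconds / flash_duration_s

# 1000 watt-second flash with a 1/500 second duration:
print(equivalent_continuous_watts(1000, 1 / 500))  # 500,000 watts of continuous light
```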
There was another problem with testing for shutter slap. Every shutter speed produced a new set of results, unrelated to the tests performed at other shutter speeds. You can look at the results of several such tests, and postulate a model for why you’re getting the results you’re getting, but it’s very indirect and unsatisfying.
That was what led me to the oscilloscope testing. This was an attempt to make an image that allowed me to determine, for any camera/lens/tripod/head/orientation combination (see how complicated it is even without introducing shutter speed?), the characteristic shapes of the forcing function and the nature of the resonance(s). If I had all that, I could calculate blur in sensels for any shutter speed.
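As a sketch of that last calculation, assuming the simplest possible case of a single damped resonance: model the sensor displacement as a decaying sinusoid and take the peak-to-peak excursion during the exposure window. All the numbers here are invented for illustration, not measurements.

```python
import math

def displacement(t, amp_sensels, freq_hz, damping_ratio):
    """Hypothetical damped-sinusoid model of sensor displacement, in sensels."""
    w = 2 * math.pi * freq_hz
    return amp_sensels * math.exp(-damping_ratio * w * t) * math.sin(w * t)

def blur_sensels(open_time_s, exposure_s, amp_sensels, freq_hz, damping_ratio, steps=2000):
    """Peak-to-peak displacement during the exposure window, in sensels."""
    ts = [open_time_s + exposure_s * i / steps for i in range(steps + 1)]
    xs = [displacement(t, amp_sensels, freq_hz, damping_ratio) for t in ts]
    return max(xs) - min(xs)

# Illustrative values: 3-sensel initial amplitude, 60 Hz resonance, 5% damping,
# shutter opening 5 ms after the impulse, 1/60 second exposure:
print(round(blur_sensels(0.005, 1 / 60, 3.0, 60.0, 0.05), 2))
```

Repeating the last call with different exposure times is exactly the shutter-speed sweep that is so tedious to do photographically.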
The problem with the oscilloscope testing was resolution. Taking a picture of a point target inherently limits the resolution of the results to one sensel, although if the trace progresses across several sensels perpendicular to the time base, there is the possibility of interpolation. But it was worse than that. For long lenses, I couldn’t back far enough away to get the size of the ‘scope trace down to one sensel or less. Getting the trace on the sensor smaller meant getting farther away, which meant going outdoors, or bouncing the scope image around the room on mirrors.
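The interpolation mentioned above can be sketched simply, under the assumption that the trace illuminates several sensels in the direction perpendicular to the time base: an intensity-weighted centroid of each column locates the trace to a fraction of a sensel.

```python
def centroid(intensities):
    """Intensity-weighted centroid of a one-dimensional column of sensel values."""
    total = sum(intensities)
    return sum(i * v for i, v in enumerate(intensities)) / total

# A trace straddling sensels 1 and 2 equally is located halfway between them:
print(centroid([0, 10, 10, 0]))  # 1.5
```

In practice noise and the trace's intensity profile limit how far below one sensel this gets you, but it beats reading the trace position to the nearest whole sensel.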
I checked out first-surface mirrors. They’re not cheap, and, if you want them big, they’re darned expensive. The prospect of aligning the mirrors (a trickier job if they’re small) and worrying about vibration in the mirror holders also seemed daunting. I set that aside.
There are other indirect approaches that avoid the need to check each shutter speed separately. I could mount a laser in the accessory shoe, pass the beam through a spinning prism, and photograph the beam’s arrival on a screen. The main difficulties with this are:
- There’s no reason to think the laser beam’s going to be much smaller than the scope trace.
- Engineering the prism drive system
- Converting motion on the screen to motion at the sensor.
- And – the worst one of all – not knowing how motion of the accessory shoe relates to motion of the sensor.
Another indirect approach is placing an accelerometer in the accessory shoe. The problems with that are:
- Figuring out how vibration relates to relative motion of the sensor and the image projected onto the sensor by the lens. This is especially tricky with long lenses, where rotation of the camera/lens assembly about the mounting position is probably more significant than up-and-down or side-to-side translation. Front-to-back motion isn’t important at all.
- Again, not knowing how motion of the accessory shoe relates to motion of the sensor. This is more of a problem with short lenses, since they have tighter coupling to the tripod/head assembly, higher resonant frequencies, and more damping. With really long lenses and lens-collar tripod mounts, the motion coupled to the accessory shoe will be far smaller than the pivoting of the lens/camera assembly around the place where it attaches to the ball head. However, we already know that the a7R will not perform well with a really long lens, so there’s not much point in putting that poor performance under a microscope.
- If the accelerometer has mass that is anywhere near that of the camera/lens combination, it will affect the camera motion.
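To put rough numbers on the rotation-versus-translation point above, here is a back-of-the-envelope sketch; the focal length, pitch angle, translation, magnification, and sensel pitch are all made-up illustrative values, not measurements of any particular camera. For a distant target, a small pitch rotation shifts the image by roughly the focal length times the angle, while translating the whole camera shifts the image only by the translation times the (tiny) magnification.

```python
def shift_from_rotation_sensels(focal_mm, pitch_rad, sensel_pitch_um):
    """Image shift, in sensels, from a small pitch rotation (distant subject)."""
    return focal_mm * 1000 * pitch_rad / sensel_pitch_um

def shift_from_translation_sensels(translation_um, magnification, sensel_pitch_um):
    """Image shift, in sensels, from translating the whole camera."""
    return translation_um * magnification / sensel_pitch_um

# 400 mm lens, 10 microradians of pitch, 4.9 micron sensel pitch:
print(round(shift_from_rotation_sensels(400, 10e-6, 4.9), 2))   # ~0.82 sensels
# 10 microns of up-and-down translation at 1:100 magnification:
print(round(shift_from_translation_sensels(10, 0.01, 4.9), 3))  # ~0.02 sensels
```

With these invented numbers, a barely perceptible rotation moves the image forty times as far as a comparatively large translation, which is why the long-lens case is dominated by pivoting.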
Ferrell McCollough, a reader of this blog, has performed an insightful and interesting study of the vibrations of several cameras, including the a7R, by attaching an iPhone to the accessory shoe of the camera and running an app that provides three-axis readouts from the phone’s built-in accelerometer. Although such studies are useful in identifying resonant frequencies and damping factors, and also in getting a sense of the primary forces exerted by the cameras’ shutters versus time, the graphs can’t be connected to image displacement on the sensor for the reasons described above. Also, an iPhone has mass that is not small compared to that of an a7R and a short lens, so it will affect the results.
There is one thing that the accessory-shoe accelerometer studies can do that all the oscilloscope pictures in the world can’t: show us the vibrations that occur before the shutter opens. While these vibrations can’t affect the image directly, they give rise to vibrations that do, and the amplitudes of the two should certainly be correlated, and are probably proportional.
What I’m now doing with the ‘scope is figuring out a way to measure vertical and horizontal vibrations simultaneously, or at least with one setup and two shots. My initial approach was to rotate the camera and make exposures with the time base operating in a horizontal direction. This doesn’t provide definitive results if the camera has a preferential direction of vibration, and I haven’t yet found a camera that doesn’t. My current plan is to make photographs with the time base horizontal and vertical, or possibly tilted 45 degrees right and left from horizontal. I’m still struggling with the spot size issue.
It is likely that fractional-sensel vibrations cause notable image degradation. If the peak-to-peak initial amplitudes of the vibrations measured by the oscilloscope technique are less than one sensel, they will be very difficult to measure. If they’re less than two sensels, the resonant frequency will be difficult to measure. Maybe a useful adjunct to the testing that I’m doing would be exciting the camera/adapter/lens/tripod/head system with an impulse larger than the camera’s shutter or mirror can be expected to deliver. That way the frequency and damping factor could be calculated more accurately. I can’t immediately think of a device to deliver the forcing function that is sufficiently low-mass and repeatable, but I’m sure there’s something.
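One classical way to pull the frequency and damping factor out of such an externally excited test is the logarithmic decrement: read two successive same-sign peak amplitudes and their spacing off the trace. A sketch with invented peak readings, assuming a single underdamped resonance:

```python
import math

def damping_ratio_from_peaks(peak1, peak2):
    """Damping ratio from two successive same-sign peak amplitudes."""
    delta = math.log(peak1 / peak2)  # logarithmic decrement
    return delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)

def resonant_freq_hz(peak_spacing_s):
    """Resonant frequency from the time between successive same-sign peaks."""
    return 1.0 / peak_spacing_s

# Invented readings: successive peaks of 4.0 and 3.0 sensels, 1/60 second apart:
print(round(damping_ratio_from_peaks(4.0, 3.0), 3))  # ~0.046
print(round(resonant_freq_hz(1 / 60), 1))            # 60.0 Hz
```

The larger the external impulse, the more peaks rise clearly above the one-sensel floor, and the better these two estimates get.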
Another avenue worthy of pursuit is to combine the ‘scope measurements with the accessory-shoe-mounted accelerometer measurements using an external stimulus that’s large enough to be analyzable with ‘scope images. Having done that, for any camera/lens/head/tripod combination, we’d have a correlation between sensor degradation and accelerometer readings. We could then see what the accelerometer reads with the camera itself providing the initiating impulses, and use the scaling factor that we derived in the externally excited case to convert the accelerometer readings to sensel blurring.
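If the relationship turns out to be roughly linear, the proposed calibration is just a scaling factor. A sketch with invented numbers standing in for the externally excited and shutter-only measurements:

```python
def scale_factor(scope_blur_sensels, accel_reading_g):
    """Sensels of blur per g, from the externally excited calibration run."""
    return scope_blur_sensels / accel_reading_g

def predicted_blur_sensels(accel_reading_g, factor):
    """Blur predicted from a shutter-only accelerometer reading."""
    return accel_reading_g * factor

# Calibration: a big external impulse yields 5.0 sensels of blur in the 'scope
# photographs while the accelerometer reads 0.25 g:
k = scale_factor(5.0, 0.25)                       # 20.0 sensels per g
# The shutter alone reads 0.02 g on the accelerometer:
print(round(predicted_blur_sensels(0.02, k), 2))  # 0.4 sensels
```

The factor is only good for the particular camera/lens/head/tripod combination it was derived on, since the coupling between shoe and sensor changes with every component.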
Another avenue for future work, which I will probably pursue sooner rather than later because it’s relatively easy to do, is testing the sharpness of various lenses on my high-frequency sharpness target, using neutral density filters to modulate the transmission of the lens so that I can test various shutter speeds while keeping the f-stop, ISO setting, and post-processing of the raw images the same throughout a shutter speed series.
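The ND-filter bookkeeping for such a series is simple: each stop of shutter speed is matched by a stop of neutral density. A sketch, taking 1/500 second as a hypothetical fastest speed in the series:

```python
import math

def nd_stops(shutter_speed_s, fastest_speed_s):
    """Stops of ND to add at this speed to match exposure at the fastest speed."""
    return math.log2(shutter_speed_s / fastest_speed_s)

for denom in (60, 125, 250, 500):
    print(f"1/{denom} s: {nd_stops(1 / denom, 1 / 500):.2f} stops of ND")
```

The round-numbered speeds (1/125, 1/250) fall almost exactly one stop apart, so a set of whole-stop ND filters covers the series.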
If anyone has any ideas, please let me know.
Why not make things really difficult and suspend the camera with strings? That would take all the effects of the tripod on damping out of the picture. I suspect that it’s having a much greater effect on handheld shots than people think. I have a Samsung NX200 that has a shutter that feels like a mini catapult. On a tripod I can see clear shutter shock only from 1s to 1/60s with a 180mm lens, but I can rarely get a clear shot handheld until about 1/400s. With my NEX-6 and EFC I get clear shots as low as 160s.
Ferrell McCollough says
Great work you have in progress. I started creating a base image that has no shutter vibration, so one could then compare it with a normal exposure that does. I made the “no shutter vibration” image in a dark room: open the shutter for 10 seconds, and at the 7-second mark fire the off-camera strobes. It’s at the same link you have above.