
the last word

Photography meets digital computer technology. Photography wins -- most of the time.


Testing for shutter slap

December 21, 2013 JimK 2 Comments

The vibration issues with the a7R’s shutter have caused me to spend more time thinking about how to test for shutter-induced vibration. The same techniques can be used to test for mirror-slap vibration in SLRs, but that’s not usually too much of an issue, for two reasons:

  • mirror-slap vibration is larger, and therefore easier to see in test images
  • every SLR worth its salt has at least one, and often several, ways to let the vibrations caused by the mirror rising to die down before the shutter opens

I started on the shutter slap testing when I was trying to get my firehouse pictures sharper. My first approach was to make photographs of a test target, which I then analyzed for sharpness with the aid of a computer program that I wrote. This scheme enjoyed modest success, but had a problem that I was unable to solve: I couldn’t separate sharpness variations caused by shutter speed from sharpness variations caused by f-stop at fast shutter speeds. To perform such a test, I would need a light source that could vary its output level over a broad range without changing its spectrum. I used the modeling lights on my studio flash units for some measurements, and they worked fairly well, but the color temperature fell as I dialed down the light output, and that made diffraction-influenced images blurrier, since red light is more easily diffracted than blue. In addition, the lights don’t get very bright.

Ideally, there should be a continuum of tests performed with the electronic flash duration determining the exposure time, and tests where the exposure time is only a function of the camera’s shutter. The consistency check would be that the results are identical when the camera’s shutter speed is set to the flash duration. However, I’d need a very bright continuous light source to do that. How bright? To match 1000 watt-second flashes with durations of 1/500 second, I’d need a 500,000-watt light source. If I could find one, and if I could afford it, it would probably set my target on fire.
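The brightness arithmetic is just energy divided by time; a minimal sketch (the function is my own illustration, not anything from my analysis program):

```python
# A flash rated in watt-seconds delivers energy E; a continuous source must
# supply power P = E / t to deliver the same energy during the flash duration t.

def equivalent_continuous_watts(flash_watt_seconds, flash_duration_s):
    """Continuous power matching a flash's output over its duration."""
    return flash_watt_seconds / flash_duration_s

# 1000 W·s delivered in 1/500 second requires a 500,000 W continuous source.
print(equivalent_continuous_watts(1000, 1 / 500))  # 500000.0
```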

There was another problem with testing for shutter slap. Every shutter speed produced a new set of results, unrelated to the tests performed at other shutter speeds. You can look at the results of several such tests, and postulate a model for why you’re getting the results you’re getting, but it’s very indirect and unsatisfying.

That was what led me to the oscilloscope testing. This was an attempt to make an image that allowed me to determine, for any camera/lens/tripod/head/orientation (see how complicated it is even without introducing shutter speed?) the characteristic shapes of the forcing function and the nature of the resonance(s). If I had all that, I could calculate blur in sensels for any shutter speed.
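As a sketch of what that blur calculation might look like, here is a toy model, with all parameters invented for illustration, that treats the sensor displacement as a single decaying sinusoid and reports the peak-to-peak excursion during the exposure window:

```python
import numpy as np

# Toy model: sensor displacement after the shutter-release impulse is a
# damped sinusoid x(t) = A * exp(-t / tau) * sin(2*pi*f*t). The blur for a
# given shutter speed is the peak-to-peak displacement while the shutter
# is open. Amplitude, frequency, and decay constant are made-up values.

def displacement(t, amp_sensels=2.0, freq_hz=60.0, tau_s=0.05):
    return amp_sensels * np.exp(-t / tau_s) * np.sin(2 * np.pi * freq_hz * t)

def blur_sensels(shutter_s, n=10_000):
    t = np.linspace(0.0, shutter_s, n)
    x = displacement(t)
    return x.max() - x.min()

for s in (1 / 500, 1 / 125, 1 / 30):
    print(f"1/{round(1 / s)} s: {blur_sensels(s):.2f} sensels of blur")
```

With measured values for the forcing function and the resonance(s) in place of the invented ones, the same loop would produce a blur figure for every shutter speed at once.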

The problem with the oscilloscope testing was resolution. Taking a picture of a point target inherently limits the resolution of the results to one sensel, although if the trace progresses across several sensels perpendicular to the time base, there is the possibility for interpolation. But it was worse than that. For long lenses, I couldn’t back far enough away to get the size of the ‘scope trace to one sensel or less. Getting the trace on the sensor smaller meant getting further away, which meant going outdoors, or bouncing the scope image around the room on mirrors.
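The geometry behind that distance problem is simple thin-lens magnification; here is a sketch with assumed numbers (a 0.5 mm spot, a 400 mm lens, and roughly the a7R’s 4.9 µm sensel pitch):

```python
# Thin-lens sketch: a spot of diameter s at distance d from a lens of focal
# length f images at roughly s * f / (d - f). Solving for the distance that
# shrinks the spot to one sensel shows why long lenses force you outdoors.

def spot_on_sensor_mm(spot_mm, focal_mm, distance_mm):
    return spot_mm * focal_mm / (distance_mm - focal_mm)

def distance_for_one_sensel_mm(spot_mm, focal_mm, sensel_mm):
    # Solve spot_mm * f / (d - f) = sensel_mm for d.
    return focal_mm * (1 + spot_mm / sensel_mm)

# Assumed values: 0.5 mm 'scope spot, 400 mm lens, 4.9 µm sensel pitch.
d = distance_for_one_sensel_mm(0.5, 400, 0.0049)
print(f"required distance: {d / 1000:.1f} m")  # about 41 m
```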

I checked out first-surface mirrors. They’re not cheap, and, if you want them big, they’re darned expensive. The prospect of aligning the mirrors (a trickier job if they’re small) and worrying about vibration in the mirror holders also seemed daunting. I set that aside.

There are other indirect approaches that avoid the need to check each shutter speed separately. I could mount a laser in the accessory shoe, pass the beam through a spinning prism, and photograph the beam’s arrival on a screen. The main difficulties with this are

  • There’s no reason to think the laser beam will be much smaller than the scope trace.
  • Engineering the prism drive system.
  • Converting motion on the screen to motion at the sensor.
  • And – the worst one of all – not knowing how motion of the accessory shoe relates to motion of the sensor.

Another indirect approach is placing an accelerometer in the accessory shoe. The problems with that are:

  • Figuring out how vibration relates to relative motion of the sensor and the image projected onto the sensor by the lens. This is especially tricky with long lenses, where rotation of the camera/lens about the mounting position is probably more significant than up-and-down or side-to-side movement about the mounting position. Front to back motion isn’t important at all.
  • Again, not knowing how motion of the accessory shoe relates to motion of the sensor. This is more a problem with short lenses, since they have tighter coupling to the tripod/head assembly, higher resonant frequencies, and more damping. With really long lenses and lens collar tripod mounts, the coupling of the accelerometer to the accessory shoe will create far less motion than the pivoting of the lens/camera assembly around the place where it attaches to the ball head. However, we already know that the a7R will not perform well with a really long lens, so there’s not much point in putting that poor performance under a microscope.
  • If the accelerometer has mass that is anywhere near that of the camera/lens combination, it will affect the camera motion.
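One way to see the first difficulty: blur depends on displacement, which is two integrations away from what the accelerometer reports, and any offset or drift in the readings grows rapidly under double integration. A sketch on synthetic data (sample rate and vibration frequency are assumptions):

```python
import numpy as np

# Double-integrate a synthetic accelerometer trace to displacement.
# Even with clean data, the unknown initial velocity shows up as a
# linearly growing term after integration; real sensor bias grows
# quadratically, which is why raw shoe-mounted accelerometer readings
# are hard to convert to sensor displacement.

fs = 4000.0                                # assumed sample rate, Hz
t = np.arange(0.0, 0.1, 1 / fs)
accel = np.sin(2 * np.pi * 80 * t)         # synthetic 80 Hz vibration, m/s^2

velocity = np.cumsum(accel) / fs           # first integration, m/s
position = np.cumsum(velocity) / fs        # second integration, m

print(f"apparent peak displacement: {position.max() * 1e6:.0f} µm")
```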

Ferrell McCollough, a reader of this blog, has performed an insightful and interesting study of the vibrations of several cameras, including the a7R, by attaching an iPhone to the accessory shoe of the camera and running an app that provides three-axis readouts from the phone’s built-in accelerometer. Although such studies are useful in identifying resonant frequencies and damping factors, and in getting a sense of the primary forces exerted by the cameras’ shutters versus time, the graphs can’t be connected to image displacement on the sensor for the reasons described above. Also, an iPhone has mass that is not small compared to an a7R and a short lens, so it will affect the results.

There is one thing that the accessory-shoe accelerometer studies can do that all the oscilloscope pictures in the world can’t: show us the vibrations that occur before the shutter opens. While these vibrations can’t affect the image directly, they give rise to vibrations that do affect the image, and the amplitudes of the two should certainly be correlated, and probably proportional.

What I’m now doing with the ‘scope is figuring out a way to simultaneously, or with one setup and two shots, measure vertical and horizontal vibrations. My initial approach was to rotate the camera and make exposures with the time base operating in a horizontal direction. This doesn’t provide definitive results if the camera has a preferential direction of vibration, and I haven’t yet found a camera that doesn’t. My current plan is to make photographs with the time base horizontal and vertical, or possibly tilted 45 degrees right and left from horizontal. I’m still struggling with the spot size issue.

It is likely that fractional-sensel vibrations cause notable image degradation. If the peak-to-peak initial amplitudes of the vibrations measured by the oscilloscope technique are less than one sensel, they will be very difficult to measure. If they’re less than two sensels, the resonant frequency will be difficult to measure. Maybe a useful adjunct to the testing that I’m doing would be exciting the camera/adapter/lens/tripod/head system with an impulse larger than the camera’s shutter or mirror can be expected to deliver. That way the frequency and damping factor could be calculated more accurately. I can’t immediately think of a device to deliver the forcing function that is sufficiently low-mass and repeatable, but I’m sure there’s something.
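For the externally excited case, pulling the resonant frequency and damping out of a recorded ring-down is straightforward; here is a sketch on synthetic data, using zero crossings for the frequency and the logarithmic decrement of successive peaks for the decay constant:

```python
import numpy as np

# Synthetic ring-down standing in for a digitized 'scope trace:
# 45 Hz resonance with a 40 ms decay constant.
fs = 10_000.0
t = np.arange(0.0, 0.2, 1 / fs)
x = np.exp(-t / 0.04) * np.sin(2 * np.pi * 45.0 * t)

# Frequency from zero crossings: N crossings span (N - 1) half-periods.
crossings = np.where(np.diff(np.signbit(x)))[0]
f_est = (len(crossings) - 1) / (2 * (t[crossings[-1]] - t[crossings[0]]))

# Decay constant from the ratio of two positive peaks one period apart.
period = int(fs / f_est)
p1 = x[:period].max()
p2 = x[period:2 * period].max()
tau_est = (period / fs) / np.log(p1 / p2)

print(f"f is about {f_est:.1f} Hz, tau about {tau_est * 1000:.0f} ms")
```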

Another avenue worthy of pursuit is to combine the ‘scope measurements with the accessory-shoe-mounted accelerometer measurements using an external stimulus that’s large enough to be analyzable with ‘scope images. Having done that, for any camera/lens/head/tripod combination, we’d have a correlation between sensor degradation and accelerometer readings. We could then see what the accelerometer reads with the camera itself providing the initiating impulses, and use the scaling factor that we derived in the externally excited case to convert the accelerometer readings to sensel blurring.
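The proposed calibration reduces to deriving a single scale factor; a sketch with invented numbers:

```python
# Calibration sketch: excite the rig externally, record both the
# 'scope-derived blur (sensels) and the accelerometer peak reading,
# then scale shutter-induced readings by the same factor.
# All numbers are invented for illustration.

def scale_factor(blur_sensels_external, accel_peak_external):
    """Sensels of blur per unit of accelerometer reading."""
    return blur_sensels_external / accel_peak_external

def predicted_blur_sensels(accel_peak_shutter, k):
    return k * accel_peak_shutter

k = scale_factor(3.0, 0.6)              # e.g. 3 sensels of blur at 0.6 g peak
print(predicted_blur_sensels(0.15, k))  # shutter alone at 0.15 g peak
```

In reality the factor would have to be derived per camera/lens/head/tripod combination and per axis, since the coupling between shoe motion and sensor motion differs in each case.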

Another avenue for future work, which I will probably pursue sooner rather than later because it’s relatively easy to do, is testing the sharpness of various lenses on my high-frequency sharpness target, using neutral density filters to modulate the transmission of the lens, so that I can test various shutter speeds while keeping the f-stop, ISO setting, and post-processing of the raw images the same throughout a shutter speed series.
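The ND-filter bookkeeping for such a series is just a logarithm: each doubling of exposure time needs one more stop of neutral density to hold the exposure constant. A sketch:

```python
import math

# Stops of ND needed to keep exposure constant when slowing the shutter
# from a base speed, with f-stop and ISO fixed.

def nd_stops_for(base_shutter_s, test_shutter_s):
    return math.log2(test_shutter_s / base_shutter_s)

# A series from 1/500 s down to 1/15 s at a fixed aperture and ISO:
for s in (1 / 500, 1 / 250, 1 / 60, 1 / 15):
    print(f"1/{round(1 / s)} s needs {nd_stops_for(1 / 500, s):.1f} stops of ND")
```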

If anyone has any ideas, please let me know.

 


Comments

  1. Andrew says

    December 25, 2013 at 6:32 am

    Why not make things really difficult and suspend the camera with strings? That would take all the effects of the tripod on damping out. I suspect that it’s having a much greater effect on handheld shots than people think. I have a Samsung NX200 that has a shutter that feels like a mini catapult. On a tripod I can see clear shutter shock only from 1s to 1/60s with a 180mm lens, but I can rarely get a clear shot handheld until about 1/400s. With my NEX 6 and EFC I get clear shots as low as 1/60s.

    Reply
  2. Ferrell McCollough says

    December 27, 2013 at 2:16 pm

    Great work you have in progress. I started by creating a base image that has no shutter vibration, so one can then compare it with a normal exposure that includes shutter vibration. I made the “no shutter vibration” image in a dark room, opening the shutter for 10 seconds and firing the off-camera strobes at the 7-second mark. It’s at the same link you have above.

    Reply


Unless otherwise noted, all images copyright Jim Kasson.