
Scanning vs Stitching

June 23, 2009 · JimK

I went to Monument Valley last weekend. It was just a quick trip, and with non-photographer friends along, I mostly just played tourist. I did take one camera, one lens, and no tripod. The camera was a 4000 by 6000 pixel 35 mm format digital. I amused myself by snapping off 6- to 12-picture panoramas.

When I got home, I fired up a stitching software package called Autopano Pro 2. I’d never used it before, and I was truly amazed at what it could do.

The first revelation was its ability to automatically and accurately detect groups of photographs that needed to be stitched together. It doesn’t do this by analyzing the images; instead it looks at when the photographs were taken, the exposure information, and the focal length of the lens. When I took the pictures I didn’t know how the program worked, and it still did a pretty good job; now that I understand it, I can easily do things (like changing the focal length of a zoom lens slightly between panoramas) that should make it essentially perfect.
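For the curious, the grouping step amounts to clustering shots by their metadata. Here’s a minimal Python sketch of that sort of logic, using made-up field names; it’s my guess at the general approach, not Kolor’s actual code:

    from datetime import timedelta

    def group_candidate_panoramas(shots, max_gap_seconds=30):
        # Cluster shots into candidate panoramas using EXIF-style metadata.
        # Each shot is a dict with 'timestamp' (a datetime), 'focal_length'
        # (mm), and 'exposure' ((shutter_seconds, f_number)) -- these field
        # names are made up for the sketch.
        groups, current = [], []
        for shot in sorted(shots, key=lambda s: s['timestamp']):
            if current:
                prev = current[-1]
                close_in_time = (shot['timestamp'] - prev['timestamp']
                                 <= timedelta(seconds=max_gap_seconds))
                same_settings = (shot['focal_length'] == prev['focal_length']
                                 and shot['exposure'] == prev['exposure'])
                if not (close_in_time and same_settings):
                    if len(current) > 1:      # a lone frame is not a panorama
                        groups.append(current)
                    current = []
            current.append(shot)
        if len(current) > 1:
            groups.append(current)
        return groups

That’s also why nudging the zoom between panoramas works so well: it hands the grouping logic a clean break in the metadata.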

The second surprise was the quality of the results. When I had done panoramas before I had always used a tripod, and usually I went to the trouble of adjusting the camera so that the pivot point was at the nodal point of the lens. Even so, the stitching software left artifacts that took a lot of manual cleaning up. For distant scenes, with the camera handheld, Autopano did a flawless job. There were some problems if the foreground was too close to the camera, but it’s hard to blame that on the stitching software, since I didn’t use a tripod.

The combination of ease of use and quality of the results has given me a new perspective on stitching. I don’t think it’s just for panoramas any more. Consider the numbers: holding the camera that I used vertically, making three exposures, and assembling the result into a 35 mm shaped horizontal gives a 6000×9000 pixel image. If you’re hand holding it, you’re probably going to have to crop a little, so you’ll have maybe 5500×8250 pixels. Holding the camera the same way and doing two rows of three images each gives you (figuring in some overlap) a 9000×13500 pixel image. Both of those resolutions are seriously into scanning back territory.
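If you want to plug in your own camera, the arithmetic is easy to parameterize. The sketch below assumes 25 percent overlap between neighboring frames, which is just a working figure; the exact pixel counts depend on how carefully you swing the camera:

    def stitched_pixels(frame_w, frame_h, cols, rows, overlap=0.25):
        # Each added column or row contributes only the non-overlapping
        # fraction of a frame; overlap is the fraction shared with a neighbor.
        width = frame_w * (1 + (cols - 1) * (1 - overlap))
        height = frame_h * (1 + (rows - 1) * (1 - overlap))
        return round(width), round(height)

    # 4000 x 6000 sensor held vertically: each frame is 4000 wide, 6000 tall.
    print(stitched_pixels(4000, 6000, cols=3, rows=1))   # (10000, 6000) before cropping to 3:2
    print(stitched_pixels(4000, 6000, cols=3, rows=2))   # (10000, 10500)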

Let’s take a moment to review scanning backs. They’ve been around since the early 1990s, a time when rectangular image sensors captured less than two megapixels. The idea was that, although it was incredibly expensive to make a high-resolution sensor that could capture an entire image at once, it wasn’t a big deal to build a line sensor with thousands of pixels. If you had a subject that wasn’t moving, you could use a motor to slowly move the sensor across the entire image. Several companies built scanning backs that slid into 4×5 cameras like film holders. It was a little like having the guts of a flatbed scanner in your camera. There was an umbilical that plugged into a box that you put under the tripod, and the box had to be connected to a computer that you took into the field. Capture times were measured in single-digit minutes. The results were spectacular. Quality exceeded what could be obtained with 4×5 film by quite a bit. The long exposure times limited the choice of subject matter, and the requirement to tote a computer along made field work slow and awkward.

Today, the standard resolution for a scanning back is 6000×8000 pixels. If you need higher resolution and have $23,000 lying around, you can get a 10200×13600 pixel back.

There are two big differences between scanning and stitching that make comparisons more complicated than just counting pixels.

The first favors scanning: each pixel in a scanned exposure is a combination of independent red, green, and blue sensing elements, as opposed to the pixels in an instant capture, which are interpolated from the Bayer pattern in the sensor. I discussed that issue several years ago, and I figured that, to get the equivalent of real three-color pixel capture, you should divide the number of pixels in an instantly captured image by two, which is the same as dividing each dimension of the image by 1.4 (roughly the square root of two).
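Put as arithmetic, that Bayer discount looks like this (a quick sketch, using the one-row stitch from above as the example):

    import math

    def bayer_equivalent(width, height):
        # Halve the pixel count to approximate true three-color pixels,
        # which is the same as dividing each dimension by sqrt(2) (about 1.4).
        factor = math.sqrt(2)
        return round(width / factor), round(height / factor)

    # A 6000 x 9000 Bayer capture, compared on scanning-back terms:
    print(bayer_equivalent(6000, 9000))   # about (4243, 6364), i.e. ~27 "real" megapixels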

The second favors stitching: view camera lenses are not built to give the kind of resolution demanded by high performance scanning backs. It’s axiomatic in lens design that as coverage increases, resolving power (measured in line pairs per millimeter) decreases. A bigger capture area and a bigger lens will get you more good pixels, but the increase will be less than proportional to size. When you use stitching to get high resolution, you can use a small lens with high resolving power; you get the increased resolution by using the lens over and over for each exposure, rather than the much more challenging procedure of trying to get a lens that can resolve the entire shot at once.

For studio use with no people in the shot and continuous lighting (no strobes), the scanning back still has a place. But in the field, where the bulk of all the equipment you have to carry with you (camera, back, tripod, cables, electronics box, computer, etc.) can really limit your mobility, increase your set-up time, and send you to the chiropractor, stitching together small images is getting increasingly attractive. As I found out in Monument Valley, sometimes you don’t even need a tripod.

In the past, my reaction to the difficulties associated with using a scanning back was to just forget about really high-res images. With the twin improvements in instant-capture sensor resolution and stitching software, I think I’ll change my mind.

However, I do have a problem with some of my Monument Valley pictures: wall space. There’s an image I like especially well. It’s composed of twelve verticals arranged horizontally, and it’s 15000 by 6000 pixels. At 360 pixels per inch, that’s about 17×42 inches. It’s a nice picture, but it’s not good enough to turn that much wall over to it, and if I print it smaller, you won’t see all the detail.
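The print-size arithmetic behind those numbers, for anyone who wants to try other resolutions:

    width_px, height_px, ppi = 15000, 6000, 360
    print(width_px / ppi, height_px / ppi)   # about 41.7 by 16.7 inches, i.e. roughly 42 x 17
    # A 24-inch-wide print of the same file would land on paper at
    # 15000 / 24 = 625 pixels per inch -- more detail than the print
    # (or the viewer) can make out at that size.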
