the last word

Photography meets digital computer technology. Photography wins -- most of the time.


Resampling

January 19, 2011 JimK

Resampling has a bad name among many photographers. Here’s a typical pronouncement, from here:

“I am not going to address resampling here because it degrades an image and has little application in fine art photography. (Resampling is when Photoshop adds pixels to an image.)”

Dodgy grammar aside, I strongly disagree.

On the artistic level, I think a photographer should be encouraged to do whatever she wants to get the desired effect.  Any filter, any color or tone manipulation, any mangling, spindling, folding, mutilating, and processing of pixels is fine. Heck, if it gets you where you want to be, just make some pixels up. The result is the only thing that matters, and any path that leads to a good result is a good path.

But for the rest of this post, let’s discount artistic license, and assume that we’re talking about the truest possible representation of the original scene on a piece of paper coming out of an inkjet printer. Even in that highly restricted case (so constrained that some would argue it’s no longer art), resampling is not only harmless but, used right, nearly always necessary for the best quality.

There is a feeling among some photographers that the pixels that come out of the camera are somehow pure and real, and that we shouldn’t mess with them. We should remember where those pixels come from. Unless you’re using a Foveon sensor, every color pixel in the image was created, either in the camera or in the RAW conversion process, by combining signals from a monochrome pixel and some of its neighbors. Even under the most optimistic assumptions, half of the bits in the file bear no information not carried elsewhere in the file.
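To make that redundancy concrete, here is a toy count over a Bayer (RGGB) mosaic, in which each sensor site measures exactly one of the three channels and demosaicing synthesizes the other two. This is my own illustration, not any camera’s actual pipeline:

```python
import numpy as np

# Toy Bayer (RGGB) illustration: mark which channel is actually
# measured at each sensor site; the other two channels at that site
# must be interpolated from neighbors during demosaicing.
h, w = 4, 4
bayer_mask = np.zeros((h, w, 3), dtype=bool)
bayer_mask[0::2, 0::2, 0] = True  # R sites
bayer_mask[0::2, 1::2, 1] = True  # G sites (even rows)
bayer_mask[1::2, 0::2, 1] = True  # G sites (odd rows)
bayer_mask[1::2, 1::2, 2] = True  # B sites

measured = int(bayer_mask.sum())  # one real sample per sensor site
total = h * w * 3                 # channel values in the RGB file
print(measured, total, measured / total)  # 16 48 0.333...
```

Two thirds of the channel values in the demosaiced RGB file are interpolated rather than measured, so the “half of the bits” estimate above is, if anything, generous.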

There’s a quip in advertising: “Half of our ad budget is wasted, but we can’t tell which half.” The photographic counterpart of that is: “Half of the data created during the demosaicing process is worthless, but we don’t know how to sort out the redundancies.”

If we upsize an image, we create more redundancy. Is that bad? I don’t think so. It’s going to create a file that’s bigger than it has to be, but, given demosaicing, I don’t see how you can say that the redundancy-filled file that comes out of the RAW converter is pristine, and adding more redundancy somehow sullies it.

In inkjet printing, you can’t just say “no” to resampling.  When you print your image on an inkjet printer, it’s going to be resampled. The question is: do you want to be in control, or are you willing to let the whole thing happen on autopilot? The situation is similar to gamut mapping. If you have an image with colors the printer can’t print, gamut mapping is not optional; if you don’t handle it, the printer will do the job, and it will probably do it in a way that you don’t like.

Why does the image need to be resampled to be printed on an inkjet printer? Let’s say that the printer has 8 different ink colors and can generate 3 different drop sizes. Counting no drop at all, that’s four states per ink, so at any one tiny spot on the paper the printer can generate four to the eighth, or 65,536, colors. An RGB image with eight bits per color plane, by contrast, can represent almost 17 million colors. It gets worse: many of the 65K colors that the printer can print are dark browns or muddy blacks, so there are far fewer useful colors. The printer does have one thing going for it, though: it can make these little color patches small and lay them down very close together – so close that they all run together to the eye.
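The arithmetic in that paragraph is easy to check. (I count all four states per ink, giving 4^8 = 65,536 combinations; subtracting the single all-inks-off state would give a 65,535 figure instead, but the order of magnitude is the point.)

```python
# 8 inks, each with 4 states per spot: no drop, or one of 3 drop sizes.
inks, states = 8, 4
printer_colors = states ** inks        # ink-state combinations per spot
rgb_colors = (2 ** 8) ** 3             # 8 bits per channel, 3 channels
print(printer_colors, rgb_colors, rgb_colors // printer_colors)
# 65536 16777216 256
```

The file can call for 256 times more colors than the print head can put down at any one spot, which is why halftoning has to make up the difference.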

The printer can’t print a lot of colors, but it can print them in great profusion. The way to get the printer to appear to print lots of colors is to print many tiny patches of several colors, which the paper and the eye (using different combining algorithms, unfortunately) average to the desired color.  The process of going from the many-colors-per-pixel representation of the image in the file to the few-colors-per-pixel representation at the print head is called halftoning – the name comes from printing-press days, when there were only two choices for the level of each ink: ink or no ink.

In the old days, the halftoning patterns were regular, and were generated photographically by exposing the artwork onto high-contrast film through a transparency called a screen. Now the patterns are generated by computer, and the old regular patterns have given way to a technique called error diffusion, in which the halftone generator keeps track of how far it is from the color it’s trying to make at the moment and keeps trying to produce colors that get it closer. Error diffusion used alone can produce some ugly patterns, so randomness is added to break them up. The result is sometimes referred to as diffusion dither, or error diffusion with blue noise. The specific algorithms used by the inkjet printer manufacturers are proprietary, but they are of this class. The biggest advantage of diffusion dither over regular patterns is esthetic: the artifacts, if they are visible, look more natural (a bit like film grain) and are more pleasing to the eye than the rosettes you see with old-fashioned halftoning.
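The classic published member of this class is Floyd–Steinberg error diffusion. Here is a minimal one-bit grayscale sketch of the idea; the jittered threshold is my own crude stand-in for the blue-noise randomness, and real drivers diffuse error across many inks and drop sizes, not a single bit:

```python
import random

import numpy as np

def diffusion_dither(gray, noise=0.05, seed=0):
    """One-bit Floyd-Steinberg error diffusion on a grayscale image
    scaled to [0, 1]. The randomly jittered threshold loosely mimics
    the noise added to break up error-diffusion patterns."""
    rng = random.Random(seed)
    img = gray.astype(float).copy()
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old > 0.5 + rng.uniform(-noise, noise) else 0.0
            out[y, x] = new
            err = old - new
            # Push the quantization error onto pixels not yet visited,
            # with the standard Floyd-Steinberg weights.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

flat = np.full((32, 32), 0.4)   # a flat 40% gray patch
dots = diffusion_dither(flat)
print(dots.mean())              # averages out close to 0.4
```

On a flat 40% gray patch the output dots average back to roughly the input level, which is the whole point: the quantization errors are pushed around rather than thrown away.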

Consider an inkjet printer with 1440 dots per inch of resolution in both the horizontal and vertical directions. At every pixel position, it’s going to need a new target color. It will look at that color and the accumulated error, do some random magic, and decide what color to put down on the paper. If the input image is 1440 pixels per inch in each direction, the printer will have to make no approximations to the target color.

That’s a lot of pixels, and you probably don’t feed your printer this diet. What if you send the printer a lower-resolution image? The printer driver and the printer will use some technique to get from the big pixels you send to the little pixels the printer needs. The simplest, and traditionally the most common, technique is called nearest neighbor: the printer/driver takes one of the big pixels as the target color and keeps printing little pixels with that target until some other big pixel is closer to the location of the current little pixel, at which point it switches the target color to the new big pixel.
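For integer scale factors, that nearest-neighbor scheme amounts to simple pixel replication. A minimal sketch (my own illustration, not any driver’s actual code):

```python
import numpy as np

def nearest_neighbor(big, factor):
    """Upsample by an integer factor the way the text describes:
    each small output pixel copies the nearest big input pixel."""
    return np.repeat(np.repeat(big, factor, axis=0), factor, axis=1)

big = np.array([[10, 20],
                [30, 40]])
print(nearest_neighbor(big, 2))
# [[10 10 20 20]
#  [10 10 20 20]
#  [30 30 40 40]
#  [30 30 40 40]]
```

Each input pixel simply becomes a block of identical output pixels, which is exactly the blocky, edge-stepping behavior the next paragraph argues against.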

As we saw in the previous post, that’s not the optimum technique for output antialiasing. Bicubic interpolation is much better. You’ll get better results if you use the bicubic interpolation option in Photoshop and resize your image toward the actual printer resolution before you print. This is true in theory, and I have found it to be true in practice, but there are some caveats. If you have a reasonably large image, you will probably find that the printer driver chokes on a full-res version, and either crashes or goes to sleep.
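Photoshop doesn’t publish its exact bicubic kernel, but the common choice is Keys cubic convolution with a = -0.5. A one-dimensional sketch under that assumption (function names are mine):

```python
import numpy as np

def cubic_kernel(t, a=-0.5):
    """Keys cubic convolution kernel; a = -0.5 is the usual
    'bicubic' choice (Catmull-Rom)."""
    t = np.abs(t)
    out = np.zeros_like(t)
    near = t <= 1
    far = (t > 1) & (t < 2)
    out[near] = (a + 2) * t[near]**3 - (a + 3) * t[near]**2 + 1
    out[far] = a * t[far]**3 - 5*a * t[far]**2 + 8*a * t[far] - 4*a
    return out

def upsample_1d(row, factor):
    """Upsample one row by an integer factor with cubic convolution,
    clamping indices at the edges."""
    n = len(row)
    x = np.arange(n * factor) / factor        # output coords, input units
    base = np.floor(x).astype(int)
    result = np.zeros(len(x))
    for offset in (-1, 0, 1, 2):              # the 4 contributing samples
        idx = np.clip(base + offset, 0, n - 1)
        result += row[idx] * cubic_kernel(x - (base + offset))
    return result

row = np.array([0.0, 10.0, 20.0, 30.0])
up = upsample_1d(row, 2)
print(up[::2])   # the original samples are reproduced exactly
```

Applying `upsample_1d` along rows and then columns gives bicubic upsampling; at positions that coincide with original samples the kernel weights are 0, 1, 0, 0, so the original data passes through untouched, while the in-between pixels get a smooth fit rather than a copy of a neighbor.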

I usually stop at either 360 or 720 pixels per inch. I pick the resolution by dividing the actual printer resolution by a power of two, so that, if the printer/driver uses nearest neighbor, the transitions will be made on even printer pixel boundaries.
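That divide-by-a-power-of-two rule is easy to mechanize. A sketch, with `print_ppi` and its ceiling parameter being my own names, not anything in a driver:

```python
def print_ppi(printer_dpi, ceiling):
    """Highest resolution at or below a practical ceiling that divides
    the printer's native resolution by a power of two, so that any
    nearest-neighbor step lands on even printer pixel boundaries.
    (The ceiling is my own knob; the text simply stops at 360 or 720.)"""
    ppi = printer_dpi
    while ppi > ceiling:
        ppi //= 2
    return ppi

print(print_ppi(1440, 800))   # 720
print(print_ppi(1440, 400))   # 360
```

For a 1440 dpi printer the candidates are 720, 360, 180, and so on; 360 and 720 are the two that are dense enough to matter without choking the driver.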

Therefore, if you want the best print quality, resample your images before you hit control-P. This is normally one of the last steps in the image editing process, after you have decided the size of the printed image. This is also a good place to do unsharp masking (see the previous post). There’s no reason to save this expanded, sharpened file; why waste the space? Lightroom will do both the resampling and the sharpening for you, but the program is fairly opaque about the algorithms involved. I’ve had pretty good luck keeping the amount of sharpening low, but, if you want the best control, you’ll have to do the work in Photoshop.




Copyright © 2023 · Daily Dish Pro On Genesis Framework · WordPress

Unless otherwise noted, all images copyright Jim Kasson.