Resampling has a bad name among many photographers. Here’s a typical pronouncement, from here:
“I am not going to address resampling here because it degrades an image and has little application in fine art photography. (Resampling is when Photoshop adds pixels to an image.)”
Dodgy grammar aside, I strongly disagree.
On the artistic level, I think a photographer should be encouraged to do whatever she wants to get the desired effect. Any filter, any color or tone manipulation, any mangling, spindling, folding, mutilating, and processing of pixels is fine. Heck, if it gets you where you want to be, just make some pixels up. The result is the only thing that matters, and any path that leads to a good result is a good path.
But for the rest of this post, let’s discount artistic license, and assume that we’re talking about the truest possible representation of the original scene on a piece of paper coming out of an inkjet printer. Even in that highly restricted case (so constrained that some would argue that it’s no longer art), resampling is not only not a bad thing; used right, it’s nearly always necessary for the best quality.
There is a feeling among some photographers that the pixels that come out of the camera are somehow pure and real, and that we shouldn’t mess with them. We should remember where those pixels come from. Unless you’re using a Foveon sensor, every color pixel in the image was created, either in the camera or in the RAW conversion process, by combining signals from a monochrome pixel and some of its neighbors. Even under the most optimistic assumptions, half of the bits in the file bear no information not carried elsewhere in the file.
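To make that concrete, here’s a toy sketch of the kind of interpolation a demosaicer performs. I’m assuming an RGGB Bayer pattern and using a crude neighborhood average; real RAW converters keep the measured sample and use far more sophisticated interpolation, so treat this as an illustration of the principle, not anyone’s actual pipeline.

```python
import numpy as np

def demosaic_toy(mosaic):
    """Toy demosaic of an RGGB Bayer mosaic: every output channel at every
    site is an average of the nearby sensor sites of that color.  The point:
    two of the three channel values at each pixel are manufactured from
    neighbors -- they carry no information of their own."""
    h, w = mosaic.shape
    y, x = np.mgrid[0:h, 0:w]
    r_mask = (y % 2 == 0) & (x % 2 == 0)   # R sites
    b_mask = (y % 2 == 1) & (x % 2 == 1)   # B sites
    g_mask = ~(r_mask | b_mask)            # G sites
    out = np.zeros((h, w, 3))

    def box_sum(a):
        # Sum over the 3x3 neighborhood (wrapping at the edges, for brevity).
        return sum(np.roll(np.roll(a, dy, 0), dx, 1)
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1))

    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        samples = np.where(mask, mosaic, 0.0)
        counts = box_sum(mask.astype(float))
        out[..., c] = box_sum(samples) / counts
    return out
```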
There’s a quip in advertising: “Half of our ad budget is wasted, but we can’t tell which half.” The photographic counterpart of that is: “Half of the data created during the demosaicing process is worthless, but we don’t know how to sort out the redundancies.”
If we upsize an image, we create more redundancy. Is that bad? I don’t think so. It’s going to create a file that’s bigger than it has to be, but, given demosaicing, I don’t see how you can say that the redundancy-filled file that comes out of the RAW converter is pristine, and adding more redundancy somehow sullies it.
In inkjet printing, you can’t just say “no” to resampling. When you print your image on an inkjet printer, it’s going to be resampled. The question is: do you want to be in control, or are you willing to let the whole thing happen on autopilot? The situation is similar to gamut mapping. If you have an image with colors the printer can’t print, gamut mapping is not optional; if you don’t handle it yourself, the printer will do the job, and it will probably do it in a way that you don’t like.
Why does the image need to be resampled to be printed on an inkjet printer? Let’s say that the printer has 8 different ink colors, and can generate 3 different drop sizes. Counting no drop at all, that’s four states per ink, so at any one tiny spot on the paper, the printer can generate four to the eighth minus one, or 65,535 colors. Even an RGB image with eight bits per color plane can represent almost 17 million colors. It gets worse: many of the 65K colors that the printer can print are dark browns or muddy blacks, so there are far fewer useful colors. The printer does have one thing going for it, though; it can make these little color patches small and lay them down very close together – so close that they all run together to the eye.
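The arithmetic, spelled out:

```python
drop_states = 3 + 1                  # three drop sizes, plus no drop at all
inks = 8
printer_colors = drop_states ** inks - 1
rgb_colors = 2 ** (8 * 3)            # 8 bits per plane, 3 planes

print(printer_colors)                # 65535
print(rgb_colors)                    # 16777216
```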
The printer can’t print a lot of colors, but it can print them in great profusion. The way to get the printer to appear to print lots of colors is to print many tiny patches of several colors which the paper and the eye (using different combining algorithms, unfortunately) average to the desired color. The process of going from the many-colors-per-pixel representation of the image in the file to the few-colors-per-pixel representation at the print head is called halftoning – the name comes from printing press days when there were only two choices for the level of each ink available: ink or no ink.

In the old days, the halftoning patterns were regular and photographically generated by exposing the artwork onto a high-contrast film through a transparency called a screen. Now, with computers, the patterns are generated by machine. The old regular patterns have given way to a technique called error diffusion, in which the halftone generator keeps track of how close it is to the color it’s trying to make at the moment and keeps trying to produce colors that get it closer. Error diffusion used alone can produce some ugly patterns, so randomness is added to break up the patterns. The result is sometimes referred to as diffusion dither, or error diffusion with blue noise.

The specific algorithms used by the inkjet printer manufacturers are proprietary, but they are of this class. The biggest advantage of diffusion dither over regular patterns is esthetic: the artifacts, if they are visible, are more natural (a bit like film grain) and more pleasing to the eye than the rosettes that you see with old-fashioned halftoning.
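A minimal sketch of the textbook member of this class, Floyd–Steinberg error diffusion, for the simplest possible case: one-bit (ink or no ink) grayscale output. This is illustrative only, not any manufacturer’s algorithm, and it omits the added randomness described above.

```python
import numpy as np

def error_diffuse(img):
    """1-bit Floyd-Steinberg error diffusion on a grayscale image in [0, 1].
    Each pixel is snapped to 0 or 1, and the quantization error is pushed
    onto neighbors not yet visited, so local averages track the original."""
    img = img.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w))
    for yy in range(h):
        for xx in range(w):
            old = img[yy, xx]
            new = 1.0 if old >= 0.5 else 0.0
            out[yy, xx] = new
            err = old - new
            # Classic Floyd-Steinberg weights: 7/16, 3/16, 5/16, 1/16.
            if xx + 1 < w:
                img[yy, xx + 1] += err * 7 / 16
            if yy + 1 < h:
                if xx > 0:
                    img[yy + 1, xx - 1] += err * 3 / 16
                img[yy + 1, xx] += err * 5 / 16
                if xx + 1 < w:
                    img[yy + 1, xx + 1] += err * 1 / 16
    return out
```

Fed a uniform 50% gray, this produces a field of ones and zeros whose average stays very close to 0.5 – the error bookkeeping is what keeps the halftone faithful to the target color.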
Consider an inkjet printer with 1440 dots per inch resolution in both the horizontal and vertical directions. At every pixel position, it’s going to need a new target color. It will look at that color and the accumulated error, do some random magic, and decide what color to put down on the paper. If the input image has 1440 pixels per inch in each direction, then the printer will have to make no approximations to the target color.
That’s a lot of pixels, and you probably don’t feed your printer this diet. What if you send the printer a lower-resolution image? The printer driver and the printer will have to come up with some technique for turning the big pixels you send into the little pixels the printer needs. The simplest, and traditionally the most common, technique is called nearest neighbor: the printer/driver takes one of the big pixels as the target color and keeps printing little pixels with that target until some other big pixel is closer to the location of the current little pixel, at which point the printer/driver switches the target color to the new big pixel.
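For the special case where the printer resolution is an integer multiple of the image resolution, nearest neighbor reduces to simple pixel replication. A sketch (the function name is mine, not anything in a real driver):

```python
import numpy as np

def nearest_neighbor_upsample(img, factor):
    # Each big pixel becomes a factor-by-factor block of identical little
    # pixels -- the blocky result that better interpolators avoid.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)
```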
As we saw in the previous post, that’s not the optimum technique for output antialiasing. Bicubic interpolation is much better. You’ll get better results if you use the bicubic interpolation option in Photoshop and resize your image toward the actual printer resolution before you print. This is true in theory, and I have found it to be true in practice, but there are some caveats. If you have a reasonably large image, you will probably find that the printer driver chokes on a full-res version, and either crashes or goes to sleep.
I usually stop at either 360 or 720 pixels per inch. I pick the resolution by dividing the actual printer resolution by a power of two, so that, if the printer/driver uses nearest neighbor, the transitions will be made on even printer pixel boundaries.
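In Photoshop this is an Image Size operation; if you script it, the same step might look like this with Pillow (the function name and sizes are my own illustration; `Image.BICUBIC` is Pillow’s bicubic filter):

```python
from PIL import Image

def resize_for_print(img, print_width_inches, ppi):
    """Bicubic-resample to the chosen print size, at a resolution (360 or
    720 ppi) that divides the printer's 1440 dpi by a power of two."""
    w = round(print_width_inches * ppi)
    h = round(img.height * w / img.width)
    return img.resize((w, h), Image.BICUBIC)
```

A 3000×2000-pixel image printed 16 inches wide at 720 ppi comes out of this at 11520×7680 pixels, ready for output sharpening and then printing.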
Therefore, if you want the best print quality, resample your images before you hit control-P. This is normally one of the last steps in the image editing process, after you have decided the size of the printed image. This is also a good place to do unsharp masking (see the previous post). There’s no reason to save this expanded, sharpened file; why waste the space? Lightroom will do both the resampling and the sharpening for you, but the program is fairly opaque about the algorithms involved. I’ve had pretty good luck keeping the amount of sharpening low, but, if you want the best control, you’ll have to do the work in Photoshop.