I got some perceptive comments about the resampling post, all from the same person.
Here’s the first:
For starters, two-thirds (not half) of the information in any color image from a Bayer-pattern sensor is fabricated. Half of the green and three-quarters each of the red and blue pixel values are made up.
Absolutely true. My use of “half” was generous, buying into the logic of the Bayer array’s designers that you don’t need as much chroma information as you do luminance information. I have explained this elsewhere, but I didn’t do it this time. I did qualify the “half” part, saying, “Even under the most optimistic assumptions, half of the bits in the file bear no information not carried elsewhere in the file.” I did get sloppy with the “bits” reference, not wanting to go into a lot of detail about exactly what I meant by “information”.
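If you want to see where the commenter’s two-thirds comes from, here’s a back-of-the-envelope sketch in Python. It just counts measured versus interpolated channel values in one 2×2 RGGB Bayer tile; the arithmetic, not the code, is the point:

```python
# Count measured vs. interpolated channel values in one 2x2 RGGB Bayer tile.
measured = {"R": 1, "G": 2, "B": 1}   # one color filter per photosite
needed = {"R": 4, "G": 4, "B": 4}     # every output pixel needs all 3 channels

for c in "RGB":
    print(f"{c}: {needed[c] - measured[c]} of {needed[c]} values interpolated")

total = sum(needed.values())                  # 12 output values per tile
fabricated = total - sum(measured.values())   # 8 of them are made up
print(f"overall: {fabricated}/{total} = {fabricated / total:.0%} interpolated")
# R: 3 of 4, G: 2 of 4, B: 3 of 4; overall 8/12, or two-thirds
```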
And:
Foveon sensors still have to do some pretty serious deconvolution (math) to arrive at their “separate” red, green, and blue values for every color pixel. This could be thought of as color “resampling” — not quite the same as the spatial resampling that is implied elsewhere, but still a lot of tricks with numbers.
I wouldn’t call the math that the Foveon sensors need “resampling”, since there are the same number of pixels out as in, and the value of each output pixel is derived solely from one input pixel. I wouldn’t call it “deconvolution”, either. It’s more like a three-by-three matrix multiplication: a color space conversion. Any digital sensor that I know of requires a color space conversion to get to Adobe RGB or ProPhoto RGB, the two most common working spaces among serious photographers, so I’m not too put off by that. I do acknowledge the reported color accuracy problems in the Foveon sensor, which may stem from the serial application of the color filters inherent in the silicon sensor stack.
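To show what I mean by a color space conversion rather than resampling, here’s a sketch in Python. The matrix entries are made up for illustration; a real one would come from profiling the sensor:

```python
import numpy as np

# Illustrative only: a made-up 3x3 matrix standing in for a real sensor
# profile. Each output pixel is a linear combination of that one input
# pixel's three raw channel values; no neighboring pixels are involved.
M = np.array([
    [ 1.8, -0.6, -0.2],
    [-0.3,  1.5, -0.2],
    [ 0.1, -0.4,  1.3],
])

def color_convert(raw: np.ndarray) -> np.ndarray:
    """raw has shape (height, width, 3); the result has the same shape."""
    return raw @ M.T

raw = np.random.rand(4, 6, 3)     # stand-in for stacked-photodiode data
out = color_convert(raw)
assert out.shape == raw.shape     # same number of pixels out as in
```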
Also:
Scanning backs (and certain multiple-exposure area backs) are the only digital capture devices that can deliver completely independent red, green, and blue image data at every pixel without any resampling of any kind [or color space conversion], but we all know how popular these are among photographers — even those who disdain “resampling”…
You are so right, modulo my objection to the word “resampling” for what the Foveon sensor does.
And lastly:
…the major misunderstanding presented…is equating printer “dots” with “pixels”, when these are not at all equivalent. A single printer “dot” is not much more useful than a single film “grain” — it’s the aggregate combination of many such dots (or grains) that carries the image information. Epson’s “1440 dots per inch” specification is not meant to imply a grid of “pixels”, but instead only tells us about the dot placement capability of the printer — the printer can lay down as many as 1440 individual ink dots per inch of media, but there may only be a fraction of this many dots per inch in many parts of a typical print. A higher dots-per-inch capability helps the printer produce crisper/smoother vector-based images, and allows the printer to use a smaller area to represent each pixel of a raster-based image.
This is just a terminology issue: do we call these things “printer pixels” or “dots”? If we all agree on what we mean, it doesn’t really matter, and I can see the point. However, I stand by my position, and I’d like to explain it in a little more detail.
When the memory in graphics cards was really expensive, we used to have an eight-bit digital-to-analog converter (DAC) for each color, but we represented each pixel in the image on the CRT by an eight-bit quantity stored in the buffer memory. We could only represent 256 colors at any pixel location, but we needed only a third of the memory required to store the 24-bit color value. When it came time to display a pixel, the graphics card accessed a lookup table (LUT) that gave us a 24-bit output, which the graphics card fed to the DACs. Thus, by changing the entries in the LUT, the display was capable of rendering 16.7 million colors, but only 256 at a time. When we wanted to render an image with 24 bits of color in it, we called upon software to do error-diffusion dithering, trading off spatial resolution for color accuracy.
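For those who never wrestled with one of those cards, here’s a sketch of the scheme in Python; the buffer dimensions and palette contents are arbitrary stand-ins:

```python
import numpy as np

# A sketch of an indexed-color frame buffer, with made-up dimensions.
# The buffer holds one byte per pixel; the palette (LUT) maps each byte
# to a 24-bit color that gets fed to the three DACs.
frame_buffer = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
lut = np.random.randint(0, 256, size=(256, 3), dtype=np.uint8)

# At display time, every stored index is replaced by its LUT entry.
screen = lut[frame_buffer]          # shape (480, 640, 3)

# One byte per pixel instead of three: a third of the memory, 16.7 million
# colors reachable by rewriting the LUT, but only 256 on screen at once.
assert screen.nbytes == 3 * frame_buffer.nbytes
```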
Today, we have printers that are capable of laying down thousands of colors at any one point on the paper, and we trade off spatial resolution for color accuracy using diffusion dither.
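Here’s a sketch of the classic Floyd-Steinberg error-diffusion dither, assuming a grayscale image and a two-level output as a stand-in for the printer’s ink-or-no-ink decision:

```python
import numpy as np

def diffuse(image: np.ndarray) -> np.ndarray:
    """Binarize a grayscale image in [0, 1], diffusing quantization error."""
    img = image.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0   # nearest available level
            out[y, x] = new
            err = old - new                    # what we got wrong here...
            # ...gets pushed onto the neighbors we haven't visited yet
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

ramp = np.tile(np.linspace(0.0, 1.0, 64), (32, 1))  # a smooth gray ramp
halftone = diffuse(ramp)  # local density of 1s approximates the local tone
```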
We were perfectly happy to call the things the graphics card stored in its buffer “pixels” (or “pels” in some circles). Not full 24-bit-color pixels, but pixels just the same. Why not do the same with the things the printer puts down? The word “dots” made a kind of sense to me when there were only four inks and one drop size, but it doesn’t any more.
But call those printer ink patches “dots” if you like.