I’ve had some email on the antialiasing posts. One person quotes me from Antialiasing, part 2 as follows:
The good news is that increasing the area of the sensor receptors reduces aliasing, and does it fairly efficiently. William Pratt, in his book Digital Image Processing, 2nd Edition, on pages 110 and 111, compares a square receptor with a diffraction-limited ideal lens and finds that, for the same amount of aliasing error, the lens provides greater resolution loss. He asserts, but does not provide data, that a defocused ideal lens would perform even more poorly than the diffraction-limited lens. In digital cameras, this kind of antialiasing filtering, which comes for free, is called fill-factor filtering, since it is related to how much of the grid allocated to the sensor is sensitive to light.
And then comments:
Increasing the sensor area also does something that those of us in the film world (at least me) had a hard time getting our heads around. It reduces depth of field.
I now regret the shorthand I used in the first sentence of the first quotation. I thought it was clear in context, but now I see that I needed to be more explicit. Let me try again, and even add a little explanation. What I should have said was:
…increasing the proportion of the sensor photosite area that is light-sensitive reduces aliasing, and does it fairly efficiently.
An example: Let’s talk about an array with a pixel pitch of 10 micrometers. Let’s say that an approximation of an ideal photo receptor would have the smallest quasi-practical area, or 700 nanometers (the wavelength of red light) on a side. In this example, about one half of one percent of the pixel’s footprint would be light sensitive (it would have a fill factor of 0.49%), and the resultant sensor would be a reasonable approximation of an ideal sampler. But it would be useful only in the studio under very bright lights, as it would be slow, noisy in dim light, and have low dynamic range. Now let’s make the light-sensitive area of the photoreceptor the entire 10 micrometer-sided square (it would have a fill factor of 100%). The resultant sensor would not be ideal, since it would roll off high-frequency image information, but it would have fewer aliasing problems, and it should be fast (faster than a Nikon D3), and have a reasonably good dynamic range.
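For readers who like to see the numbers, here is a rough sketch of the arithmetic, assuming a square light-sensitive aperture on a square pitch and an ideal box response (real photosites with microlenses are more complicated); the function names and figures are mine, chosen only for illustration:

```python
import numpy as np

def fill_factor(aperture_um, pitch_um):
    """Fraction of the photosite footprint that is light sensitive,
    assuming a square aperture on a square pitch."""
    return (aperture_um / pitch_um) ** 2

def box_mtf(aperture_um, cycles_per_um):
    """MTF of an ideal square aperture: |sinc(w * f)|,
    using the normalized sinc, sin(pi x)/(pi x)."""
    return np.abs(np.sinc(aperture_um * cycles_per_um))

pitch = 10.0                  # photosite pitch, micrometers
nyquist = 1.0 / (2 * pitch)   # Nyquist frequency, cycles per micrometer

for aperture in (0.7, 10.0):  # near-point sampler vs. 100% fill factor
    ff = fill_factor(aperture, pitch)
    mtf = box_mtf(aperture, nyquist)
    print(f"{aperture:5.1f} um aperture: fill factor {ff:.2%}, "
          f"MTF at Nyquist {mtf:.2f}")
```

The 700 nanometer aperture passes essentially all the detail at the Nyquist frequency (MTF of about 1.00), which is exactly why it aliases, while the 100% fill-factor photosite rolls off to roughly 0.64 there, trading a little resolution for less aliasing.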
The reader’s comment that larger sensor area reduces depth of field is true in the context of overall sensor size — the size of the whole chip. Bigger sensors, for the same angle of view, require lenses with longer focal length, and those lenses have shallower depth of field. This is true in the digital world, just as it was true in the film world. The larger the format, the shallower the DOF.
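To put a number on the format-size effect, here is a sketch using the standard thin-lens depth of field approximation, 2 N c u² / f², which holds when the subject is well inside the hyperfocal distance; the focal lengths and circles of confusion below are illustrative values I chose for two formats framed identically at the same f-number:

```python
def depth_of_field_mm(focal_mm, f_number, subject_mm, coc_mm):
    """Approximate total depth of field, 2*N*c*u^2 / f^2, valid when
    the subject distance is well inside the hyperfocal distance."""
    return 2 * f_number * coc_mm * subject_mm**2 / focal_mm**2

subject = 3000.0  # subject distance of 3 meters, in millimeters

# Same angle of view and same f-number on two formats; the circle of
# confusion scales with the format diagonal (values are illustrative).
formats = {
    "Micro Four Thirds (25 mm lens)": (25.0, 0.015),
    "Full frame (50 mm lens)":        (50.0, 0.030),
}

for name, (focal, coc) in formats.items():
    dof = depth_of_field_mm(focal, 2.8, subject, coc)
    print(f"{name}: about {dof:.0f} mm of depth of field at f/2.8")
```

With these assumptions the smaller format gives roughly twice the depth of field of the larger one for the same framing and f-number, which is the reader’s point in quantitative form.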