There’s an article in the March/April 2012 issue of photo technique entitled “Mastering the Camera Histogram for Better Exposure”. The article contains some important misstatements. I’m not sure how they got past the magazine’s vetting process, but, if they gain currency by inheriting the stature of the magazine in which they are published, they may confuse photographers going forward.
In this post, I will deal with one of the misstatements. The author of the article, David Wells, is discussing the pixel count of the sensors in today’s digital cameras. He says:
Each pixel is usually made up of one red, one blue and two green sensors, a so-called “Bayer array…”
If engineers ruled the world, image capture pixels might actually be counted that way. However, for what I believe to be mainly marketing reasons, image capture pixels are actually counted quite differently. I dealt with this issue in passing in an essay back when The Last Word was a column in the CPA newsletter, Focus. It’s available here.
Here’s a simplified explanation of how image capture pixels are counted: each light-sensitive element that contributes to the final picture, no matter what filtration is in front of it, counts as one pixel. For a fuller discussion, see the end of this post. For all the details, look here.
So Wells is off by a factor of four. In cameras using a Bayer pattern, each element of the four-sensor pattern (one red, one blue, and two green) counts as a pixel; the whole four-sensor pattern is, by the logic of the camera manufacturer and user community, four pixels.
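To make the counting convention concrete, here’s a minimal sketch in Python. The 4000 x 4000 sensor and the RGGB layout are my assumptions, chosen so the totals come out to a round 16 megapixels; nothing here describes any particular camera.

```python
import numpy as np

# Tile the classic 2x2 RGGB Bayer pattern across a hypothetical
# 4000 x 4000 sensor: 0 = red, 1 = green, 2 = blue.
bayer_tile = np.array([[0, 1],
                       [1, 2]])
cfa = np.tile(bayer_tile, (2000, 2000))  # 4000 x 4000 color filter array

# Every filtered sensel counts as one pixel, so the advertised count
# is simply the total number of elements on the chip.
print(f"advertised pixels: {cfa.size:,}")                  # 16,000,000
print(f"  red sensels:   {np.count_nonzero(cfa == 0):,}")  # 4,000,000
print(f"  green sensels: {np.count_nonzero(cfa == 1):,}")  # 8,000,000
print(f"  blue sensels:  {np.count_nonzero(cfa == 2):,}")  # 4,000,000
```

Counted Wells’s way, that chip would be a four-megapixel sensor; counted the industry’s way, it’s sixteen.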
But wait, I hear some of you thinking: I’ve got a 16-megapixel camera, and there are 16 million RGB triplets in the files I get out of my raw converter. That is indeed true. However, I have bad news for you. Two-thirds of that data (half the green, and three-quarters of the red and blue) is generated by the raw conversion program, by interpolation or some other method. If interpolation is not a term that makes you say “Aha!”, a lay equivalent might be guessing (scientific guessing, to be sure, but guessing nonetheless). If image-processing mathematics doesn’t scare you, look here for a survey of methods for the artful production of missing data in raw conversion. If you’re not an engineer, take a look at Mike Collette’s great explanation of how digital capture works; it’s here. Look at slides 8 through 12.
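To show what that scientific guessing looks like in practice, here’s a minimal sketch of the simplest such method, bilinear interpolation. Real raw converters use far more elaborate algorithms; the function below is only an illustration of the principle, and every name in it is mine.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic, cfa):
    """Fill in the two missing color values at every sensel by
    averaging the nearest neighbors that actually measured them."""
    # Kernels that average the 2 or 4 nearest same-color neighbors
    # (and pass a sensel's own measured value through unchanged).
    green_k = np.array([[0, 1, 0],
                        [1, 4, 1],
                        [0, 1, 0]]) / 4.0
    rb_k = np.array([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]]) / 4.0
    rgb = np.zeros(mosaic.shape + (3,))
    for c, kernel in ((0, rb_k), (1, green_k), (2, rb_k)):
        plane = np.where(cfa == c, mosaic, 0.0)  # keep only measured values
        rgb[..., c] = convolve(plane, kernel, mode='mirror')
    return rgb

# A 16-sensel RGGB mosaic yields 48 output numbers; 32 of them, the
# two-thirds mentioned above, were never measured, only interpolated.
cfa = np.tile(np.array([[0, 1], [1, 2]]), (2, 2))  # 4 x 4 RGGB
mosaic = np.random.default_rng(0).random((4, 4))   # stand-in raw data
print(bilinear_demosaic(mosaic, cfa).shape)        # (4, 4, 3)
```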
Here’s the more complicated explanation.
The Japan Camera Industry Association (JCIA) has written a standard for counting pixels. All the camera manufacturers that I know of follow this standard. It’s called the Guideline for Noting Digital Camera Specifications in Catalogs. Among other things, it says that the camera manufacturers shall give top billing to the number of effective pixels, and that the number of effective pixels is
…The number of pixels on the image sensor which receive input light through the optical lens, and which are effectively reflected in the final output data of the still image…
It’s a little circular to define effective pixels in terms of pixels, but that goes back to the intent of the specification, which was to keep manufacturers from claiming even higher pixel counts than the standard allows.
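As an invented-numbers illustration of the vocabulary: real sensors carry extra photosites, such as the masked border used for black-level calibration, that never see image-forming light, and those don’t qualify as effective pixels under the guideline.

```python
# Hypothetical chip: a 20-photosite masked border surrounds the active area.
total_photosites = 4040 * 4040  # everything fabricated on the chip
effective_pixels = 4000 * 4000  # sites that see lens light and reach the output

print(f"total photosites: {total_photosites:,}")  # 16,321,600
print(f"effective pixels: {effective_pixels:,}")  # 16,000,000, sold as "16 MP"
```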
Note that all this applies only to specifying the number of pixels in a camera. When it comes to pixel counts of images converted from raw form, a pixel in a color image consists of at least three numbers: three for RGB or Lab, four for CMYK, and so on. Confusing, isn’t it?
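A quick back-of-the-envelope check ties the two usages together for the hypothetical 16-megapixel Bayer camera above:

```python
pixels = 16_000_000
measured = pixels        # one number per sensel on the chip
delivered = pixels * 3   # one RGB triplet per pixel after raw conversion

invented = delivered - measured
print(f"numbers measured:  {measured:,}")                # 16,000,000
print(f"numbers delivered: {delivered:,}")               # 48,000,000
print(f"fraction invented: {invented / delivered:.0%}")  # 67%, i.e. two-thirds
```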