Back when I was in high school physics class, the teacher taught us to analyze the dimensions involved in any problem we were trying to solve before plugging in the numbers and grinding out the calculations. If the dimensions weren’t right, you’d set up the problem wrong, and no amount of fancy calculus (this was second year physics) was going to save you. In college, I sometimes employed dimensional analysis to figure out the right way to set up a problem that I didn’t totally understand, which has to count as a perverse use of the technique, even if it was surprisingly effective.
Often when photographers are discussing digital cameras, they say things like, "This 36 megapixel a68MkII has three times the resolution of that 12 MP D8765." I think this is the wrong way to look at resolution, and that it causes photographers to make bad decisions.
It all comes down to dimensional analysis.
Back in the film days, when we used the Air Force test targets, we measured resolution in line pairs per millimeter (lp/mm). In the digital era, we occasionally still use that metric, but more often talk about the spatial frequency at which the modulation transfer function (MTF) falls below some value, say, 0.5. We call that metric MTF50, and its units are cycles per pixel width/height/spacing (cy/px), or cycles per picture height (cy/ph), or occasionally (a notation I don't like) line pairs or lines per pixel or per picture height.
In all cases there is a dimensionless number divided by a distance. In dimensional analysis, you’d say that the units of resolution are “one over distance”, or 1/L.
That means that, all else equal, if you had a 12 megapixel sensor of a given size, shape, and technology, to double the resolution you'd need 48 megapixels, not 24. And even then, your measured system resolution probably wouldn't fully double, because sharpness losses in the lens pull MTF50 in cycles per pixel down as the pixels get smaller.
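The square-root relationship is easy to check numerically. Here's a minimal Python sketch using the megapixel figures from the example above (ignoring lens losses, so it shows the best-case geometric scaling only):

```python
import math

def linear_resolution_gain(old_mp, new_mp):
    """Linear resolution scales with the square root of pixel count,
    assuming sensor size, shape, and technology stay fixed."""
    return math.sqrt(new_mp / old_mp)

# Doubling linear resolution requires quadrupling the pixel count:
print(linear_resolution_gain(12, 48))  # → 2.0
# Merely doubling the pixel count gains only about 41%:
print(linear_resolution_gain(12, 24))  # → ~1.414
```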
At this point, we should consider whether we've been wrong all these years in measuring resolution with one-over-distance dimensions. After all, "one over distance squared" or 1/L^2 dimensions are used in many places in science and engineering. An example is "areal density", defined as the number of bits or bytes in a square centimeter or square inch on a spinning magnetic disk. This is useful because it gives a rough idea of the amount of storage available in similarly-sized drives. It is not a good measure of transfer speeds, which tend to go as the linear density, or number of bits or bytes per millimeter along a track.
One reason to think that one over length is the right dimension for resolution is that all the human eye studies have been done in terms of cycles per visual angle, which translates directly to cycles per distance once the viewing distance is established.
Another is that all our tools for measuring and analyzing resolution — MTF, SQF — are based on linear measure.
Yet another is that photographers talk about lens magnification in terms of focal length, not focal length squared, which would be a natural metric if resolvable features per unit area were what’s important.
What’s the harm in discussing resolution in one over length squared terms? In my mind, the biggest reason is that it makes differences between sensor resolutions appear larger than they really are, and therefore can drive purchasing decisions in directions that are not in the best interest of the buyer.
The cynic in me thinks that the fact that making differences between sensors appear larger than they really are could be decidedly in the interests of the seller may be the most important reason why we talk so much in terms of pixel count instead. The camera manufacturers announce pixel counts, not picture height/length/diagonal in pixels. We photographers fasten on to the pixel counts and use them as the main way we talk about sensor resolution. Do that long enough, and it’s a small step to think that resolution is proportional to pixel count, not the square root of pixel count.
The next step is the worst one. That's thinking that a 36 MP camera has half again as much resolution as a 24 MP one, and that therefore, I gotta have it, when, in reality, it only offers a 22% increase in resolution, and therefore, I can do fine without it.
Just today, Sony announced a 42 MP replacement for a 36 MP camera. That’s only an 8% increase in resolution; something you’ll be hard-pressed to see in real pictures. The camera offers many other improvements over the current one, all of which may be excellent reasons to buy the new one. However, the increased resolution is a lousy reason to buy the new camera.
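Both of those percentages fall straight out of the square root. A quick Python check of the two comparisons above:

```python
import math

def resolution_increase_pct(old_mp, new_mp):
    # Percent gain in linear resolution from a pixel-count bump:
    # sqrt(new/old) - 1, expressed as a percentage.
    return (math.sqrt(new_mp / old_mp) - 1) * 100

print(round(resolution_increase_pct(24, 36)))  # 24 MP → 36 MP: 22
print(round(resolution_increase_pct(36, 42)))  # 36 MP → 42 MP: 8
```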
Don’t get me wrong. I have in the past been a champion of small pixels. I continue in that camp. I want the pixel counts to go up. Way up. But, because of that pesky square root in the conversion of pixel count to resolution, the numbers need to change a lot to make much of a difference.
To those who say that total pixel count is the measure of resolution, I say, if you're going to be that way, at least be consistent. When you buy binoculars, refer to the ones we today call 7×50 as "forty-nine by fifty" binocs. When you talk about zoom ratios, think areas, not focal lengths; a zoom with a minimum focal length of 50mm and a maximum of 300mm should be called a "thirty six to one" zoom. Same thing when you compare two lenses of different focal length. When you look at areal magnification, a 280mm lens is twice as big as a 200mm lens.
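The arithmetic behind those "consistent" areal figures, for anyone who wants to verify them:

```python
# If pixel count (an area measure) were the right resolution metric,
# other familiar photographic numbers would have to be squared too:
print(7 ** 2)            # 7x binocular magnification → "49x50" binocs
print((300 / 50) ** 2)   # 50-300mm zoom → a "36:1" areal zoom ratio
print((280 / 200) ** 2)  # 280mm vs 200mm → roughly 2x the areal magnification
```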
If you don’t like looking at the world that way — as I don’t — then please stop saying a 24 MP camera has twice the resolution of a 12 MP camera.