Back when I was in high school physics class, the teacher taught us to analyze the dimensions involved in any problem we were trying to solve before plugging in the numbers and grinding out the calculations. If the dimensions weren’t right, you’d set up the problem wrong, and no amount of fancy calculus (this was second year physics) was going to save you. In college, I sometimes employed dimensional analysis to figure out the right way to set up a problem that I didn’t totally understand, which has to count as a perverse use of the technique, even if it was surprisingly effective.
Often when photographers discuss digital cameras, they say things like, “This 36 megapixel a68MkII has three times the resolution of that 12 MP D8765.” I think that is the wrong way to look at resolution, and that it leads photographers to make bad decisions.
It all comes down to dimensional analysis.
Back in the film days, when we used the Air Force test targets, we measured resolution in line pairs per millimeter (lp/mm). In the digital era, we occasionally still use that metric, but more often talk about the spatial frequency at which the modulation transfer function (MTF) falls below some value, say, 0.5. We call that metric MTF50, and its units are cycles per pixel width/height/spacing (cy/px), or cycles per picture height (cy/ph), or occasionally (and I don’t like this notation) line pairs, or lines, per pixel or per picture height.
In all cases there is a dimensionless number divided by a distance. In dimensional analysis, you’d say that the units of resolution are “one over distance”, or 1/L.
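To make the units concrete, here is a minimal sketch of the conversions in Python. The pixel pitch, pixel count, and MTF50 figures are illustrative assumptions, not measurements from any particular camera:

```python
# MTF50 expressed three ways; each is a dimensionless count of cycles
# divided by a distance, i.e. dimensions of 1/L.
mtf50_cy_per_px = 0.3              # cycles per pixel pitch (illustrative)
pixels_per_picture_height = 4000   # illustrative sensor height in pixels
pixel_pitch_mm = 0.006             # 6 micron pitch (illustrative)

# Cycles per picture height: scale by the number of pixels in the height.
print(mtf50_cy_per_px * pixels_per_picture_height)   # ~1200 cy/ph

# Cycles per mm on the sensor (numerically the same as lp/mm).
print(mtf50_cy_per_px / pixel_pitch_mm)              # ~50 cy/mm
```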
That means that, all else equal, if you had a 12 megapixel sensor of a given size, shape, and technology, you’d need 48 megapixels to double the resolution, not 24. And even then, your MTF50 measurements in cycles per picture height probably wouldn’t quite double, because the lens’s own sharpness losses would hold them back.
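A quick sketch of that square-root relationship, using the 12 MP starting point from the paragraph above:

```python
import math

def linear_resolution_ratio(mp_new, mp_old):
    # Linear resolution goes as the square root of pixel count,
    # assuming the same sensor size, shape, and technology.
    return math.sqrt(mp_new / mp_old)

def megapixels_needed(mp_old, ratio):
    # Pixel count needed to multiply linear resolution by `ratio`.
    return mp_old * ratio ** 2

print(megapixels_needed(12, 2))         # 48.0 -- not 24
print(linear_resolution_ratio(24, 12))  # 1.414... -- only ~41% more
```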
At this point, we should consider whether we’ve been wrong all these years in measuring resolution in one-over-distance dimensions. After all, “one over distance squared”, or 1/L^2, dimensions are used in many places in science and engineering. An example is “areal density”, defined as the number of bits or bytes per square centimeter or square inch on a spinning magnetic disk. This is useful because it gives a rough idea of the amount of storage available in similarly-sized drives. It is not a good measure of transfer speed, which tends to go as the linear density, the number of bits or bytes per millimeter along a track.
One reason to think that one over length is the right dimension for resolution is that the studies of human vision have all been done in terms of cycles per unit of visual angle, which translates directly to cycles per unit of distance once the viewing distance is established.
Another is that all our tools for measuring and analyzing resolution — MTF, SQF — are based on linear measure.
Yet another is that photographers talk about lens magnification in terms of focal length, not focal length squared, which would be a natural metric if resolvable features per unit area were what’s important.
What’s the harm in discussing resolution in one-over-length-squared terms? To my mind, the biggest harm is that it makes the differences between sensor resolutions appear larger than they really are, and can therefore drive purchasing decisions in directions that are not in the buyer’s best interest.
The cynic in me thinks that making the differences between sensors appear larger than they really are is decidedly in the interest of the seller, and that this may be the most important reason why we talk so much in terms of pixel count instead. The camera manufacturers announce pixel counts, not picture height/length/diagonal in pixels. We photographers fasten on to the pixel counts and use them as the main way we talk about sensor resolution. Do that long enough, and it’s a small step to thinking that resolution is proportional to pixel count, not to the square root of pixel count.
The next step is the worst one. That’s thinking that a 36 MP camera has half again as much resolution as a 24 MP one, and that therefore I gotta have it, when, in reality, it only offers about a 22% increase in resolution, and therefore I can do fine without it.
Just today, Sony announced a 42 MP replacement for a 36 MP camera. That’s only an 8% increase in resolution, something you’ll be hard-pressed to see in real pictures. The camera offers many other improvements over the current one, all of which may be excellent reasons to buy the new one. However, the increased resolution is a lousy reason to buy the new camera.
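For the record, here is the arithmetic behind those two percentages:

```python
import math

# Linear resolution gain is the square root of the pixel-count ratio.
for mp_old, mp_new in [(24, 36), (36, 42)]:
    gain = math.sqrt(mp_new / mp_old) - 1
    print(f"{mp_old} MP -> {mp_new} MP: {gain:.0%} more linear resolution")

# 24 MP -> 36 MP: 22% more linear resolution
# 36 MP -> 42 MP: 8% more linear resolution
```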
Don’t get me wrong. I have in the past been a champion of small pixels. I continue in that camp. I want the pixel counts to go up. Way up. But, because of that pesky square root in the conversion of pixel count to resolution, the numbers need to change a lot to make much of a difference.
To those who say that total pixel count is the measure of resolution, I say: if you’re going to be that way, at least be consistent. When you buy binoculars, refer to the ones we call 7×50 today as “forty-nine by fifty” binocs. When you talk about zoom ratios, think areas, not focal lengths; a zoom with a minimum focal length of 50mm and a maximum of 300mm should be called a “thirty-six to one” zoom. Same thing when you compare two lenses of different focal lengths: by areal magnification, a 280mm lens is twice as big as a 200mm lens.
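And if you want to see that reductio worked out, the arithmetic is trivial:

```python
# Applying the "areal" convention consistently (tongue firmly in cheek):
print(7 ** 2)            # 49    -- a 7x50 binocular becomes "49x50"
print((300 / 50) ** 2)   # 36.0  -- a 50-300mm zoom becomes "36 to 1"
print((280 / 200) ** 2)  # ~1.96 -- a 280mm lens is "twice" a 200mm lens
```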
If you don’t like looking at the world that way — as I don’t — then please stop saying a 24 MP camera has twice the resolution of a 12 MP camera.
Jean Pierre says
Hi Jim, well done. But it all depends on which lens I put on. And is there a lens that can resolve 42 MP or 50 MP (Canon 5DSr)? …
And which raw-converter software can correctly demosaic these RAW files? …
A high-resolution sensor alone won’t do it if the lens and software are not up to date!
Jim says
Jean Pierre,
There are many, many lenses that can take advantage of 42 or 50 MP. In fact, there are many that could take advantage of 420 or 500 MP, at least in the center.
On the subject of demosaicing files from high-res cameras: if the lens resolution stays constant, the higher the resolution of the sensor, the easier it is to demosaic the image. When the resolution of the sensor gets so high that the MTF of the lens is down in the noise while the MTF of the sensor is still high, just about any demosaicing algorithm will work fine.
Jim
Rex Naden says
Excellent article Jim,
Rex
Jim says
Thanks, Rex. Best of luck on your astro workshop.
Diego says
This blog is pure gold, even if I don’t understand a lot of the technical things I read. It’s a pleasure to read. The only annoying thing is that sometimes, when browsing through the dates on the calendar, I get a white page that says “Your access to this site has been limited”, something related to your hosting provider, I think. Anyway, great work Jim.
Diego. Italy.
Jim says
Diego, I have from time to time been the target of denial of service attacks, so I have instituted throttling and, in some cases, blocking for requests that come more often than seems reasonable for humans. Sorry they seem to have mistaken you for a machine. If you go slower, you’ll be OK.
Jim