The Sony a7R IV has 16-shot pixel shift technology. Fuji has said that it will bring that capability to the GFX 100 in a future firmware update. These events have rekindled the fires of discussion of that fifteen- or twenty-year-old scheme, specifically about how much it increases the resolution of the system. I am about to wax Clintonesque on the subject: it all depends on what the meaning of resolution is.
When I made a similar statement on a DPR board, I got a response that outwardly seemed reasonable, but didn’t survive close examination:
I’ll point you toward the dictionary for my definition of resolution.
Let’s give that a try:
Of the options, it seems only the fifth is relevant. But it has two apparently conflicting parts. The first, which talks about the smallest interval, seems to refer to the object field. The second seems to refer to the image field. And it's vague. It is also completely nonquantitative. Could we use that definition to unequivocally say that one camera/lens system has more resolution than another? No. Could we use it to measure the resolution of such a system? Again, no.
One measure of resolution in digital photography is simply the number of pixels per picture height or picture diagonal. It has the virtue of being simple to calculate, but in isolation doesn’t say anything precise about the amount of detail present in an image from a given camera. As an aside, the number of pixels in the sensor is not — at least in my book — a measure of resolution, but of something related.
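To make the distinction concrete, here is a minimal sketch. The sensor dimensions are hypothetical (a roughly 60 MP full-frame chip, chosen for illustration only); the point is that pixels per picture height and total pixel count are different numbers measuring different things.

```python
# Hypothetical sensor for illustration: 9504 x 6336 photosites.
width_px, height_px = 9504, 6336

# "Resolution" as pixels per picture height is just the vertical count.
pixels_per_picture_height = height_px  # 6336

# The related -- but distinct -- quantity: total pixel count in megapixels.
pixel_count_mp = width_px * height_px / 1e6  # about 60.2 MP
```

Two sensors can share one of these numbers while differing in the other, which is why pixel count alone is not a resolution measure.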
The modulation transfer function of a system has the potential of being a precise, relevant measure of the system’s ability to record detail, but it is not a scalar.
Let’s look at the MTF curves for ideal (perfectly diffraction-limited) lenses on a monochromatic sensor with a 3.76 micrometer pixel pitch and a fill factor of 100%.
Now let’s look at the MTF of a 16-shot pixel shift image using the same sensor. This amounts to the MTF of a monochromatic sensor with a 1.88 micrometer pixel pitch and a 400% fill factor.
Those look very similar. In fact, they are virtually identical, and with infinite precision, they would be exactly identical. So if we’re looking at MTF in cycles per millimeter as our definition of resolution, 16-shot pixel shift buys us no resolution at all. Plotting the above image in cycles per picture height at the same scale looks similar, since the sensor doesn’t change physical size because of pixel shifting:
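A short sketch shows why the two pre-sampling MTFs must be identical in cycles per millimeter. The wavelength and f-number below are illustrative assumptions, not taken from the plots above; the key fact is that the pixel aperture is 3.76 µm in both cases, since a 1.88 µm pitch at 400% fill factor means an aperture twice the pitch.

```python
import numpy as np

# Illustrative assumptions: 550 nm light, ideal f/5.6 lens.
wavelength_mm = 550e-6
f_number = 5.6
f_cutoff = 1 / (wavelength_mm * f_number)  # diffraction cutoff, cycles/mm

def diffraction_mtf(f):
    """MTF of an ideal diffraction-limited lens with a circular aperture."""
    x = np.clip(f / f_cutoff, 0.0, 1.0)
    return (2 / np.pi) * (np.arccos(x) - x * np.sqrt(1 - x**2))

def aperture_mtf(f, aperture_mm):
    """MTF of a square pixel aperture; np.sinc is sin(pi x)/(pi x)."""
    return np.abs(np.sinc(f * aperture_mm))

f = np.linspace(0, 400, 401)  # spatial frequency, cycles/mm

aperture_mm = 0.00376  # 3.76 um aperture width in BOTH cases
# Single shot: 3.76 um pitch, 100% fill factor.
single = diffraction_mtf(f) * aperture_mtf(f, aperture_mm)
# 16-shot: 1.88 um pitch, 400% fill factor -- the aperture is unchanged.
shifted = diffraction_mtf(f) * aperture_mtf(f, aperture_mm)

assert np.allclose(single, shifted)  # identical pre-sampling MTF
```

What pixel shifting changes is the sampling frequency, not the lens-plus-aperture MTF, and that is where the next plots come in.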
But there are important differences between the single shot and the 16-shot images. Let’s look at the MTF curves in cycles per pixel. First, a single shot.
And now a 16-shot composite:
There is a good deal more aliasing in the single-shot image.
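The aliasing difference comes from the doubled sampling frequency. Here is a sketch of the frequency folding involved, using the pitches from above and an arbitrary example detail frequency I chose for illustration. Detail above the single-shot Nyquist frequency folds down to a false lower frequency, while the half-pitch sampling of the 16-shot composite records it correctly.

```python
def apparent_frequency(f, pitch_mm):
    """Frequency (cycles/mm) at which a sinusoid of frequency f is
    reproduced after sampling at the given pixel pitch (frequency folding)."""
    fs = 1 / pitch_mm       # sampling frequency, samples/mm
    nyquist = fs / 2
    f_folded = f % fs
    return f_folded if f_folded <= nyquist else fs - f_folded

# Single shot: 3.76 um pitch -> Nyquist ~133 cycles/mm.
# 16-shot:     1.88 um pitch -> Nyquist ~266 cycles/mm.
f_detail = 160.0  # cycles/mm; above single-shot Nyquist, below 16-shot Nyquist

single_shot = apparent_frequency(f_detail, 0.00376)  # aliases to ~106 cycles/mm
sixteen_shot = apparent_frequency(f_detail, 0.00188)  # reproduced at 160 cycles/mm
```

Everything between the old and new Nyquist frequencies moves from the aliased column to the correctly-sampled one, which is exactly what the cycles-per-pixel plots show.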
With a Bayer-CFA sensor, there are additional advantages to pixel-shift shooting, but they are hard to quantify simply.
- The reduced aliasing in 16-shot images will allow more sharpening to be used before aliasing artifacts become objectionable.
- If the demosaicing algorithm used for the single-shot image softens edges, that will not be a problem with the 16-shot image.
- False-color artifacts will be much less of an issue in 16-shot (or even 4-shot) pixel shift images.
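The demosaicing and false-color points can be made concrete with a toy model. In a 4-shot sequence the sensor is moved by one pixel between exposures, so every photosite location is sampled under each filter color; no chroma interpolation is needed. This is a sketch of that bookkeeping, assuming an RGGB Bayer tile.

```python
import numpy as np

# A 2x2 RGGB Bayer tile; the CFA repeats this pattern across the sensor.
bayer = np.array([['R', 'G'],
                  ['G', 'B']])

def cfa_color(row, col, dy=0, dx=0):
    """Filter color seen at photosite (row, col) after shifting the
    sensor by (dy, dx) pixels."""
    return bayer[(row + dy) % 2, (col + dx) % 2]

shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]  # the 4-shot sequence

for row in range(2):
    for col in range(2):
        colors = {cfa_color(row, col, dy, dx) for dy, dx in shifts}
        assert colors == {'R', 'G', 'B'}  # every site sees all three colors
```

Since each location gets a measured red, green, and blue value, the interpolation errors that cause demosaicing softness and false color in a single shot simply do not arise.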
For more on pixel shifting, as well as real camera examples, see here.