A couple of days ago, a poster on the DPR MF board made a remarkable claim in response to speculation about a 100 MP X-series Hasselblad.
One word, diffraction. The X1DII and X1D doesn’t have, none or perhaps very little that I’ve noticed anyway.
As you might expect, I challenged that assertion:
Lenses suffer from diffraction. Sensors don’t, at least under normal photographic conditions.
He came back with:
Actually lens diffraction is affected by pixel size, not sensor size. I looked it up :-).
“Diffraction is related to pixel size, not sensor size. The smaller the pixels the sooner diffraction effects will be noticed. Smaller sensors tend to have smaller pixels is all. A 50mp sensor will be effected [sic] at larger apertures than a 10mp camera of the same sensor dimensions.”
There are things that are mixed up in that, and things that are just flat wrong.
The first conflation is the lack of distinction between the effects of diffraction and the visibility of those effects. The size of the Airy disk is not a function of sensor size, pixel pitch, or pixel aperture. The size of the Airy disk on the sensor is a function of wavelength and f-stop. That’s all. The size of the Airy disk on the print is a function of both those, plus the ratio of sensor size to print size.
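If you want to put numbers on that, here is a small Python sketch (the function names are just for illustration) of the standard first-null Airy diameter, 2.44 times the wavelength times the f-number, at the sensor and as enlarged to a print. Notice that sensor size only enters through the enlargement factor.

```python
# A sketch of how the Airy-disk size scales, assuming 555 nm green light.
# The only place sensor size enters is the enlargement from sensor to print.

def airy_first_null_diameter_um(f_number, wavelength_nm=555.0):
    """First-null diameter of the Airy disk at the sensor, in micrometers."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

def airy_diameter_on_print_um(f_number, enlargement, wavelength_nm=555.0):
    """The same disk after enlarging the capture; enlargement is
    print dimension divided by sensor dimension."""
    return airy_first_null_diameter_um(f_number, wavelength_nm) * enlargement

if __name__ == "__main__":
    for n in (4, 5.6, 8, 11, 16, 22):
        print(f"f/{n}: {airy_first_null_diameter_um(n):.1f} um on the sensor")
```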
The second confusion is between the size of the pixel aperture and the number of pixels on the sensor. Unless we’re talking about multishot regimes like pixel-shift, the largest effective pixel aperture we usually see is close to 100% of a square with sides equal to the pixel pitch. But we used to see apertures that were much smaller than that, and in some cameras, notably the Fujifilm GFX 50S and GFX 50R, we still do.
There is also no clear distinction between the size of the diffraction disk on the sensor, the size of that disk as projected to the final print, and the size of the blur pattern created by the diffraction and the finite pixel aperture.
I’ve done a lot of work on diffraction before, and that work calculated the effects of the diffraction, pixel aperture, and defocus on blur circle size. Sometimes I used fairly rigorous simulations that took into account the phase of the light, and sometimes I used more approximate methods like the ones I’ll be using in this post. I’ll draw upon that work to tackle the issue of how pixel aperture and diffraction interact directly here.
I made some assumptions:
- 555 nanometer incoherent green light
- No lens aberrations
- Pixel aperture is circular, with 100% sensitivity within its diameter
- No Bayer color filter array. The way the effects of that array interact with all of the demosaicing algorithms is too complicated for me to model.
- I’m ignoring phase effects.
- Circular lens aperture
To approximate the diameter of the combined blur circle, I took the square root of the sum of the squares of the pixel aperture diameter and the diameter of a circle that includes 70% of the energy of the diffraction disk. The more conventional approach would have been to use the distance between the first zeros as the diameter of the diffraction disk, but I have found in the past that that overstates the contribution of diffraction.
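Here is a minimal Python sketch of that approximation. It derives the 70% encircled-energy diameter from the textbook formula for an ideal Airy pattern, which is consistent with the assumptions above but may not use exactly the same constant as my original calculations, so treat it as illustrating the structure of the estimate rather than reproducing the plotted curves digit for digit.

```python
# Sketch of the blur-combination approximation described above:
# root-sum-square of the pixel aperture diameter and the diameter of the
# circle containing 70% of the Airy-pattern energy. Ideal lens, 555 nm light.
import math
from scipy.special import j0, j1
from scipy.optimize import brentq

def encircled_energy(x):
    """Fraction of Airy-pattern energy inside reduced radius x = pi*r/(lambda*N)."""
    return 1.0 - j0(x) ** 2 - j1(x) ** 2

# Reduced radius containing 70% of the energy (roughly 0.6 of the first-null radius).
X_70 = brentq(lambda x: encircled_energy(x) - 0.70, 0.1, 3.83)

def diffraction_diameter_um(f_number, wavelength_nm=555.0):
    """Diameter of the circle holding 70% of the Airy-pattern energy, in micrometers."""
    return 2.0 * X_70 * (wavelength_nm / 1000.0) * f_number / math.pi

def combined_blur_um(pixel_aperture_um, f_number, wavelength_nm=555.0):
    """Root-sum-square combination of pixel aperture and diffraction diameters."""
    return math.hypot(pixel_aperture_um, diffraction_diameter_um(f_number, wavelength_nm))

if __name__ == "__main__":
    # Sweep the two pixel apertures discussed below across common f-stops.
    for n in (2.8, 4, 5.6, 8, 11, 16, 22):
        print(f"f/{n}: 5.3 um -> {combined_blur_um(5.3, n):.2f} um, "
              f"3.76 um -> {combined_blur_um(3.76, n):.2f} um")
```

As the f-number grows, the diffraction term swamps the pixel-aperture term, which is the behavior the plot below shows.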
Here’s a plot for two pixel aperture diameters, 5.3 um and 3.76 um.
Excellent lenses often lose significant sharpness to diffraction on axis by f/4 or f/5.6. Most decent lenses are getting to be diffraction-limited by about f/11. On the left hand side of the graph, there is little diffraction and the combined blur circle is determined by the pixel aperture. By the time you get to the far right, the diameter of the combined blur circle is mostly the result of diffraction. I ran the calculations out to f/90, where the curves are almost right on top of each other, but I’m not showing it because the scale required makes it hard to see the differences at apertures that you’d be more likely to use.
- The format of the sensor — MFT, APS-C, full frame, 33×44 mm, or whatever you want — makes no difference.
- The focal length of the lens makes no difference.
- The subject distance makes no difference, except as it affects effective aperture.
The pixel pitch makes no difference, either, at least not explicitly. It is true that finer pitches usually go along with smaller pixel apertures, so in practice the two are related. However, the pixel apertures of both the Fujifilm GFX 50x and the GFX 100 are about 3.76 um in diameter, even though the pixel pitch of the GFX 100 is 3.76 um and that of the GFX 50x is about 5.3 um. It is possible that the Hasselblad X1D has more conventional microlenses than the GFX 50x, in which case the 5.3 um line might be more appropriate for the X1D.
The question of the visibility of the diffraction has two answers, depending on how you ask the question. If you ask “when does diffraction dominate the sharpness of a well-focused subject?”, then the size of the pixel aperture is important (not the pixel pitch; that affects aliasing). When a lens is stopped down to f/11, f/16, or f/22 and focused accurately, it is likely that the lens aberrations are unimportant compared to the blur induced by the Airy disk. The size of the blur at the sensor is, neglecting phase effects, equal to the size of the convolution of the effective pixel aperture and the projected Airy disk. If the Airy disk is much larger than the pixel aperture, then virtually all the blur you see in the image will be due to the diffraction.
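As a quick back-of-the-envelope check, using the conventional first-null diameter rather than the 70% energy diameter above, here is how much larger the Airy disk is than the two pixel apertures discussed in this post at those f-stops:

```python
# First-null Airy diameter (2.44 * lambda * N, 555 nm) versus pixel aperture diameter.
for n in (11, 16, 22):
    airy_um = 2.44 * 0.555 * n
    for pix_um in (3.76, 5.3):
        print(f"f/{n}: Airy disk {airy_um:.1f} um is "
              f"{airy_um / pix_um:.1f}x the {pix_um} um pixel aperture")
```

At f/16 and f/22 the Airy disk is several times the size of either pixel aperture, which is why diffraction dominates there.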
The graph above is a fairly good way to approach the above question. When the curves get close together, diffraction is dominating. When they are far apart, the pixel aperture blur is dominating.
The other way to phrase the question is “when does diffraction begin to affect the sharpness of a well-focused subject?” If you phrase it that way, the answer is: when the size of the Airy disk becomes non-negligible compared to the other blur sources. There are two main other blur sources to consider: the lens aberrations and the pixel aperture. The point at which stopping down the lens stops producing sharper images, because the gain from reduced aberrations is outweighed by the added diffraction, depends on the lens, not on the pixel aperture.
The next graph is an attempt to deal with the second question.
What is plotted is the ratio of the combined blur circle diameter to that of the no-diffraction case, less one. The horizontal line that crosses the y-axis at 0.5 marks the point where diffraction has made the blur circle grow by 50%. That point occurs at f/11 for the sensor with a 3.76 um pixel aperture, and at f/16 for the sensor with a 5.3 um pixel aperture. So, all else equal, the effects of diffraction are more visible on the sensor with the smaller pixel aperture.
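In symbols, using the same quadrature approximation as before, the plotted quantity is sqrt(a^2 + d(N)^2) / a - 1, where a is the pixel aperture diameter and d(N) is the diffraction diameter at f-number N; with no diffraction, the combined blur circle is just the pixel aperture.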
If the X1D has a pixel aperture of 5.3 um, you’re going to see pronounced diffraction effects at f/16. I can’t imagine that the X1D has a pixel aperture larger than 5.3 um, so that’s the best case for an assertion that you don’t have to worry about diffraction if you’re using an X1D. If the X1D pixel aperture is more like that of the GFX 50S and GFX 50R, then you’ll see the same relative amount of degradation at f/11.
But the bottom line for the X1D diffraction claim that led off this post is that increasing the resolution of the sensor, with the same ratio of pitch to pixel aperture, won’t make the effects of diffraction any worse in the capture at the same print size. In fact, it will make them better. Look at the top graph above. Consider the two curves as applying to two cameras with 5.3 and 3.76 um pitches, with pixel apertures equal to the pitch. Note that the combined blur circles are always smaller for the finer-pitch sensor, even though the differences are less significant at narrower apertures.