A reader — thanks, Chris! — made a comment yesterday that I think raises some points worthy of attention.
So what needs to change in the camera, presuming the sharpness is in the OTUS, to get it out?
Physics says f4 and that’s not changing soon, unless we hear from the Large Hadron Collider.
So, setting aside atmospherics, over which again we have no control, and given that we can provide support pretty well, that leaves nailing focus and handling in-camera-generated vibration?
Without support in real world shooting we need stabilisation in camera.
Is there any reason, other than marketing, to go to 50MP in a “35mm” camera without fixing those two/three?
I believe there is.
I’ve done simulation studies that show that it will take on the order of half a billion Bayer-CFA’d pixels to get most of what’s in the Otus into the capture. That assumes that the center sharpness is available across the whole frame, which isn’t a good assumption, but the way that camera sensors are currently constructed, with constant pitch, means that that knowledge wouldn’t help anyway.
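To put those pixel counts in perspective, here’s a quick back-of-the-envelope calculation. The sensor dimensions and the assumption of square pixels at constant pitch are mine, not from the simulation studies themselves:

```python
# Pixel pitch implied by a given resolution on a full-frame
# (36 mm x 24 mm) sensor with square pixels at constant pitch.

def pitch_um(megapixels, width_mm=36.0, height_mm=24.0):
    """Pixel pitch in micrometers for a given sensor resolution."""
    area_um2 = (width_mm * 1000.0) * (height_mm * 1000.0)
    return (area_um2 / (megapixels * 1e6)) ** 0.5

for mp in (36, 50, 500):
    print(f"{mp} MP -> {pitch_um(mp):.2f} um pitch")
```

That works out to roughly 4.9 µm at 36 MP (D810-class), 4.2 µm at 50 MP, and about 1.3 µm at the half-billion-pixel mark, which shows just how far current sensors are from out-resolving a lens like the Otus.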
More pixels will make it easier to focus with live view, too. The images that I’m focusing on with the D810 are so badly aliased that it’s difficult to tell which of two images is sharper. If the lens weren’t so far ahead of the sensor in its MTF curve, that wouldn’t be the case.
And, more pixels will produce smoother images with less aliasing, even if they’re not sharper.
So, I’m all for more pixels, even if we don’t get a lot more sharpness.
Chris goes on to say:
And finally, a question, presumably with say a Phase One IQ280 the f stop is slightly bigger, 5.2 pitch against 4.88, but it’s really close, so why, having spent that amount, don’t we get complaints about final sharpness and focus issues, or is the repeatability at the level you are measuring just as bad but no one looks?
There’s a big difference between what you can measure and what you can see. In my experience, there is little practical difference between the center sharpness of the Otus 55 and the Zony 55, even though I can repeatably measure differences between the two lenses. To give you an idea of what I’m talking about, let me show you what a sharp slanted edge looks like, and let you compare it to a not-so-sharp one.
Note: these edges were demosaiced without interpolation, using a technique that Jack Hogan suggested to me: white balancing (or, as I prefer, equalizing) the four raw layers. It’s a technique I’ve used in the past on deep-IR images, and it works on the Imatest target that I’m using because it is monochromatic.
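For readers who want to see the idea in code, here’s a minimal sketch of that equalization approach. The function name and the RGGB layout are my assumptions, not anything from Jack Hogan or from a raw-processing library; the point is just that each of the four CFA positions gets scaled to a common mean and the mosaic is then treated as a single monochrome image, with no interpolation at all:

```python
import numpy as np

def equalize_cfa(raw):
    """White-balance the four Bayer layers (RGGB layout assumed) so their
    means match, then return the mosaic as one monochrome image.
    Only sensible for monochromatic subjects, as noted above."""
    out = raw.astype(np.float64)
    target = out.mean()
    # The four CFA positions: R, G1, G2, B row/column offsets.
    for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        plane = out[dy::2, dx::2]        # a view into out
        plane *= target / plane.mean()   # equalize this layer in place
    return out
```

Because no neighboring pixels are averaged together, the result preserves the full per-pixel detail of the capture, which is exactly why the aliasing shows up so nakedly in the crops below.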
Here’s the plot of a reasonably sharp edge:
And here’s one that’s quite a bit worse:
Here’s the better edge, blown up a lot:
And here’s the not-so-good one:
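For the curious, here’s a simplified sketch of what a slanted-edge calculation does with images like these. Real tools (Imatest, sfrmat) go on to differentiate the oversampled edge profile and FFT it into an MTF curve; this toy version stops at the 10–90% rise distance, which is enough to rank two edges. All the implementation details here are mine:

```python
import numpy as np

def edge_rise_distance(img, bins_per_pixel=4):
    """10-90% rise distance, in pixels, of a dark-to-bright, near-vertical
    slanted edge that lies entirely inside the frame."""
    img = np.asarray(img, dtype=np.float64)
    rows, cols = img.shape
    lo, hi = img.min(), img.max()
    mid = 0.5 * (lo + hi)

    # Sub-pixel edge location in each row, by linear interpolation.
    centers = []
    for r in range(rows):
        row = img[r]
        i = int(np.argmax(row >= mid))
        frac = (mid - row[i - 1]) / (row[i] - row[i - 1])
        centers.append((i - 1) + frac)

    # Fit a straight line to the edge so all rows share one geometry.
    slope, intercept = np.polyfit(np.arange(rows), centers, 1)

    # Project every pixel onto the edge-normal axis and bin the values.
    # The slant is what turns this into an oversampled edge profile.
    d = np.arange(cols)[None, :] - (slope * np.arange(rows)[:, None] + intercept)
    idx = np.round(d.ravel() * bins_per_pixel).astype(int)
    idx -= idx.min()
    esf = np.bincount(idx, weights=img.ravel()) / np.bincount(idx)

    i10 = int(np.argmax(esf >= lo + 0.1 * (hi - lo)))
    i90 = int(np.argmax(esf >= lo + 0.9 * (hi - lo)))
    return (i90 - i10) / bins_per_pixel
```

Run on a sharp edge and a soft one, the rise distances separate cleanly even when the magnified crops look similar to the eye, which is the point of the comparison above.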
Without the slanted edge calculations, I’d have a hard time saying that one was meaningfully sharper than the other, although you can see it in the focusing targets. Here’s the better one:
And the not-so-good one:
Bet you’ve not seen aliasing like that from a Bayer CFA without any false color. That’s the beauty of the white balance demosaicing technique. Too bad it only works for monochromatic subjects.
Also, let me point out here that, absent the false color, the images presented above faithfully replicate what I saw when I was focusing using the D810’s magnified live view. There’s nothing wrong with the D810 live view per se, it’s just that, without peaking, it’s harder to use precisely than that on the Sony a7R.
So, to get back to why IQ180 users aren’t complaining about inability to achieve critical focus, my guess is that they’re getting focus that’s accurate enough for sharp-appearing images, which is a lower bar than the focus necessary to get sharp-measuring images.
But to back away from this a bit, the advent of high-res sensors and marvelous new lenses like the Otus 55 and the exotic German tech camera lenses used by landscapers on the IQ180 have given photographers such exquisite tools that a lot of our focus infrastructure is falling apart.
Except for wide lenses, rangefinders can’t provide sufficient accuracy. Even SLR phase-detection autofocus won’t cut it if you want things consistently really sharp, and neither will contrast-detection AF, though improved algorithms may help the latter in the future. And the mismatch between the ground glass distance and the sensor distance is always going to be a potential problem in SLR cameras.
Depth of field tables are obsolete, too, as the acceptable circle of confusion gets smaller and smaller.
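To see why the tables break down, here is the standard thin-lens depth-of-field arithmetic with two circle-of-confusion criteria: the traditional full-frame 0.030 mm, and a pixel-level 0.010 mm (roughly two D810 pixel pitches, a figure I’m choosing for illustration, not one from the post):

```python
def dof_mm(f_mm, N, s_mm, coc_mm):
    """Near and far limits of depth of field, thin-lens approximation.
    f_mm: focal length, N: f-number, s_mm: focus distance, coc_mm: CoC."""
    H = f_mm * f_mm / (N * coc_mm) + f_mm               # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float("inf")
    return near, far

# A 55 mm lens at f/4, focused at 3 m.
for coc in (0.030, 0.010):
    near, far = dof_mm(55, 4, 3000, coc)
    print(f"CoC {coc:.3f} mm: DoF {(far - near):.0f} mm")
```

At these illustrative settings the tabulated CoC gives roughly 0.71 m of depth of field, while the pixel-level criterion gives only about 0.23 m, so a table built on the old CoC overstates the usable zone by a factor of three.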
Lens tilts get more problematic, too: as the definition of “sufficiently flat” tightens, finding a subject plane that qualifies gets harder.
What to do? I’ll have some ideas in the next post.