In the previous post, I looked at landscape images from the Fujifilm 110 mm f/2 on the GFX 100 and the Zeiss Otus 85 mm f/1.4 on the a7RIV. The crops were magnified to about 200% using Lightroom’s export feature. The Fuji images looked much better, even at f/11 on the Fuji lens and f/8 on the Zeiss one.
Note: these results may be unequally affected by either or both of the following:
- Differences in the way that Lightroom sharpens raw files from the two cameras
- Anomalies in the Lightroom export resampling chain.
Please look at this post to see what happens with processing that avoids those two things.
What if we use a more sophisticated upsizing method than Lightroom export? If that’s interesting to you, read on.
I upsampled crops from the f/4 Sony/Zeiss file and the f/5.6 Fuji/Fuji file to the equivalent of about 20,000 pixels high using Topaz GigaPixel AI with the default settings. Here they are at 100% magnification:
If anything, the Sony image looks slightly sharper! That’s because both images were slightly undersharpened to start out with, and GigaPixel AI seems to sharpen more the more you ask it to magnify. To me, the result is that there’s not much to choose between these images.
I also printed crops of the images on an Epson P800 using Epson Legacy Baryta paper, at sizes corresponding to the following full-frame image heights:
- 15 inches
- 24 inches
- 30 inches
- 36 inches
When I inspected the prints from about 12 inches away, the 15- and 24-inch prints looked the same, and the Sony was slightly sharper in the 30- and 36-inch prints.
We saw in earlier tests of GigaPixel AI that it couldn’t create detail that hadn’t been captured, but that it does an excellent job of preserving detail. I am frankly surprised that it performed as well as it did with the Sony image in this test.
I’ve thought some more about the above results. The raw images received the same amount of sharpening on a per-pixel level in Lightroom: amount 20, radius 1, detail 0. If we’re trying to compensate for the light-sensitive area of the pixels, that’s appropriate. The pixel pitch of the two cameras is the same, the pixel design appears to be very similar, and so do the microlenses. But if we’re compensating for depth of field and for diffraction, the radius for the larger sensor should be about 1.4 times that of the smaller one. As it is, viewed in relation to the picture height, the Sony image is getting about 1.4 times the sharpening that we’re giving the Fuji one.
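The 1.4× figure follows from the ratio of the two sensor heights, which come from the published specifications (33 mm for the 44 × 33 mm GFX 100 sensor, 24 mm for the 36 × 24 mm a7RIV sensor). A quick back-of-the-envelope in Python:

```python
# Equal-print-sharpening arithmetic: if both files get radius 1 in
# Lightroom, then relative to picture height the smaller sensor's
# image is sharpened more by the ratio of the sensor heights.
GFX_SENSOR_HEIGHT_MM = 33.0    # Fujifilm GFX 100, 44 x 33 mm sensor
A7RIV_SENSOR_HEIGHT_MM = 24.0  # Sony a7RIV, 36 x 24 mm sensor

radius_ratio = GFX_SENSOR_HEIGHT_MM / A7RIV_SENSOR_HEIGHT_MM
print(f"radius ratio: {radius_ratio:.3f}")  # 1.375, roughly 1.4

# To equalize sharpening relative to picture height, the GFX file
# would need its radius scaled up by that factor.
equalized_gfx_radius = 1.0 * radius_ratio
print(f"equalized GFX radius: {equalized_gfx_radius:.3f}")
```

In other words, at radius 1 on both files, the Sony image is effectively getting about 1.375 times the sharpening of the Fuji image in picture-height terms.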
As I said above, GigaPixel AI seems to sharpen more the more it upsamples. That also would benefit the Sony image, since both images were somewhat undersharpened. This is probably worth more experimentation using upsampling algorithms that are less of a black box than the Topaz software. Because the software is essentially inventing information, its efficacy will depend on the content of the image. As we saw with the Siemens Star target tests in an earlier post, at some point GigaPixel AI, quite properly in my opinion, gives up and stops trying to make up detail. So before drawing any general conclusions, we should look at disparate example crops.
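As one possible less-black-box baseline, a conventional, fully documented resampler such as Lanczos (here via Pillow) applies no content-dependent sharpening and cannot invent detail, so any sharpening stays under your control. This is a sketch, not my actual workflow; the file name and target height are placeholders.

```python
# Lanczos upsampling via Pillow as a transparent alternative to an
# AI upsampler: a fixed, well-understood windowed-sinc filter.
from PIL import Image

def upsample_lanczos(path: str, target_height: int) -> Image.Image:
    """Upsample an image to target_height pixels, preserving aspect ratio."""
    im = Image.open(path)
    scale = target_height / im.height
    new_size = (round(im.width * scale), target_height)
    return im.resize(new_size, Image.LANCZOS)

# Hypothetical usage: upsample a crop to the ~20,000-pixel-high
# equivalent used in the comparison above.
# big = upsample_lanczos("sony_f4_crop.tif", 20_000)
# big.save("sony_f4_crop_20k.tif")
```

Comparing prints from this against the GigaPixel AI output would separate the upsampler’s contribution from its built-in sharpening.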
Finally (at least for now), I earlier observed that I thought the biggest advantage of the GFX 100 over the GFX 50x was not increased sharpness, but decreased aliasing. No aliasing in the crops above is immediately obvious to me, but I’m sure that there is aliasing there, and one of the effects of aliasing is to turn high-spatial-frequency, hard-to-see detail into lower-frequency, easier-to-see (but wrong) detail. That may be part of the reason for the Sony image’s surprising sharpness.
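The frequency-folding mechanism described above can be shown with a one-dimensional toy example: a sinusoid above the Nyquist frequency, once sampled, is indistinguishable from a lower-frequency one. The sample rate and frequencies here are arbitrary illustrative numbers, not camera measurements.

```python
# Aliasing in one dimension: a 90-cycle signal sampled at 100
# samples per unit folds down to a 10-cycle signal (with inverted
# phase), turning fine detail into coarser, wrong detail.
import math

sample_rate = 100.0  # samples per unit length; Nyquist = 50 cycles
true_freq = 90.0     # above Nyquist, so it cannot be represented
alias_freq = abs(sample_rate - true_freq)  # folds down to 10 cycles

n_samples = 200
samples = [math.sin(2 * math.pi * true_freq * n / sample_rate)
           for n in range(n_samples)]
aliased = [math.sin(2 * math.pi * alias_freq * n / sample_rate)
           for n in range(n_samples)]

# At the sample points, the 90-cycle signal equals the negated
# 10-cycle signal exactly: sin(2*pi*n - x) == -sin(x).
max_diff = max(abs(s + a) for s, a in zip(samples, aliased))
print(f"max difference between folded signals: {max_diff:.2e}")
```

A demosaicked sensor without an optical low-pass filter does the two-dimensional version of this to detail finer than its pixel pitch.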