The new GFX 100 has on-sensor phase detection autofocus (OSPDAF, or, for the rest of this post, since we’re not talking about SLRs, just PDAF). Bill Claff has already determined that the camera uses 18-row spacing. This link will take you to the evidence of that.
The camera is not yet shipping to the general public, but there are already questions and comments about how the PDAF will affect images. These stem from the PDAF-induced image degradation occasionally seen in the Sony a7x and a9 cameras, and the Nikon Z6 and Z7 ones. In the Sony cameras, the image degradation is called PDAF striping, and in the Nikon cameras, it’s referred to as PDAF banding.
The first fundamental misconception is that PDAF striping and PDAF banding are the same thing. They are not. PDAF striping is caused by reflections from the PDAF pixels that manifest themselves as regular bright stripes spaced at the same distance as the PDAF row spacing. The circumstances that cause those stripes require lens flare, but the stripes don’t appear in all cases where there is such flare. In fact, for most photographers, they’re not a problem at all.
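Because the stripes are brighter rows at a fixed pitch, their spacing is easy to measure in a raw file. Here’s a toy Python sketch — synthetic data, not camera code — that finds the pitch from per-row means:

```python
# Toy illustration: detect the row spacing of PDAF striping by
# averaging each row and seeing which rows stand out. The image,
# threshold, and stripe brightness are all synthetic assumptions.

def row_means(img):
    return [sum(r) / len(r) for r in img]

def stripe_spacing(img, thresh=1.05):
    means = row_means(img)
    base = sum(means) / len(means)
    # rows noticeably brighter than the frame average
    bright = [i for i, m in enumerate(means) if m > base * thresh]
    gaps = {b - a for a, b in zip(bright, bright[1:])}
    # a single repeated gap means a regular stripe pattern
    return gaps.pop() if len(gaps) == 1 else None

# Synthetic 72x16 "flare" frame with bright stripes every 18 rows.
img = [[110.0 if r % 18 == 0 else 100.0 for _ in range(16)]
       for r in range(72)]
print(stripe_spacing(img))  # 18
```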
PDAF banding is different. It presents as dark stripes in some – but by no means all – scenes with bright elements and deep shadows. Usually, you must push the shadows to see the banding. PDAF banding is the result of Nikon’s (IMHO, misguided) attempt to mitigate PDAF striping. The cure is worse than the disease. Banding happens more often than striping, since the Nikon engineers couldn’t tell precisely where striping would be a problem and decided to err on the side of what they probably thought was caution.
The second misunderstanding is that PDAF striping and banding are unfixable, or at least require a lot of work in post to handle. That is most certainly not the case in either the Sony alpha series cameras or the Nikon Z ones. Raw Therapee has a tool that can eliminate either with minimal collateral damage.
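To see why the fix is cheap, consider that striping is just a uniform brightening of known rows. As a rough illustration of the kind of correction such a tool can apply — this is not Raw Therapee’s actual algorithm, and the spacing is an assumption — one can rescale each affected row so its mean matches that of its neighbors:

```python
# Sketch: remove uniform row brightening by matching each PDAF row's
# mean to the mean of the rows immediately above and below it.
# Not Raw Therapee's algorithm; spacing and data are assumptions.

def fix_stripes(img, spacing=18):
    out = [list(r) for r in img]
    for r in range(spacing, len(img) - 1, spacing):
        neighbors = img[r - 1] + img[r + 1]
        target = sum(neighbors) / len(neighbors)
        mean = sum(img[r]) / len(img[r])
        gain = target / mean if mean else 1.0
        out[r] = [px * gain for px in out[r]]
    return out

# Synthetic frame: stripes 10% brighter at rows 18, 36, 54.
img = [[110.0 if r % 18 == 0 and r > 0 else 100.0 for _ in range(16)]
       for r in range(72)]
fixed = fix_stripes(img)  # row 18's mean drops from 110 back to 100
```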
Number three is that the effects are nothing to worry about unless you have no skill in selecting your exposure. I don’t know – yet – about the GFX 100, but in the case of the Sony and Nikon cameras we’ve been talking about, if the planets align themselves just wrong, you can run into problems with modest processing moves. One of the things that people talk up with medium format cameras is the increased dynamic range they afford. These effects, especially PDAF banding, can, unless mitigated in post, considerably reduce the practical dynamic range of the camera.
The fourth confusion is that, since the GFX 100 has OSPDAF, and at least some kinds of measurable image quality effects resulting therefrom, all cameras based on the Sony IMX 461 must therefore suffer similar effects. It ain’t necessarily so. The PDAF pixels are created during the “topping” phase of sensor fabrication, at the same time as the color filter array (CFA) is applied. They are not a property of the underlying silicon. In order to take advantage of OSPDAF, the underlying silicon must be prepped for it, but if the topping has no OSPDAF pixels, but rather a plain old Bayer CFA, there won’t be any weird reflections and no striping, and thus no need for the software that causes the banding. So what’s true for the GFX 100 isn’t necessarily true for the X2D-100, should that camera ever hit the streets.
One last misunderstanding: that the PDAF pixels visibly soften the image. It is true that there is some interpolation across columns in the rows with PDAF pixels. But those rows are fairly widely separated (in the GFX 100, PDAF pixels occur in 18-row spacing), and most columns don’t have PDAF pixels. In addition, the PDAF pixels are usually not in the green channels, which carry most of the sharpness information. The blue channel carries the least, and many cameras with OSPDAF steal from the blue channels for that reason. I’ve certainly seen cameras where you could tell where the PDAF pixels were by looking at noise, but I’ve never seen one where sharpness loss gave away the game, even with demanding targets and lab conditions.
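The interpolation in question can be as simple as replacing each PDAF pixel with the average of its nearest same-color horizontal neighbors, which in a Bayer mosaic sit two columns away. Here’s a hypothetical sketch — the positions and the method are my assumptions for illustration, not Fujifilm’s actual processing:

```python
# Hypothetical sketch of across-column interpolation in a PDAF row:
# each PDAF pixel is replaced by the mean of its same-color Bayer
# neighbors, which are two columns to either side.

def fill_pdaf_row(row, pdaf_cols):
    out = list(row)
    for c in pdaf_cols:
        left, right = out[c - 2], out[c + 2]  # same-color neighbors
        out[c] = (left + right) / 2
    return out

# The 99 at index 4 stands in for a masked PDAF pixel.
row = [10, 20, 12, 22, 99, 24, 16, 26, 18, 28]
print(fill_pdaf_row(row, [4]))  # index 4 becomes (12 + 16) / 2 = 14.0
```

Since only one pixel in each widely spaced row is synthesized this way, the sharpness cost is small, which matches the observation above.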
I’ll know a lot more when I have the camera in my hands, and when I do, you will too.
The photos in this post have nothing to do with the GFX 100, but I made them this morning using captures from my GFX 50s and considerable post-production.
So, just asking to see if I have this straight, Chris Dodkin on DPR posted a 4.35-stop pushed image from the GFX 100 which shows some horizontal pattern “noise” in the shadows. There are periodic darker lines which he says are at 18-pixel intervals. Are we to understand that this is “banding”, and results from Fuji’s attempt to mitigate “striping”?
So Fuji is going the Nikon route rather than the Sony route which tolerates striping?
Good question, but not one I’m going to be able to answer definitively until I get a camera in my hands.
I went through the pre-production GFX100 raw samples that are available on DPReview and elsewhere.
In scenes that are shot at ISO 100 or near it and that have strong highlights near saturation, like a white cloudy sky that the sun blows through, one can rather easily get visible banding by pulling the highlights back. The Lightroom/ACR “Dehaze” tool in particular can bring this banding out easily. In some cases it is possible to produce it with the highlights, exposure, and contrast/clarity tools. The same is true for Capture One. This banding also became visible in other sky shots, where the sky was not near saturation but was uniform in color.
In shadow areas, I have not been able to get any banding visible, no matter what I do. If the shot is at significantly higher than ISO 100, the noise will mask out any banding.
After this, I tested some Sony A7r3 and Nikon Z7 files, too. I managed to get the same behavior on the Z7, but it is not as clear, since the Z7 generates more overall noise in highlights that are pulled back. On the Sony A7r3 I was not able to reproduce this, but the A7r3 was the noisiest in pulled-back highlights. In fact, it seems that the reason the GFX100 produces the bands more prominently is its less noisy pulled-back highlights. Adding noise with the LR/ACR “grain” tool will mitigate the banding on the GFX100, but sometimes the required amount of grain is too much and affects the overall image.
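The grain trick works because random noise of sufficient amplitude swamps the periodic pattern. A minimal sketch of the idea — not LR/ACR’s actual grain algorithm, just additive Gaussian noise:

```python
# Sketch: mask periodic banding by adding random noise whose
# amplitude exceeds the band depth. Illustration only; LR/ACR's
# grain tool is more sophisticated than this.
import random

def add_grain(img, sigma, seed=0):
    rng = random.Random(seed)  # seeded for repeatability
    return [[px + rng.gauss(0.0, sigma) for px in row] for row in img]

img = [[100.0] * 8 for _ in range(8)]
noisy = add_grain(img, sigma=2.0)
```

The tradeoff the commenter describes is visible here too: `sigma` large enough to hide the bands may be large enough to degrade the rest of the image.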
The bands were 18px apart as you say in this blog post.
So at least on the pre-production GFX100, there’s a banding issue looming in the highlights, and apparently Fujifilm/ACR/C1 have not implemented any highlight mitigation for it. But without heavy highlight pulling and added contrast, it does not become visible.
Roy Zan says
The GFX100 has huge photo sites so maybe OSPDAF won’t affect it.
But how about a super dense sensor such as the one on Olympus EM1 Mark III or M1X?
Those pixels are really small and they have on-sensor PDAF. Wouldn’t on-sensor PDAF affect the IQ of those cameras?
What are you talking about? The GFX 100 pixel pitch is 3.76 µm. The Olympus EM1 Mark III pixel pitch is 3.32 µm. Olympus M1x pixel pitch is 3.36 µm.