This is the 23rd post in a series of tests of the Fujifilm GFX 100, Mark II. You can find all the posts in this series by going to the Categories pane in the right-hand panel and clicking on “GFX 100 II”. It’s also of more general use, so I’ve tagged it with “The Last Word” as well.
A decade or more ago, it was common to see cameras subtracting the black point before writing the raw data to the flash card. I never liked it. The first reason has nothing to do with normal photography: it made the cameras harder to test. The second reason is philosophical: I believe that no operation should be done in-camera that can be done at least as well in postproduction. I have many reasons for thinking this way. More computer resources are available in post. As better algorithms are developed, you can go back to your old raw files and invoke them, possibly getting better results. You can use algorithms that are interactive, tweaking parameters for best results with individual files and specific intents. For all those reasons, I was happy when the camera manufacturers stopped subtracting the black point in the camera.
When I write image processing programs, I almost always use floating point, and carry along negative values until the last possible moment, when I switch to unsigned integer precision to write the output file. That gives me the maximum flexibility, but I’m not writing code that places a high priority on either a small computing footprint or fast processing speed. I can see why some demosaicing algorithms might work better if they had visibility into the whole image, including both sides of the black point, but I don’t know enough about the demosaicing algorithms used in Lr and C1 to say whether they work better if the black point is not subtracted in camera.
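To make that float-first workflow concrete, here is a minimal sketch in NumPy. The function names and numbers are illustrative only, not from any real raw converter: the point is that black-point subtraction pushes noise samples below zero, those negative values are carried through the floating-point pipeline, and clipping and quantization happen only at output time.

```python
import numpy as np

def process(raw, black_point):
    # Work in floating point and keep negative values: subtracting the
    # black point pushes read-noise samples below zero, and we
    # deliberately do NOT clip them here.
    img = raw.astype(np.float64) - black_point
    # ...demosaicing, white balance, and tone mapping would go here...
    return img

def write_output(img, bits=16):
    # Only at the very last moment do we clip to the output range and
    # convert to unsigned integers for the output file.
    max_val = 2**bits - 1
    return np.clip(np.round(img), 0, max_val).astype(np.uint16)
```

With a toy raw sample of `[60, 64, 70]` and a black point of 64, the intermediate image holds `[-4.0, 0.0, 6.0]`, and only the final write step clips the negative value to zero.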
When the values below the black point are lopped off the way the GFX 100 II does at ISO 80, it raises the mean of the values near black above where it should be. I don’t know if that has any bad effects on real-world photography.
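The bias is easy to demonstrate with a toy simulation. Assuming a truly black patch whose read noise is roughly Gaussian and symmetric about the black point (the noise model and numbers below are illustrative, not measurements of the GFX 100 II), clipping everything below the black point pulls the apparent mean up by about sigma divided by the square root of 2 pi:

```python
import numpy as np

rng = np.random.default_rng(0)
black_point = 64.0
read_noise = 3.0  # DN, an illustrative value

# Simulated raw samples of a truly black patch: noise centered on
# the black point.
samples = rng.normal(black_point, read_noise, 100_000)

# With negative excursions preserved, the mean lands at true black.
unclipped_mean = samples.mean() - black_point  # ~0 DN

# Lopping off everything below the black point raises the mean:
# E[max(X - mu, 0)] = sigma / sqrt(2*pi), about 1.2 DN here.
clipped_mean = np.clip(samples, black_point, None).mean() - black_point
```

So even though the patch is genuinely black, the clipped data reports a mean a bit over one DN above black, which is the kind of near-black error the in-camera clipping introduces.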
And then there’s the situation where the camera doing the black point subtraction gets it wrong. The Leica M240 screwed up the black point subtraction, causing the shadows to go green. There was no postproduction fix for that until someone wrote a program to apply a digital band-aid.
I’ve never seen a camera do what the GFX 100 II does, which is subtract (part of) the black point for one ISO setting, and not perform the subtraction for the other ISO settings. I struggle to imagine what was going through the Fujifilm engineers’ heads when they decided to do that.
Now that we know more about what the GFX 100 II does at ISO 80, should we use that ISO setting? I think so, but if you start to see shadow color shifts with heroic lifting, it’s probably a good idea to switch to ISO 100 and use the calibration tools in Lightroom and Adobe Camera Raw.
Dan Kennedy says
What was going through the Fujifilm engineers’ heads was similar to what was going through Volkswagen’s engineers’ heads when they scammed the diesel emissions tests.
They realised most online DR testing measures recoverability of the shadows, so they pulled the ISO 100 file down in exposure to put more detail into the shadows, and clipped the black point to fool the tests for photographic dynamic range.
This is all so they can claim “30% higher dynamic range” and lie about having a new sensor to sell cameras.
I honestly think the market regulators should take Fujifilm to task and get them to prove it’s a new sensor. In my opinion they will be found out and have to give partial refunds to all the consumers they’ve lied to.