This is the eighteenth in a series of posts on color reproduction. The series starts here. You don’t have to read the whole thing if you don’t want to. I’ll try to make this post reasonably self-contained. If you get confused, reading the series from the beginning may be useful.
Warning. This post assumes a working knowledge of basic color science.
Here’s the problem: if you want to find out how accurately your camera, camera profile, and raw developer program capture color, one way to do it is to take a picture of the Macbeth color checker lit by a known illuminant, bring the image into your raw developer, write it out as an RGB color file, and examine the colors in that file.
If your chosen illuminant has the same white point as that of your output file, things are pretty simple. You choose that illuminant as the white point in your raw developer, it finds that it doesn’t have to do a white point transformation, and you should be good to go. That means that, if you output your file using Adobe RGB, or, perish the thought, sRGB, you need a D65 illuminant. You may have a hard time finding one of those, although, as a reader pointed out in a comment to the previous post, they do exist.
If your output color space is ProPhoto RGB, you need a D50 illuminant. Again, although uncommon at power levels appropriate for studio lighting, they are available.
But you’re probably not going to be actually using the camera to take photographs with either of those two illuminants. You’ll probably be using studio flash with a spectrum similar to D55, daylight with a spectrum similar to Illuminant C, tungsten lighting with a spectrum similar to a Planckian black body radiator, or LED lighting with a spectrum that could be anywhere from D60 to Illuminant A, but probably not as smooth as any of those.
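The tungsten case is the easiest to pin down, since a Planckian radiator’s spectrum follows directly from Planck’s law. Here’s a minimal sketch; the 3200 K temperature and the 560 nm normalization point are my choices for illustration, not anything prescribed above:

```python
import numpy as np

# Physical constants (SI units)
h = 6.62607015e-34  # Planck constant, J*s
c = 2.99792458e8    # speed of light, m/s
k = 1.380649e-23    # Boltzmann constant, J/K

def planck_spd(wavelengths_nm, T):
    """Relative spectral power of a Planckian (black body) radiator at
    temperature T kelvin, normalized to 1.0 at 560 nm."""
    lam = wavelengths_nm * 1e-9  # nm -> m
    spd = (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))
    return spd / np.interp(560.0, wavelengths_nm, spd)

wl = np.arange(380.0, 781.0, 5.0)          # visible range, 5 nm steps
tungsten = planck_spd(wl, 3200.0)          # typical tungsten studio lamp
```

At 3200 K the peak lies out past 900 nm, so across the visible range the curve rises smoothly from blue to red, which is why tungsten-lit captures need such aggressive blue-channel gain.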
If you’re using an illuminant that’s not the same as that implied by the white point of your output color space, you’re going to have to arrange it so that your raw developer does the white point correction. Oh, I suppose you might be able to figure out how to turn off the part of the white point correction that’s aimed at dealing with different illuminants and do the WB correction yourself in post, but if you’re comfortable with that kind of workflow, then this post is too elementary for you.
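To make concrete what that white point correction involves, here’s a sketch of the kind of chromatic adaptation transform a raw developer might apply — Bradford, as used by the ICC. The patch XYZ triple is made up for illustration; the white point XYZs are the usual Y = 1 values:

```python
import numpy as np

# Bradford cone-response matrix (as used in ICC-style chromatic adaptation)
M_BFD = np.array([[ 0.8951,  0.2664, -0.1614],
                  [-0.7502,  1.7135,  0.0367],
                  [ 0.0389, -0.0685,  1.0296]])

def bradford_cat(src_white_XYZ, dst_white_XYZ):
    """3x3 matrix mapping XYZ relative to the source white to XYZ
    relative to the destination white (von Kries-style diagonal
    scaling in the Bradford cone space)."""
    lms_src = M_BFD @ src_white_XYZ
    lms_dst = M_BFD @ dst_white_XYZ
    return np.linalg.inv(M_BFD) @ np.diag(lms_dst / lms_src) @ M_BFD

# Adapt a patch measured under D55 studio flash to the D65 white
# point implied by Adobe RGB or sRGB.
D55 = np.array([0.95682, 1.0, 0.92149])
D65 = np.array([0.95047, 1.0, 1.08883])
M = bradford_cat(D55, D65)

patch_D55 = np.array([0.40, 0.35, 0.20])  # made-up XYZ of a patch under D55
patch_D65 = M @ patch_D55
```

By construction the transform maps the source white exactly onto the destination white; it’s everything off the gray axis that accumulates the errors discussed below.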
As we’ve seen in the two previous posts (here and here), standard white point converters are error-prone, even with a (simulated) perfect Luther-Ives camera. So, what should your goal be for the 24 Macbeth patches?
Choice A: A perfect colorimetric rendering of the patch under an illuminant implied by the white point of your chosen color space?
Choice B: Color values derived from perfect colorimetric capture under the actual illuminant, as converted to the output color space white point by a specified WB conversion algorithm?
I’ll go on, but first let me mention, and then ignore, some details, because I know some of you will notice their absence otherwise. We need to specify what color scientists call an observer. I’m going to assume the CIE 1931 2-degree observer, although I know there are those who prefer the 1964 10-degree one, or possibly even others; any observer could be used, as long as it’s used consistently, without changing the thrust of this post. I’m also going to ignore the fact that there are an infinite number of spectra which resolve, given any observer, to any given white point, and that illuminant metameric error is not compensated for by white point transformations. I figure that part of the point of controlled Macbeth CC testing is to determine the combined effects of capture metameric error, illuminant metameric error, camera profiles, and raw developer algorithms.
As I see it, the arguments for Choice A are:
- We usually don’t know the spectrum of the illuminant, which we need for choice B.
- We usually don’t know the white balance algorithm of the raw developer, so we don’t know which one to assume.
- Photographers don’t care where the color errors come from, only that there’s an error. And the choice of white point conversion method is a legitimate point of discrimination among raw converters.
The arguments for Choice B are:
- Choice A lumps into the accuracy measurement two things that have nothing directly to do with the camera, the raw developer, the camera profile, or the output color space: illuminant metameric error and white balance conversion algorithm inaccuracies.
- Choice A fuses the choice of the test illuminant and the output color space white point. If we change output space to one with a different white point, we have to go back and do everything over in the choice A case.
- Choice A makes comparisons across illuminants without regard to camera choice impossible.
- Choice A makes comparisons across cameras without regard to illuminant choice impossible.
- The two bullets immediately above mean in practice that color accuracy testing is personal to the photographer, with little hope of reproducibility.
I’m not really comfortable with either choice. It seems that most people choose A. I don’t have any way around the first two objections to Choice A. Camera profiles, if built for a particular test illuminant and a particular output space, can partially correct for illuminant metameric error and white balance conversion algorithm inaccuracies.
It’s possible that there’s a third choice, Choice C: use whatever illuminant is available, and use as your target the published Macbeth CC Lab values under the D50 illuminant, no matter what the white point of your target color space. Now you’ll get different results depending on which target space you pick, and, for ProPhoto RGB, the same results as Choice A. The virtue of this approach is simplicity. The flaw is that the results are even more limited than with Choice A. There is also no way to illuminate the actual target with, say, D65 and evaluate the reference under that same simulated illuminant; the reference values are locked to D50.
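Whichever choice you make, the final step is the same: compare what the pipeline delivered to the reference patch values with a color-difference metric. A sketch using the simple CIE 1976 ΔE — the reference triple is the commonly published pre-2014 D50 Lab value for the dark-skin patch; the “measured” triple is made up to stand in for a camera/profile/raw-developer result:

```python
import numpy as np

def delta_e_76(lab1, lab2):
    """CIE 1976 color difference: Euclidean distance in CIELAB."""
    return float(np.linalg.norm(np.asarray(lab1, dtype=float) -
                                np.asarray(lab2, dtype=float)))

reference = (37.99, 13.56, 14.06)  # dark-skin patch, published D50 Lab
measured  = (38.40, 12.90, 14.80)  # made-up pipeline output for that patch
print(f"dE76 = {delta_e_76(reference, measured):.2f}")  # prints dE76 = 1.07
```

In practice you’d likely prefer ΔE2000, which weights lightness, chroma, and hue differences more like a human observer does, but the workflow — one reference triple and one measured triple per patch, 24 numbers out — is the same.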
You could also imagine a fourth choice, which wouldn’t white balance at all. Then the gray axis of the target would be entirely different from that of the output color space. I reject this because it does not represent the way that photographers use raw developers, although in some sense it is the intellectual high road if you’re mostly concerned with the errors of the camera and the profile.
As an aside, the original RGB values for the Macbeth chart are specified under Illuminant C, which, with the 1931 observer, resolves to x = 0.31006, y = 0.31616. This is close to, but not the same as the 1931 observer chromaticities for D65, which are x = 0.31271 and y = 0.32902. I know of no standard photographic color space that uses the Illuminant C white point.
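Those chromaticities fall straight out of the tristimulus values. As a quick check, using the standard Y = 100 white point XYZs for the 1931 observer:

```python
def xy_from_XYZ(X, Y, Z):
    """CIE 1931 chromaticity coordinates from tristimulus values."""
    s = X + Y + Z
    return X / s, Y / s

# Standard tristimulus values (Y normalized to 100), 1931 2-degree observer
print(xy_from_XYZ(98.074, 100.0, 118.232))  # Illuminant C: ~(0.31006, 0.31616)
print(xy_from_XYZ(95.047, 100.0, 108.883))  # D65: ~(0.31271, 0.32902)
```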
The current X-Rite chart seems to be specified under the D50 illuminant, but there is some question as to whether the sRGB values are just Bradford or von Kries adaptations from the D50 data, or were remeasured under D65.
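The Bradford-versus-von Kries distinction isn’t academic: the two transforms agree on the gray axis but diverge for chromatic patches. A sketch comparing them, using the Hunt-Pointer-Estevez cone matrix for von Kries; the patch XYZ is made up:

```python
import numpy as np

M_BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                       [-0.7502,  1.7135,  0.0367],
                       [ 0.0389, -0.0685,  1.0296]])
M_VONKRIES = np.array([[ 0.40024,  0.70760, -0.08081],  # Hunt-Pointer-Estevez
                       [-0.22630,  1.16532,  0.04570],
                       [ 0.0,      0.0,      0.91822]])

def cat(M, src_white, dst_white):
    """Chromatic adaptation matrix: diagonal gains in the cone space of M."""
    lms_s, lms_d = M @ src_white, M @ dst_white
    return np.linalg.inv(M) @ np.diag(lms_d / lms_s) @ M

D50 = np.array([0.96422, 1.0, 0.82521])
D65 = np.array([0.95047, 1.0, 1.08883])
patch = np.array([0.20, 0.12, 0.05])  # made-up XYZ of a saturated patch under D50

print(cat(M_BRADFORD, D50, D65) @ patch)
print(cat(M_VONKRIES, D50, D65) @ patch)  # differs from the line above
```

Both matrices map the D50 white exactly onto D65, so you can’t tell them apart from the neutral patches alone — which is exactly why the provenance of the chart’s sRGB values matters for the chromatic ones.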
I guess it’s gotta be choice A, but I don’t have to like it. Even worse, it could be that X-Rite is promoting Choice C with their target data.