This is the fifth in a series of posts on color reproduction. The series starts here.

This is a nerdy and mathematical, though equation-free, take on how to create compromise matrices. If you don’t know what a compromise matrix is, start with the link in the paragraph above. If you just want a semi-technical view from 30,000 feet, look here.

In a perfect world, here’s how I would build a compromise matrix for a non-Luther digital camera.

I’d take a reflectance spectrophotometer, an instrument that I have never seen, and measure reflectance versus wavelength for all the subjects of interest to me. I could also take a plain old spectrophotometer, set it to a broad, peak-free illuminant such as D50, measure all the subjects, and divide the measurements, wavelength by wavelength, by the spectrum of the illuminant.

Then I’d measure the spectrum of all the illuminants that are important to me.

I’d do a wavelength-by-wavelength multiplication of the reflectance spectra of the n subjects by the spectra of the m illuminants to get the m*n spectra that I care about, which will be imaged by the camera.
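As a sketch of this step, assuming the reflectances and illuminant spectra are sampled on a common wavelength grid (all array names and values below are made up for illustration):

```python
import numpy as np

# Hypothetical data: n subject reflectances and m illuminant spectra,
# each sampled at the same k wavelengths (e.g. 380-720 nm in 10 nm steps).
rng = np.random.default_rng(0)
k = 35                               # number of wavelength samples
reflectances = rng.random((4, k))    # n = 4 subjects, values in [0, 1]
illuminants = rng.random((3, k))     # m = 3 illuminants

# Wavelength-by-wavelength product of every (subject, illuminant) pair
# yields an (n, m, k) stack: the m*n spectra the camera will image.
stimuli = reflectances[:, None, :] * illuminants[None, :, :]

print(stimuli.shape)  # (4, 3, 35)
```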

I’d fire up a special printer that can print samples with arbitrary spectral reflectance (a printer which I’m pretty sure doesn’t exist), and print little squares with each of the m*n spectra on a target substrate.

I’d evenly illuminate the target with a light source that has equal energy at each wavelength from 380 through 720 nm. Or, I could illuminate the target with some non-peaky source like D50 and compensate for its lack of spectral whiteness when I printed the target.

I’d photograph the target with the camera for which I wished to compute the compromise matrix.

I’d take the spectra of the samples and convert each one to Lab. I’d arrange the Lab values of each sample as a column vector, and construct a 3 by n*m matrix by concatenating all the samples in the horizontal direction.
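Skipping the integration of the sample spectra against the CIE color matching functions, the XYZ-to-Lab conversion and the column-vector arrangement might look like this (the white point and sample values are made-up illustrations; the Lab formulas are the standard CIE 1976 ones):

```python
import numpy as np

def xyz_to_lab(xyz, white):
    """CIE 1976 L*a*b* from an XYZ triple, given a reference white."""
    t = xyz / white
    f = np.where(t > (6/29)**3, np.cbrt(t), t / (3 * (6/29)**2) + 4/29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return np.array([L, a, b])

# Hypothetical D50-ish white point and a couple of sample XYZ column vectors.
white_d50 = np.array([0.9642, 1.0, 0.8249])
samples_xyz = [np.array([0.20, 0.21, 0.18]), white_d50]

# Concatenate one Lab column per sample into a 3 x N matrix.
lab_matrix = np.stack([xyz_to_lab(s, white_d50) for s in samples_xyz], axis=1)
print(lab_matrix.shape)  # (3, 2)
```

A sample equal to the white point maps to L* = 100, a* = b* = 0, which is a handy sanity check.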

I’d take each color plane of the mosaiced raw image, average the values in the vicinity of each sample, and further average the two green planes, giving me R, G, and B samples for each of the m*n squares. I’d arrange the RGB values of each sample as a column vector, and construct a 3 by n*m matrix by concatenating all the samples in the horizontal direction.
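A minimal sketch of the plane averaging for a single patch, assuming an RGGB Bayer pattern and a made-up crop of the mosaic around the patch:

```python
import numpy as np

# Hypothetical raw mosaic crop covering one patch (RGGB Bayer pattern assumed).
raw = np.arange(64).reshape(8, 8) / 63.0

# Split the mosaic into its four color planes by subsampling.
r  = raw[0::2, 0::2]
g1 = raw[0::2, 1::2]
g2 = raw[1::2, 0::2]
b  = raw[1::2, 1::2]

# Average each plane over the patch area, and average the two green planes,
# giving one (R, G, B) triple for this patch.
rgb = np.array([r.mean(), (g1.mean() + g2.mean()) / 2, b.mean()])
print(rgb.shape)  # (3,)
```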

I’d probably use the camera manufacturer’s suggested white balance setting for each illuminant, normalize to G = 1 if necessary, and perform the corrections, giving me a 3 by n*m matrix of white balanced raw values. I’m not a fan of doing white balance correction in raw, but it seems to be a kind of standard way of doing things.
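The white balance step is just a per-channel gain. As a sketch, with made-up multipliers normalized to G = 1:

```python
import numpy as np

# Hypothetical white-balance multipliers for one illuminant (values made up).
wb = np.array([2.1, 1.0, 1.6])   # R, G, B gains
wb = wb / wb[1]                  # normalize so the green gain is 1

# rgb_samples: 3 x (n*m) matrix of raw camera values, one column per patch.
rng = np.random.default_rng(1)
rgb_samples = rng.random((3, 6))

# Apply the gains row-wise to get white-balanced raw values.
rgb_wb = wb[:, None] * rgb_samples
```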

I’d start out with the test compromise matrix as the camera manufacturer’s recommended matrix in the EXIF metadata or in dcraw. Let’s assume the matrix is supplied as CIE 1931 XYZ to camera “color”, as it usually is. I put the “color” part of “camera color” in quotes because we have already said this is not a Luther camera, so it doesn’t see color the way that humans do. I’d invert that matrix to get from camera “color” to XYZ, and that would be my initial guess at a compromise matrix.
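The inversion itself is a one-liner; the matrix values below are made up, not from any real camera:

```python
import numpy as np

# Hypothetical XYZ-to-camera matrix of the kind found in EXIF metadata or
# dcraw (illustrative values only).
xyz_to_cam = np.array([[ 0.9, -0.2, -0.1],
                       [-0.3,  1.1,  0.2],
                       [ 0.0, -0.4,  1.3]])

# Invert it to go from camera "color" to XYZ: the initial compromise matrix.
cam_to_xyz = np.linalg.inv(xyz_to_cam)

# Sanity check: the product should be the identity.
print(np.allclose(xyz_to_cam @ cam_to_xyz, np.eye(3)))  # True
```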

I’d multiply each column in the white balance corrected sample matrix by the compromise matrix to get to XYZ. Then I’d convert to CIELab, using the XYZ values of one of the neutral squares under D50 illuminant as a white reference.

I’m not sure what I’d do about the possible (probable?) occurrence of a situation where the D50 neutral squares a* and b* values turned out to be nonzero. A separate white balance step, XYZ to XYZ, before conversion to Lab? Do it for each illuminant?

In any event, now we have two 3 by n*m matrices in Lab. If absolute color error is the long pole in the tent, we subtract them to form an error matrix, apply some function that weights it and yields all-positive terms, then sum it to get a scalar error. But in general, luminance errors are more readily tolerated in photography than hue-angle errors, and chroma errors may actually be desirable if they are the result of slightly more chromatic captures. So I might convert the Lab matrices to LCh, subtract them, weight them to taste, then sum to get my scalar error.
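One way such a weighted LCh error might be sketched (the weights are made-up examples, not recommendations):

```python
import numpy as np

def lch_error(lab_ref, lab_test, weights=(0.5, 1.0, 2.0)):
    """Weighted scalar error between two 3 x N Lab matrices, compared in LCh.

    Hue-angle errors are weighted most heavily and luminance least,
    reflecting the tolerances described above; the weights are arbitrary.
    """
    def to_lch(lab):
        L, a, b = lab
        return np.array([L, np.hypot(a, b), np.arctan2(b, a)])

    dL, dC, dh = to_lch(lab_ref) - to_lch(lab_test)
    # Wrap hue differences into (-pi, pi] so 359 deg vs 1 deg reads as small.
    dh = (dh + np.pi) % (2 * np.pi) - np.pi
    wL, wC, wH = weights
    return float(np.sum(wL * dL**2 + wC * dC**2 + wH * dh**2))
```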

Then I’d put all of the last three paragraphs inside an optimum-seeking method that’s tolerant of local minima, and let it run overnight, yielding an optimum compromise matrix.
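The shape of that outer loop can be sketched with a crude multi-start random search; a real run would use a proper global optimizer, and the toy objective below merely stands in for the scalar patch error:

```python
import numpy as np

def optimize_matrix(objective, m0, restarts=5, iters=2000, seed=0):
    """Crude multi-start random search over 3x3 matrices.

    A stand-in for a serious optimum-seeking method tolerant of local
    minima; the structure, not the method, is the point here.
    """
    rng = np.random.default_rng(seed)
    best_m, best_e = m0, objective(m0)
    for _ in range(restarts):
        m = m0 + 0.1 * rng.standard_normal((3, 3))   # random restart
        e = objective(m)
        for _ in range(iters):
            trial = m + 0.01 * rng.standard_normal((3, 3))
            et = objective(trial)
            if et < e:                                # greedy acceptance
                m, e = trial, et
        if e < best_e:
            best_m, best_e = m, e
    return best_m

# Toy objective: recover a known target matrix (stands in for the weighted
# LCh error over all patches described above).
target = np.eye(3)
err = lambda m: float(np.sum((m - target) ** 2))
m_opt = optimize_matrix(err, np.eye(3) + 0.3)
```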

A lot of the above is practical. What is not is creating a target that has the spectral responses of all subjects of interest under all illuminants of interest. Without that target, getting a compromise matrix that works well with many subjects is problematic.

What’s to be done?

- Compute compromise matrices for each illuminant, thereby simplifying the problem.
- Use a commercial target that has spectral reflectances that are deemed to be useful by many photographers. The hoary old Macbeth chart is one such target, but has the disadvantage that the number of patches is quite small.
- Print out targets with an inkjet printer. This is looking for your keys under the streetlight, and it’s risky: who says that the spectral reflectances of real-world subjects are at all related to those of printer inksets?
- Find something that’s kinda, sorta close, and fix the colors to taste in post.

My guess is that most people do the last one.

By the way, in my research into this matter, I’ve found some useful references.

Here’s a primer on raw processing and color conversion. It’s written for Matlab users, but should be useful to others.

Here’s a step-by-step walkthrough of creating compromise matrices (not 3×3, but 4×3). There are several things that I disagree with here, but it’s still worth a look. A set of useful Matlab routines are in the appendix.

Here’s how the Imatest color correction module works.

Note that much the same processing that gets a compromise matrix for raw to some working space could also be used to go from one working space to another. The noise might be a bit worse in some places, but you could develop compromise matrices to be used pretty much anywhere in Lightroom or Photoshop.

Lynn Allan says

Rather esoteric.

IIRC, about a year ago, Ben Goren on the ColorSync forum described calibration involving 1000+ patches, including a “black trap” and a nearly white Teflon-like material. I don’t know what came of that.

Al Sawyer says

This is a fascinating series. I am not afraid of the math, but the color vocabulary is outside my experience.

The correct URL is now https://rcsumner.net/raw_guide/RAWguide.pdf

JimK says

Thanks. Fixed now.