This is the ninth in a series of posts on color reproduction. The series starts here.
As we’ve seen, three-dimensional lookup tables are more flexible and powerful than compromise matrices. That flexibility lets us do things when calibrating cameras that can’t be done with compromise matrices. However, the freedom comes at a price: it’s possible to create profiles that don’t behave as the user might expect, even though they are optimal from the perspective of the design criteria.
First, in the interests of full disclosure, let me say that I have no experience making LUT-based camera profiles. I have calculated compromise matrices. I have written software to make the LUTs for printer calibration and gamut mapping. I worked for six years as a color scientist. But I’ve never tried to write software to make LUT-based camera profiles. So why am I saying anything at all about them? I don’t want to complete this section and get on to evaluating how accurately cameras and raw developers capture colors without spending a bit of time on a technique that is sometimes used in, or in conjunction with, commercial raw developers. I welcome comments and corrections from those skilled in the art.
So, unencumbered with expertise, let me talk about how I’d attack this problem, and what some of the pitfalls are.
The first thing I’d do is create a compromise-matrix-based correction, using the camera simulation approach if I had access to the appropriate sensitivity spectra, and something cruder if not. Then I’d look at making a LUT that used the output space of the compromise matrix as both its input and its output space, and make the corrections there. Once that was done, the two calculations could easily be combined into a single 3D LUT that performed both operations in a more computationally efficient manner.
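Here’s a rough sketch, in Python with NumPy, of that last combining step. The matrix values, the 17-point grid, and the lut_correct() placeholder are all made up for illustration, not taken from any real profiling package; the idea is just that evaluating the matrix at every node of a regular grid in camera space, then applying the correction, gives one table that does both jobs.

```python
import numpy as np

# Hypothetical 3x3 compromise matrix and grid size; both are invented for
# illustration, not taken from any real profile.
N = 17
M = np.array([[ 1.80, -0.60, -0.20],
              [-0.25,  1.45, -0.20],
              [ 0.05, -0.55,  1.50]])

def lut_correct(rgb):
    """Stand-in for the correction LUT built in the matrix's output space.
    Here it is the identity; a real table would move selected colors."""
    return rgb

# Combined LUT: at every node of a regular grid in camera space, apply the
# matrix, then the correction.  Rendering an image then needs only one
# trilinear (or tetrahedral) interpolation per pixel.
grid = np.linspace(0.0, 1.0, N)
combined = np.empty((N, N, N, 3))
for i, r in enumerate(grid):
    for j, g in enumerate(grid):
        for k, b in enumerate(grid):
            linear = M @ np.array([r, g, b])
            combined[i, j, k] = lut_correct(np.clip(linear, 0.0, 1.0))
```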
My main approach to making color correction LUTs is to start with one that just produces at its output whatever it sees at its input (if it were linear, it would act like the identity matrix: zeros everywhere except for ones on the diagonal). Then I move certain colors, and run a smoothing algorithm on the LUT to even out the table for the colors that aren’t “pinned”.
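Here’s a minimal sketch of that pin-and-smooth idea, not any shipping algorithm. It starts from an identity table, forces one made-up node to a new value, and then repeatedly relaxes the unpinned interior nodes toward the average of their neighbors. Because the identity mapping is already smooth, the smoothing leaves it alone except where the pinned node perturbs it, and that perturbation falls off gradually with distance.

```python
import numpy as np

N = 17
grid = np.linspace(0.0, 1.0, N)
# Identity LUT: the output at each node equals the node's own coordinates.
lut = np.stack(np.meshgrid(grid, grid, grid, indexing='ij'), axis=-1)

pinned = np.zeros((N, N, N), dtype=bool)

def pin(i, j, k, target):
    """Force one node to a chosen output color and mark it as fixed."""
    lut[i, j, k] = target
    pinned[i, j, k] = True

# Hypothetical correction: nudge one color to invented target values.
pin(8, 10, 6, np.array([0.52, 0.58, 0.33]))

# Jacobi-style smoothing: replace each unpinned interior node with the mean of
# its six axis neighbors, holding the pinned node and the table edges fixed.
for _ in range(100):
    new = lut.copy()
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            for k in range(1, N - 1):
                if pinned[i, j, k]:
                    continue
                new[i, j, k] = (lut[i+1, j, k] + lut[i-1, j, k] +
                                lut[i, j+1, k] + lut[i, j-1, k] +
                                lut[i, j, k+1] + lut[i, j, k-1]) / 6.0
    lut = new
```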
When making 3D color correction LUTs, my motto, like a physician’s, is “first, do no harm”. What kinds of harm are possible?
First, there’s posterization. Imagine that there are two camera-captured triplets that ought to represent the same color, so we pin both of those input values to the same output value. Now everything in between gets squished (another technical term) together.
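A toy one-dimensional example, with numbers I’ve invented, shows the effect: pin two nearby nodes of a table to the same output, and every input that falls between them interpolates to an identical result, so a smooth gradient in the capture becomes a flat band in the output.

```python
import numpy as np

nodes = np.linspace(0.0, 1.0, 11)   # table input positions
table = nodes.copy()                # start as the identity
table[5] = table[6] = 0.55          # pin inputs 0.5 and 0.6 to the same output

inputs = np.linspace(0.48, 0.62, 8)
print(np.round(np.interp(inputs, nodes, table), 3))
# [0.52 0.55 0.55 0.55 0.55 0.55 0.55 0.58]: six distinct captured values
# have collapsed into a single output level.
```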
Second, there’s the possibility of generating weird color shifts, the worst of which are usually hue shifts. If two colors that are close together in the input ought to be far apart in the output, making that correction without a lot of care can introduce gross nonlinearities that affect all the input colors near the pair that we pinned.
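Again a one-dimensional stand-in with made-up numbers, this time showing a formerly neutral color dragged off hue by a steep pinned segment in just one channel:

```python
import numpy as np

# Suppose two captures with red values 0.50 and 0.60 ought to come out at 0.40
# and 0.80.  Pinning both forces a slope of 4 in the red table over that
# interval while green and blue stay put, so any color whose red value falls
# between the pinned pair has its red channel pushed around relative to the
# other two; that is a hue shift, not just a lightness error.
nodes = np.linspace(0.0, 1.0, 11)
red_table = nodes.copy()
red_table[5], red_table[6] = 0.40, 0.80   # the pinned pair

def correct(rgb):
    r, g, b = rgb
    return np.array([np.interp(r, nodes, red_table), g, b])

print(np.round(correct(np.array([0.55, 0.55, 0.55])), 3))
# [0.6 0.55 0.55]: a neutral gray near the pinned pair now comes out reddish.
```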
For that reason, I wouldn’t try to build a 3D LUT correction table for more than one illuminant, even though I’d have a go at making a compromise matrix for more than one.
As I see it, the most difficult part of creating a 3D LUT correction table using a strategy like the above is deciding which input target patches are the most important, which are the least, and the best way to make the errors visually appealing.
Next – finally! – testing camera/developer color accuracy.