
Strategies for 3D LUT camera profile generation

December 18, 2015 JimK

This is the ninth in a series of posts on color reproduction. The series starts here.

As we’ve seen, three-dimensional lookup tables (3D LUTs) are more flexible and powerful than compromise matrices, which lets us do things when calibrating cameras that can’t be done with matrices alone. However, that freedom comes at a price: it’s possible to create profiles that don’t behave as the user might expect, even though they are optimal with respect to the design criteria.

First, in the interests of full disclosure, let me say that I have no experience making LUT-based camera profiles. I have calculated compromise matrices. I have written software to make the LUTs for printer calibration and gamut mapping. I worked for six years as a color scientist. But I’ve never tried to write software to make LUT-based camera profiles. So why am I saying anything at all about them? I don’t want to complete this section and get on to evaluating how accurately cameras and raw developers capture colors without spending a bit of time on a technique that is sometimes used in, or in conjunction with, commercial raw developers. I welcome comments and corrections from those skilled in the art.

So, unencumbered by expertise, let me talk about how I’d attack this problem, and what some of the pitfalls are.

The first thing I’d do is create a compromise-matrix-based correction, using the camera-simulation approach if I had access to the appropriate sensitivity spectra, and something cruder if not. Then I’d build a LUT that used the output space of the compromise matrix as both its input and its output space, and make the corrections there. Once that was done, the two calculations could easily be combined into a single 3D LUT that performed both operations in a more computationally efficient manner.
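Here’s a minimal numpy/scipy sketch of what that pipeline might look like. The matrix values, grid size, and identity LUT contents are all placeholders, not anything derived from a real camera; the point is just the structure: matrix first, trilinear LUT second, then the fold-down into a single table by evaluating the composite at every grid node.

```python
# Toy two-stage pipeline: compromise matrix, then 3D LUT, then the two
# folded into one LUT. All numbers here are placeholders.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

N = 17                                   # LUT grid points per axis (arbitrary)
grid = np.linspace(0.0, 1.0, N)

M = np.array([[ 1.8, -0.5, -0.3],        # made-up compromise matrix
              [-0.4,  1.6, -0.2],
              [-0.1, -0.6,  1.7]])

# lut[i, j, k] is the output RGB for input (grid[i], grid[j], grid[k]).
# It starts as an identity table; a real profile would perturb it.
lut = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)

# One trilinear interpolator per output channel.
interps = [RegularGridInterpolator((grid, grid, grid), lut[..., c])
           for c in range(3)]

def apply_profile(rgb):
    """Stage 1: the matrix. Stage 2: the LUT, trilinearly interpolated."""
    v = np.clip(rgb @ M.T, 0.0, 1.0)     # matrix output is the LUT's input space
    return np.stack([f(v) for f in interps], axis=-1)

# Fold both stages into one 3D LUT by evaluating the composite at each node.
nodes = lut.reshape(-1, 3)               # the identity grid is all node coords
combined = apply_profile(nodes).reshape(N, N, N, 3)
```

A raw developer would then apply only `combined`, getting the effect of both stages for the cost of one trilinear interpolation per pixel.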

My main approach to making color-correction LUTs is to start with an identity LUT, one that just produces at its output whatever it sees at its input (its linear analog would be the identity matrix: all zeros except for ones on the diagonal). Then I move certain colors, and run a smoothing algorithm on the LUT to even out the table for the colors that aren’t “pinned”.
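To make that concrete, here’s a toy sketch of the pin-and-smooth idea under the same placeholder setup. The pin locations and target colors are invented, and the smoothing scheme is just one stand-in among many: Gaussian relaxation of the delta from the identity table, which spreads each pinned correction into its neighborhood while leaving distant entries at identity.

```python
# Pin a few LUT nodes, then smooth the correction field around them.
# Pins and targets are invented; the smoothing scheme is one of many.
import numpy as np
from scipy.ndimage import gaussian_filter

N = 17
grid = np.linspace(0.0, 1.0, N)
identity = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)

# Hypothetical corrections: grid index -> desired output color.
pins = {(4, 12, 6): np.array([0.30, 0.68, 0.35]),
        (10, 3, 3): np.array([0.62, 0.20, 0.18])}

# Work on the delta from identity so unpinned, far-away colors stay put.
delta = np.zeros_like(identity)
for idx, target in pins.items():
    delta[idx] = target - identity[idx]

# Iterative relaxation: blur the correction field, re-impose the pins.
for _ in range(20):
    for c in range(3):
        delta[..., c] = gaussian_filter(delta[..., c], sigma=1.0,
                                        mode="constant")
    for idx, target in pins.items():
        delta[idx] = target - identity[idx]

lut = identity + delta                   # smooth table honoring the pins
```

Smoothing the delta rather than the table itself means that entries far from any pin end up exactly where they started.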

When making 3D color-correction LUTs, my motto, like a physician’s, is “first, do no harm”. What kinds of harm are possible?

First, there’s posterization. Imagine that there are two camera-captured triplets that ought to represent the same color. So we pin both of those input values to the same output value. Now everything in between is squished (another technical term) together.
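A one-dimensional toy example shows the squish. The node values are invented, but the same thing happens along any line through a 3D table: pin two nearby nodes to one output, and the distinct input codes between them collapse.

```python
# 1D illustration of posterization from pinning: two nearby nodes get
# the same output, and inputs between them become indistinguishable.
import numpy as np

nodes   = np.linspace(0.0, 1.0, 9)   # LUT node positions
outputs = nodes.copy()               # identity to start
outputs[4] = outputs[5] = 0.5        # pin nodes at 0.500 and 0.625 together

x = np.linspace(0.45, 0.70, 6)       # six distinct inputs around the pins
print(np.interp(x, nodes, outputs))  # -> [0.45 0.5 0.5 0.5 0.55 0.65]
```

Three of the six distinct inputs come out identical, and the slope just above the pinned span steepens to make up the lost ground, which is the flip side of the same problem.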

Second, there’s the possibility of generating weird color shifts, the worst of which are usually hue shifts. If two colors that are close together in the input ought to be far apart in the output, making the correction without a lot of care can generate gross nonlinearities that affect all input colors near the pinned pair.

For that reason, I wouldn’t try to build a 3D LUT correction table for more than one illuminant, even though I’d have a go at making a compromise matrix for more than one.

As I see it, the most difficult part of creating a 3D LUT correction table with a strategy like the above is deciding which input target patches are the most important, which are the least, and how to distribute the residual errors so that they are least visually objectionable.
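One way to encode those judgments is to build them into the error metric the profiling software minimizes: a weighted average of per-patch color differences, with memory colors such as skin, sky, and foliage carrying the big weights. A trivial numpy sketch, using plain CIE76 ΔE*ab for simplicity (a real implementation would more likely use CIEDE2000, with weights tuned by eye):

```python
import numpy as np

def weighted_profile_error(lab_reference, lab_rendered, weights):
    """Weighted mean CIE76 delta-E across the target patches.

    lab_reference, lab_rendered: (n_patches, 3) arrays of Lab values.
    weights: (n_patches,) importance weights, e.g. large for skin tones.
    """
    delta_e = np.linalg.norm(lab_reference - lab_rendered, axis=-1)
    return np.average(delta_e, weights=weights)
```

Minimizing something like that over the LUT’s free entries, subject to a smoothness constraint, would be one way to formalize the tradeoff.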

Next – finally! – testing camera/developer color accuracy.
