
the last word

Photography meets digital computer technology. Photography wins -- most of the time.


Computing 3D lookup tables for printing

December 15, 2015 JimK

This is the seventh in a series of posts on color reproduction. The series starts here.

There’s a more flexible way of color correcting cameras than compromise matrices: the three-dimensional lookup table (3D LUT). With it, you can do essentially everything you can do with a compromise matrix, and many things you can’t. Its use comes with a few downsides, though, mainly greater use of memory and processing resources. In computer engineering terms, it’s a more expensive approach.

Let’s imagine a color lookup table for preprocessing RGB images that will be sent to a printer driver that will convert those images to CMYK, or CcMmYKk, or something even more complicated, depending on the printer’s inkset. We want a LUT that takes three scalar values (that’s the 3D part) specifying red, green, and blue in some colorimetric color space such as Adobe RGB, and produces three scalar values which, when sent to the printer driver, will result in the right color being printed.

Now, let’s say that our original image is specified to a precision of 16 bits per color plane. That means that there are 2^16 possible values for R, and the same number for G and B, for a total of (2^16)^3 possible colors. That’s a few hundred trillion colors; way more than it would be practical to store in a table.

How do we finesse the seemingly impractical required table size? We build a smaller table, and we interpolate input values that lie between those stored in the table. The number of entries in the table is fairly arbitrary, but 256x256x256 is a useful, if larger than normal, size. Such a table has a bit over 16 million entries, each an RGB triplet of 6 bytes, for a total of about 100 MB. A 16x16x16 table would have about four thousand entries, for a total storage requirement of 24 KB. Those two are about the extremes of what’s practical. Sometimes 17x17x17 and 257x257x257 tables are more computationally convenient.
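The storage arithmetic above can be sketched directly. This is a minimal illustration; the 6-byte entry size assumes 16-bit-per-channel output triplets, and the function name is just for the example:

```python
# Entries and bytes for cube-shaped n x n x n LUTs whose entries are
# RGB triplets of three 16-bit values (6 bytes each).
def lut_storage_bytes(n, bytes_per_triplet=6):
    """Total bytes for an n x n x n LUT of RGB triplets."""
    return n ** 3 * bytes_per_triplet

for n in (2, 16, 17, 256, 257):
    print(f"{n:>3}^3 = {n ** 3:>9} entries, {lut_storage_bytes(n):>11} bytes")
```

A 16x16x16 table comes out to 24,576 bytes (the 24 KB above), and a 256x256x256 table to a bit over 100 million bytes (the roughly 100 MB above).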

To use the table, we extract four to eight of the triplets that form the vertices of the cube that surrounds the input point, and perform linear interpolation among them. It’s also possible to extract more triplets and perform more complex interpolation. For those who want more details, I recommend this paper by a highly skilled, inventive, good looking color scientist who happens to be a really nice guy.
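Here is a minimal sketch of the eight-vertex (trilinear) case, assuming inputs normalized to [0, 1] and the table stored as a NumPy array of shape n x n x n x 3; the function and variable names are illustrative:

```python
import numpy as np

def trilinear_lookup(lut, rgb_in):
    """Interpolate an output triplet from an n x n x n x 3 LUT.

    The eight vertices of the grid cell surrounding the input point
    are blended with trilinear weights.
    """
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb_in, dtype=float), 0.0, 1.0) * (n - 1)
    i0 = np.minimum(pos.astype(int), n - 2)   # lower-corner indices
    f = pos - i0                              # fractional position in the cell
    out = np.zeros(3)
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((f[0] if dr else 1 - f[0])
                     * (f[1] if dg else 1 - f[1])
                     * (f[2] if db else 1 - f[2]))
                out += w * lut[i0[0] + dr, i0[1] + dg, i0[2] + db]
    return out

# Identity LUT: each entry stores its own normalized coordinates,
# so lookups should return the input unchanged.
n = 17
g = np.linspace(0.0, 1.0, n)
identity = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
print(trilinear_lookup(identity, (0.25, 0.5, 0.75)))
```

The four-vertex case (tetrahedral interpolation) splits each cell into tetrahedra instead, trading a little code complexity for fewer memory fetches.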

You’ll have to pay the SPIE to get the complete paper, and you may find that objectionable. In that case, here’s a PDF of a complete paper that has much of the same information.

How many LUT entries we need for a given level of accuracy is determined by how far from linear the transform between the input and output color spaces is. If it’s perfectly linear, we need just 8 RGB triplets in total: a 2x2x2 table. Sometimes it saves space to put three one-dimensional LUTs between the overall input and the input of the 3D LUT, or between the output of the 3D LUT and the overall output.
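To see why a 2x2x2 table suffices for a perfectly linear transform, here is a sketch using a hypothetical 3x3 matrix as the transform: a linear map is multilinear in (R, G, B), so trilinear blending of the eight cube corners reproduces it exactly.

```python
import numpy as np

# Hypothetical linear transform, purely for illustration.
M = np.array([[0.9, 0.1, 0.0],
              [0.05, 0.9, 0.05],
              [0.0, 0.1, 0.9]])

# The 8 vertices of the unit cube, transformed through M.
corners = np.array([[r, g, b] for r in (0, 1) for g in (0, 1) for b in (0, 1)],
                   dtype=float)
lut = corners @ M.T

def lookup_2x2x2(rgb):
    """Trilinear blend of the 8 corner entries; rgb in [0, 1]^3."""
    r, g, b = rgb
    w = np.array([(r if cr else 1 - r) * (g if cg else 1 - g) * (b if cb else 1 - b)
                  for cr in (0, 1) for cg in (0, 1) for cb in (0, 1)])
    return w @ lut

rgb = np.array([0.2, 0.6, 0.3])
print(lookup_2x2x2(rgb), M @ rgb)   # identical results
```

The further the real transform departs from linearity, the finer the grid has to be before interpolation error drops below the target accuracy.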

There are many ways to populate the 3D LUT. I’ll describe a method that is appropriate for calibrating a printer about which very little is known.

First decide how many samples to print. As with the table that we’re going to end up with, the more samples the better the accuracy, but the more data there is to deal with, and the more trouble involved (in this case, with making measurements). 16 values for each of the primaries will yield a bit over four thousand patches, which is about the upper end of what most people are willing to deal with.

Put the driver in a mode where color management is turned off, or at least neutralized so that it won’t change between the calibration run and later printing.

Print all the samples large enough that they can be measured with a spectrophotometer. Set R to the lowest value. Set G to the lowest value. Print all possible values for B. Increment G by one step and repeat. When G gets to full scale, set it to the lowest value and increment R.
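The printing order described above can be sketched as a triple loop; this assumes 16-bit code values and 16 evenly spaced levels per primary, as in the patch-count example earlier:

```python
# B varies fastest, then G, then R, with 16 levels per primary,
# giving 16^3 = 4096 patches in total.
levels = [round(i * 65535 / 15) for i in range(16)]   # 16-bit code values

patches = []
for r in levels:            # outermost: R
    for g in levels:        # then G
        for b in levels:    # B varies fastest
            patches.append((r, g, b))

print(len(patches))         # 4096
print(patches[0], patches[-1])
```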

Measure all the values, and convert the readings to whatever color space you want to be the input color space of the completed table.

Now we have a 3D LUT that goes from printer driver RGB to some standard color space. But that’s not what we want; we want a table that goes in the other direction.

So we have to “invert” the table. Decide how many entries we want in the final table. Set R of the standard color space to the lowest value. Set G to the lowest value. Set B to the lowest value. Search through the table we just generated, looking for the value in printer driver color space that will produce that color. Interpolate freely. Store the result in a new 3D LUT. Increment B by one step and repeat. When B gets to full scale, set it to the lowest value and increment G. When G gets to full scale, set it to the lowest value and increment R.
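Here is a much-simplified sketch of that inversion, using a nearest-sample search instead of the interpolation a real inverter would perform. All names are illustrative, and the toy “printer” just darkens everything by 20%:

```python
import numpy as np

def invert_nearest(driver_rgb, measured, targets):
    """driver_rgb: (N, 3) driver values sent to the printer;
    measured: (N, 3) the colors they produced; targets: (M, 3)
    colors we want. Returns (M, 3) driver values, nearest-sample only."""
    out = np.empty_like(targets)
    for i, t in enumerate(targets):
        d2 = np.sum((measured - t) ** 2, axis=1)   # squared color distances
        out[i] = driver_rgb[np.argmin(d2)]
    return out

# Toy forward model standing in for the measured table.
grid = np.linspace(0.0, 1.0, 17)
driver = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), -1).reshape(-1, 3)
measured = driver * 0.8                 # the "printer" darkens by 20%
want = np.array([[0.4, 0.4, 0.4]])
print(invert_nearest(driver, measured, want))   # driver value of 0.5
```

A production inverter would interpolate among nearby forward-table samples rather than snapping to the nearest one, and would use a perceptually uniform distance metric rather than raw RGB distance.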

Now we have a table that tells us, given a color in our standard RGB color space, what to send the printer driver in order to get that color printed. However, there’s a big problem. The printer can’t print a whole bunch of the colors in the standard color space. We have to go through the table and, for all the colors that the printer can’t print, put in the color we want it to print when presented with an out-of-gamut color. That’s called the colorimetric rendering intent, and there’s a lot of art to it.

With colorimetric rendering, if the system we’ve come up with is fed a gradient that starts with printable colors and continues into unprintable ones, the colors will start out right and, once they get to the edge of the printer’s gamut, will all map to colors on that gamut boundary. This will tend to reduce differences (not eliminate them, since the gradient can move around on the gamut surface) in out-of-gamut colors, which may not be what we want.

There is another rendering intent, called perceptual, that allows squishing – that’s a technical term – the in-gamut colors so that there’s less chance of the out-of-gamut ones turning into a sea of sameness. To do that, we have to move many of the entries in the table we constructed.
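As a toy illustration of that squishing, here is a hypothetical chroma-compression curve: identity up to a knee, then a smooth rolloff that approaches the gamut limit asymptotically, so in-gamut colors give up a little saturation to keep out-of-gamut colors distinguishable. Real perceptual intents are far subtler; the function and parameter names are invented for this sketch.

```python
import math

def compress_chroma(c, knee=0.8, limit=1.0):
    """Identity below knee*limit; smooth exponential shoulder above,
    approaching `limit` asymptotically. Continuous in value and slope
    at the knee."""
    k = knee * limit
    if c <= k:
        return c
    return limit - (limit - k) * math.exp(-(c - k) / (limit - k))

for c in (0.5, 0.9, 1.2, 2.0):
    print(f"C {c:.2f} -> {compress_chroma(c):.3f}")
```

Colors with chroma below 0.8 pass through unchanged; everything above gets mapped monotonically into the remaining headroom, so two different out-of-gamut chromas still print as two different chromas.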

Does this sound complicated? I hope so, because it is. It’s similar to using 3D LUTs to create camera profiles for non-Luther cameras. I’ll get to that next.


Comments

  1. Lynn Allan says

    December 15, 2015 at 10:28 pm

    Are you familiar with ArgyllCms and Graeme Gill?

    Your approach may be similar to what I’ve been doing on and off … 4608 patches generated by several of the free parts of X-Rite’s software suite. Use free ColorPort to drive an X-Rite i1iSis automated patch reader (definitely not free). Use several free ArgyllCms command line utilities to generate legally unrestricted printer profiles that can be freely distributed. Use another X-Rite free utility and a custom Adobe ExtendScript to get patch-by-patch, overall, and summary De2k’s from a test print with 1088 patches.

    Or not?




Unless otherwise noted, all images copyright Jim Kasson.