
Constructing a compromise matrix

December 9, 2015 JimK

This is the fourth in a series of posts on color reproduction. The series starts here.

This is going to get pretty technical, so I’d like to first give the “See Spot run” version in this post, and get into the details in the next one. If you’re not into the techie stuff, you can just read this one.

We saw in earlier posts in this series that consumer cameras don’t have the right set of Color Filter Array (CFA) spectral responses for the cameras to see color the same way that people do. One way of getting color out of those cameras is to construct a compromise matrix, which, when multiplied by the black-corrected raw image values, will yield approximations to the colors the camera saw in a linear form of any desired RGB color space. That color space can be simply converted to a gamma-corrected color space like sRGB or Adobe RGB (1998) by applying the proper nonlinearity to each color plane.
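
To make that concrete, here is a minimal Python sketch, not code from this site: it multiplies black-corrected, demosaiced raw values by a 3x3 compromise matrix (the identity matrix below is only a placeholder) and then applies the sRGB nonlinearity to each color plane.

    import numpy as np

    def raw_to_srgb(raw_rgb, compromise_matrix):
        # Multiply black-corrected, demosaiced raw values (scaled to 0..1)
        # by the 3x3 compromise matrix to get an estimate of linear sRGB...
        linear = raw_rgb @ compromise_matrix.T
        linear = np.clip(linear, 0.0, 1.0)
        # ...then apply the sRGB transfer function to each color plane.
        return np.where(linear <= 0.0031308,
                        12.92 * linear,
                        1.055 * np.power(linear, 1.0 / 2.4) - 0.055)

    # Illustration only: an identity matrix leaves the camera's native colors alone.
    print(raw_to_srgb(np.array([0.18, 0.20, 0.15]), np.eye(3)))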

How do we construct such a compromise matrix? We construct a target consisting of squares of constant color. We light the target evenly with a lamp whose spectrum is known and similar to that which will be used in actual photography, and measure each square’s color with a spectrophotometer. We convert those measurements to a color space which has some pretensions towards perceptual uniformity, such as CIEL*a*b* or CIEL*u*v*. Then we take a picture of the target. We demosaic the raw file, but don’t try to correct the colors. Then we construct (by scientific guessing) a starting point for the compromise matrix. We multiply the values in the image by the test compromise matrix to get the results in our preferred color space, then convert to the same perceptually uniform color space we used when we measured the target with the spectrophotometer.
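
Here is a hedged sketch of those conversion steps, assuming the preferred color space is linear sRGB with a D65 white and the perceptually uniform space is CIEL*a*b*; the matrix and white-point constants are the standard sRGB ones, and the function names are mine, not anything used on this site.

    import numpy as np

    # Standard constants: linear sRGB (D65) to CIE XYZ, and the D65 white point.
    SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                            [0.2126, 0.7152, 0.0722],
                            [0.0193, 0.1192, 0.9505]])
    D65_WHITE = np.array([0.95047, 1.00000, 1.08883])

    def xyz_to_lab(xyz, white=D65_WHITE):
        # Convert CIE XYZ to CIEL*a*b* relative to the given white point.
        t = xyz / white
        delta = 6.0 / 29.0
        f = np.where(t > delta**3, np.cbrt(t), t / (3 * delta**2) + 4.0 / 29.0)
        L = 116.0 * f[..., 1] - 16.0
        a = 500.0 * (f[..., 0] - f[..., 1])
        b = 200.0 * (f[..., 1] - f[..., 2])
        return np.stack([L, a, b], axis=-1)

    def camera_patches_to_lab(raw_patches, compromise_matrix):
        # Black-corrected, demosaiced raw patch averages -> linear sRGB via the
        # candidate compromise matrix -> XYZ -> L*a*b*, so they can be compared
        # with the spectrophotometer readings in the same space.
        linear_srgb = raw_patches @ compromise_matrix.T
        xyz = linear_srgb @ SRGB_TO_XYZ.T
        return xyz_to_lab(xyz)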

Now we have the real colors (measured with the spectrophotometer) and the camera’s colors in the same color space. We compare them, measure the differences, come up with some weighting scheme, and produce a single number (which mathematicians, engineers, and color scientists call a scalar) that describes how different the two sets of colors are. We fiddle with the values in the compromise matrix (in a most scientific and serious way), pass the raw image values through the new matrix, recompute our scalar error, fiddle again, and so on until we’re satisfied that the error is as small as we can make it.
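
One way to automate the fiddling is to hand the scalar error to a general-purpose optimizer. Here is a minimal sketch, assuming SciPy’s Nelder-Mead minimizer, plain Euclidean distance in L*a*b* (Delta E 1976) as the per-patch error, optional patch weights, and the camera_patches_to_lab helper from the sketch above; the choice of error metric and weighting is exactly the knob discussed in the rest of this post.

    import numpy as np
    from scipy.optimize import minimize

    def mean_delta_e(matrix_flat, raw_patches, measured_lab, weights=None):
        # Scalar error: weighted mean Euclidean distance in L*a*b* between the
        # measured patch colors and the camera's colors through the candidate matrix.
        m = matrix_flat.reshape(3, 3)
        camera_lab = camera_patches_to_lab(raw_patches, m)
        de = np.linalg.norm(camera_lab - measured_lab, axis=-1)
        return np.average(de, weights=weights)

    def fit_compromise_matrix(raw_patches, measured_lab, initial_matrix, weights=None):
        # Start from a scientific guess and let the optimizer do the fiddling
        # until the scalar error stops shrinking.
        result = minimize(mean_delta_e,
                          initial_matrix.ravel(),
                          args=(raw_patches, measured_lab, weights),
                          method='Nelder-Mead',
                          options={'xatol': 1e-6, 'fatol': 1e-6, 'maxiter': 20000})
        return result.x.reshape(3, 3), result.fun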

Is it really that sloppy and ad hoc? Indeed it is. We engineers and scientists have come up with a name for the class that methods like these belong to: heuristic. Doesn’t that sound better than sloppy and ad hoc?

If we change the lighting much, say from photoflood to electronic flash, then take a new set of measurements and a new picture and compute a new compromise matrix, it will be different.

If we change the target colors, the compromise matrix will be different.

If we change anything in the math that determines the error scalar, such as which colors we consider important, whether we want to minimize the average or the worst errors, and when (and if) we decide that a color is so far off that we shouldn’t try to save it, the compromise matrix will be different.

It’s amazing this stuff works at all, but it does.
