
Observer metameric error in simulated cameras

April 28, 2022 JimK

In a previous post, I described how I created a Matlab program to analyze sets of reflectance spectra, generate basis functions, and then use those basis functions to produce metamer sets, all with an arbitrary fixed color. For review, here is the methodology I used (a Matlab sketch follows the list):

  1. Using principal component analysis, find a set of basis functions for the sample set.
  2. Assuming the set is lit with a particular illuminant (I used D50 for the data I’m presenting here), find as many spectra as you specify by combining the basis functions.
  3. All of the spectra resolve to the same color for a CIE 1931 2-degree observer; they are therefore metamers.
  4. Set the boundary conditions to exclude spectra with reflectance above 100% or below 0%.
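Here is a skeletal Matlab version of those steps. The variable names, dimensions, and number of basis functions are illustrative, not the ones in my actual program; I assume R is an n x 401 matrix of sample reflectances on a 380–780 nm grid, cmf holds the 401 x 3 color matching functions, and illum is the 401 x 1 D50 spectral power distribution.

mu = mean(R, 1)';                       % mean reflectance, 401 x 1
[~, ~, V] = svd(R - mu', 'econ');       % step 1: PCA basis functions
B = V(:, 1:8);                          % keep the first 8 components

M = (cmf .* illum) / (illum' * cmf(:, 2));   % step 2: illuminant-weighted
                                             % CMFs, scaled so white has Y = 1
targetXYZ = lab2xyz([50 0 0], 'WhitePoint', whitepoint('d50'))';

w0 = (M' * B) \ (targetXYZ - M' * mu);  % basis weights that hit the target
N = null(M' * B);                       % step 3: directions that change the
                                        % spectrum but not the color
metamers = zeros(401, 0);
for i = 1:10000
    r = mu + B * (w0 + 0.1 * N * randn(size(N, 2), 1));
    if all(r >= 0 & r <= 1)             % step 4: keep physical spectra only
        metamers(:, end + 1) = r;       %#ok<AGROW>
    end
end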

I set up my camera simulator to allow training on any patch set in my growing collection, and testing against any patch set that I have, using a compromise matrix optimized during the training phase for any camera for which I have a set of spectral response curves. I made the testing set metamers of a color that I chose for the simulation run, so although there were hundreds of spectra in the testing set, they all resolved to a single color for the 2012 two-degree standard observer (not the 1931 observer).
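In outline, the training and testing phases look like this, reusing M and illum from the sketch above and assuming (my naming again) ssf is a 401 x 3 matrix of camera spectral response curves and Rtrain/Rtest are 401 x n patch reflectances. The real simulator optimizes the compromise matrix during training; a plain least-squares fit in XYZ stands in for that optimization here.

camTrain = ssf' * (Rtrain .* illum);    % simulated raw camera responses
refXYZ = M' * Rtrain;                   % ground-truth tristimulus values
A = refXYZ / camTrain;                  % 3 x 3 compromise matrix (LLS fit)

estXYZ = A * (ssf' * (Rtest .* illum)); % apply the matrix to the test set
estLab = xyz2lab(estXYZ', 'WhitePoint', whitepoint('d50'));
% ...then score each row of estLab against the target Lab in CIEDE2000.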

Here are the results of a run with the metamer color L* = 50, a* = 0, and b* = 0, with the cameras all trained on those metamers:

The names of the cameras are across the bottom of the graph. The vertical axis is the mean color error in CIE DeltaE 2000. Take special note of the camera on the far right. It’s not a real camera, but one that I created using optimal Gaussian sensor sensitivity functions (SSFs). In the past, it has produced more accurate results in the simulations than any real camera for which I have the SSFs.
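For context, a Gaussian-SSF camera is trivial to construct in this framework: each channel is a Gaussian in wavelength. The centers and widths below are placeholders, not the optimized values.

lambda = (380:780)';
g = @(center, width) exp(-0.5 * ((lambda - center) / width).^2);
ssf = [g(600, 30), g(540, 30), g(450, 25)];   % R, G, B channels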

Here are the standard deviations (sigmas) of the colors produced by the cameras:

And the worst-case results:

Note how much worse the worst-case results are than the standard deviations. This distribution has a long tail.

Now I’ll plot the sigmas against the means for all the cameras:
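The summary statistics and the scatter plot are straightforward, assuming dE is an nSpectra x nCameras matrix of CIEDE2000 values for the test set (again, my naming):

meanE = mean(dE, 1);
sigmaE = std(dE, 0, 1);
worstE = max(dE, [], 1);
scatter(meanE, sigmaE);
xlabel('mean \DeltaE_{2000}');
ylabel('\sigma of \DeltaE_{2000}');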

The conclusion is that the more accurate a camera is, the less likely it is to suffer from significant observer metameric error.

That was a pretty restrictive training set. What if we train with the natural colors from which the basis functions were generated, and test with the same metamer set?
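In the sketch above, that amounts to nothing more than swapping the training input, with Rnatural standing for the natural-color reflectances the basis functions came from:

A = (M' * Rnatural) / (ssf' * (Rnatural .* illum));   % refit the matrix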

The mean errors are much worse. The optimized camera is clearly best.

The sigmas are also much worse. The optimized camera is clearly best.

The worst case errors are also worse. The optimized camera is clearly best.

The correlation between the mean errors and the sigmas is reduced, but still strong.

If we train with the Macbeth ColorChecker 24:

The mean errors are somewhat worse than when we trained with the natural color set.

The sigma errors are slightly worse than when we trained with the natural color set.

The worst-case errors are slightly worse than when we trained with the natural color set.

There is still a strong correlation between the means and sigmas.

What if we pick another target color, this time L* = 50, a* = 50, and b* = 50? This is a quite chromatic red.

[The corresponding graphs: mean, sigma, and worst-case errors by camera, and sigmas vs. means.]

Here are the same graphs for L* = 50, a* = 50, and b* = -50:

[The corresponding graphs for this target.]

I’ll be doing some more work with this, but my initial, tentative conclusions are:

More accurate cameras produce less observer metameric error than less accurate ones. (That’s not a surprise)

Training on something close to the testing set helps both mean errors and observer metameric error. (Also not a surprise)

Training on the CC24 isn’t all that much worse than training on the set from which the basis functions were generated. (That is somewhat of a surprise)

 
