
the last word

Photography meets digital computer technology. Photography wins -- most of the time.


Optimal CFA spectral response

March 2, 2022 JimK 16 Comments

There is a widespread belief about Bayer color filter array (CFA) response spectra. It goes something like this:

  1. The old CCD cameras used to have great color.
  2. The reason they had such good color is the dye layers in the CFA were thick.
  3. Thick dye layers led to highly selective (narrow band) spectral responses.
  4. Modern CMOS cameras have lousy color.
  5. The reason is that they are trying to reduce photon noise in the images.
  6. Making the CFA filters less selective reduced the photon noise, but made the colors bad.

I have seen no evidence that the color (which, for the purposes of this post, I'll define as color accuracy) is better on old CCD cameras than it is on new CMOS ones. I've owned and used several old medium format CCD cameras and backs, and I don't think their color was more accurate.

I decided to apply some analysis to the issue. Jack Hogan has demonstrated that you can get pretty darned close to a Macbeth CC24 Sensitivity Metamerism Index (SMI) of 100 with three Gaussian spectral responses with well-chosen peak wavelengths and standard deviations. I fired up a Matlab simulator that began long ago with some code I got from Jack, made some changes, and found such an optimal CFA response set for the CIE 1931 2-degree Standard Observer:

There are four quadrants in the above chart. On the top right is the response of the optimal Gaussians. The standard deviations in nanometers are given in the subtitle. To the left of that is the error, in CIELab DeltaE 2000, for each of the Macbeth CC24 patches, with an optimal compromise matrix. The overall SMI, 99.3, is shown. A perfect SMI is 100. I’ve never seen a real consumer camera get above the low 90s. Most real cameras are in the 80s.  Below that, on the left, in red, is the response of the simulated camera and compromise matrix to a range of spectral inputs spanning the visible wavelengths. The blue curve is the correct response.
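For readers who want to experiment, Gaussian response sets like these are easy to generate. Here is a minimal sketch (in Python rather than the Matlab used for the post; the peak wavelengths and standard deviations below are illustrative placeholders, not the optimized values shown in the chart):

```python
import math

def gaussian_response(peak_nm, sigma_nm, wavelengths):
    """Unit-peak Gaussian spectral response sampled at the given wavelengths."""
    return [math.exp(-0.5 * ((wl - peak_nm) / sigma_nm) ** 2) for wl in wavelengths]

# Sample the visible band at 5 nm intervals.
wavelengths = list(range(400, 701, 5))

# Illustrative peaks and sigmas; NOT the optimized values from the chart.
red   = gaussian_response(600, 40, wavelengths)
green = gaussian_response(545, 40, wavelengths)
blue  = gaussian_response(450, 40, wavelengths)

# Raw channel value for a flat (equal-energy) spectrum: sum over samples.
raw_green_flat = sum(green)
```

Each simulated raw channel value is just the inner product of a response curve with the input spectrum, which is all the simulator needs to evaluate a candidate CFA against the patch set.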

The bottom right graph needs some explication. I simulated a camera with a full well capacity of 10,000 electrons, which is equivalent to a camera with a FWC of 40,000 electrons operating two stops above base ISO. I simulated an exposure sufficient to bring the raw values of the lightest gray patch to 95% of full scale. I simulated a 40,000-pixel patch and, assuming a Poisson distribution for photon counting, measured the mean chroma noise in CIELab DeltaAB. The average for all 24 patches is a bit over 6 DeltaAB.
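The photon-noise part of that setup can be sketched as follows (a Python sketch, not the post's Matlab code; since the mean signal is large, the Poisson distribution is approximated here by a Gaussian with variance equal to the mean):

```python
import math
import random

random.seed(0)

FWC = 10_000               # full well capacity, electrons
mean_signal = 0.95 * FWC   # lightest gray patch exposed to 95% of full scale
n_pixels = 40_000          # size of the simulated patch

# For a mean of 9,500 electrons, Poisson photon counting is well
# approximated by a Gaussian with variance equal to the mean.
samples = [random.gauss(mean_signal, math.sqrt(mean_signal))
           for _ in range(n_pixels)]

mean = sum(samples) / n_pixels
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / n_pixels)

# Photon-limited SNR is roughly sqrt(mean_signal), about 97 here.
snr = mean / std
```

The simulator applies this kind of per-channel noise to the raw values, pushes the noisy triplets through the compromise matrix, and measures the resulting spread in CIELab a*b*.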

Now let’s tighten up the standard deviations, and make them all 20 nm. Then we’ll search for the optimal wavelengths for the peaks:


The color accuracy is much worse, though about the same as many consumer cameras, and the noise is slightly better. This is with the same exposure as in the top chart. Since the filters are more selective, less light is hitting the sensor and the signal-to-noise ratio of the photon noise is worse, but the compromise matrix means that degradation doesn't result in greater chroma noise.

If we tighten the standard deviations down to 10 nm and reoptimize the peak locations, here’s what happens:

Now both the SMI and the chroma noise are worse, and the locus of spectral colors is very short.

What happens if we make the standard deviations large?

The SMI isn’t too bad, but the chroma noise is worse, in spite of more light hitting the sensor. The spectral coverage looks pretty good.

Here’s the reason for the increased chroma noise:

Note the size of the off-diagonal terms.
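The mechanism is simple: if the per-channel raw noise is independent with equal variance, the noise standard deviation of each output of the compromise matrix scales with the Euclidean norm of the corresponding matrix row, and large off-diagonal terms inflate those norms. A sketch with made-up matrix values (illustrative only, not the matrices from the charts):

```python
import math

def noise_gain(matrix):
    """Per-output-channel noise amplification for independent,
    equal-variance input noise: the Euclidean norm of each row."""
    return [math.sqrt(sum(c * c for c in row)) for row in matrix]

# Hypothetical compromise matrices (invented for illustration).
nearly_diagonal = [[ 1.1, -0.1,  0.0],
                   [-0.1,  1.2, -0.1],
                   [ 0.0, -0.2,  1.2]]

large_off_diagonal = [[ 2.0, -1.2,  0.2],
                      [-0.9,  2.1, -0.2],
                      [ 0.3, -1.4,  2.1]]

gains_small = noise_gain(nearly_diagonal)
gains_large = noise_gain(large_off_diagonal)
```

The matrix with large off-diagonal terms amplifies channel noise roughly twice as much in every channel, which is why heavily overlapping CFA curves can end up with more chroma noise despite collecting more light.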

Here are my conclusions from this exercise:

  1. Too much overlap is bad for color accuracy.
  2. Too little overlap is bad for color accuracy.
  3. For optimal SMI performance, the red and green center wavelengths should be fairly close.
  4. For optimal SMI performance, the red and green overlap should be larger than we see in most cameras.
  5. You can achieve remarkably high SMIs — much higher than we see with consumer cameras — with simple Gaussian spectra.
  6. The CFA spectra for high SMI aren't that far from the CFA spectra for low chroma noise.
  7. My guess is that the reason we don’t have better SMIs in consumer cameras is the availability of chemical compounds, not noise considerations.


Caveats:

  • I optimized and tested with the same patch set. This is not ideal.
  • 24 patches is a small number.
  • I didn’t simulate demosaicing.


Color Science, The Last Word


Comments

  1. N/A says

    March 2, 2022 at 4:44 pm

    hmm… people usually talk about "color" after LUT profiles are applied, not after a linear matrix color transform… so while getting good results with a plain matrix is telling, it's still not the same thing. And nice Gaussian curves are still not exactly the same as real SSFs; who knows, maybe "wider" real curves are indeed worse than "thin/selective" real curves, versus what you show with non-real SSFs?

    Reply
    • JimK says

      March 2, 2022 at 5:41 pm

      You can use LUTs to correct errors in linear transforms. I would think that CFAs that need little correction would be better than those that need a lot.

      Reply
      • N/A says

        March 10, 2022 at 6:33 pm

        you can, but that is not what ACR/LR have done (for quite a long time now) in the DCP profiles supplied by Adobe… their matrix transforms (pre-LUT) are intentionally not optimal (if left alone)… so in real life the LUTs don't just correct errors; they actually do the job

        Reply
        • JimK says

          March 10, 2022 at 7:55 pm

          None of the Adobe profiles are even attempting to create accurate color.

          Reply
    • JimK says

      March 2, 2022 at 8:20 pm

      Let me expand on the above. If a camera meets the Luther-Ives condition, which means that the CMF/sensor spectra are a linear transform of CIE XYZ, that means that the camera will not see colors that appear different to a color-normal observer as the same color. If the camera sees two different colors as the same color, no amount of LUT tweaking will separate them.

      Reply
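The point above about metamerism can be illustrated numerically: construct two spectra that differ only in a region where a narrow-band camera is nearly blind, and the camera's raw triplets come out almost identical while a broader, observer-like response separates them easily. A hypothetical Python sketch (all curves and spectra here are invented for illustration):

```python
import math

def gaussian(peak, sigma, wls):
    return [math.exp(-0.5 * ((w - peak) / sigma) ** 2) for w in wls]

wls = list(range(400, 701, 5))

def responses(spectrum, curves):
    """Raw triplet: inner product of the spectrum with each response curve."""
    return tuple(sum(s * c for s, c in zip(spectrum, curve)) for curve in curves)

# Very narrow camera bands (sigma 10 nm): nearly blind between the bands.
camera = [gaussian(p, 10, wls) for p in (450, 550, 650)]
# Broader observer-like curves: sensitive across the whole band.
observer = [gaussian(p, 50, wls) for p in (445, 545, 600)]

# Two spectra that differ only near 500 nm, where the narrow camera sees little.
flat = [1.0] * len(wls)
bumped = [1.0 + (0.5 if 490 <= w <= 510 else 0.0) for w in wls]

cam_diff = max(abs(a - b) for a, b in
               zip(responses(flat, camera), responses(bumped, camera)))
obs_diff = max(abs(a - b) for a, b in
               zip(responses(flat, observer), responses(bumped, observer)))
# cam_diff is tiny; obs_diff is large. No LUT applied to the camera's
# (nearly identical) triplets can recover the distinction.
```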
  2. CarVac says

    March 3, 2022 at 5:01 am

    In implementing white balance in my photo editor Filmulator, I noticed one specific difference between old camera models and newer ones: new cameras have much more overlap between the green and blue color channels, so that the white balance multiplier for the blue channel can be less extreme in warm light conditions.

    It is the difference between a blue channel multiplier of 4 or so for a modern sensor and perhaps upwards of 10 for an older 2005-era sensor, which can make a huge difference in perceived noise in real-world low light conditions.

    Reply
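CarVac's observation can be sketched with made-up raw means for a neutral patch under warm light (the numbers below are illustrative, chosen only to reproduce the multipliers he mentions):

```python
# Hypothetical raw channel means for a neutral patch under tungsten-like
# light. More green/blue overlap means the blue channel collects more signal.
raw_means_modern = {"r": 0.60, "g": 0.50, "b": 0.125}  # more G/B overlap
raw_means_old    = {"r": 0.60, "g": 0.50, "b": 0.05}   # little G/B overlap

def wb_multipliers(raw):
    """White-balance multipliers, normalized so green gets a gain of 1."""
    return {ch: raw["g"] / v for ch, v in raw.items()}

mult_modern = wb_multipliers(raw_means_modern)  # blue multiplier of 4
mult_old = wb_multipliers(raw_means_old)        # blue multiplier of 10
```

A blue multiplier of 10 amplifies the blue channel's read and photon noise by the same factor, which is where the perceived low-light noise difference comes from.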
  3. Jack Hogan says

    March 4, 2022 at 7:14 am

    Excellent Jim, your results in the first Figure are pretty close to what I got. What minimization criteria did you use to determine the ideal gaussian means and standard deviations?

    Reply
    • JimK says

      March 4, 2022 at 7:58 am

      Just the simplex method. It's possible there are global maxima for SMI greater than 99.3, but the curves are pretty smooth.

      Note that I used a different Standard Observer than you did.

      Reply
  4. Tucker Downs says

    April 1, 2022 at 12:03 pm

    I’d be curious to see your results with a much larger dataset than the CC24, but great work and simple demonstration nonetheless. And would you look at that, your ideal gaussians are extremely similar to the LMS cone sensitivities. Not surprising at all. XYZ based on color matching experiments is a good practical approximation of our color sensitivities, but the true inputs to our visual system are strictly the positive-value absorption spectra of the cones. None of the red “bump” in short wavelengths that is present in the XYZ system (for good reasons).

    Anyway, to make a long story short: if you allow a freeform optimization that tweaks each of those sensitivity curves at 10 nm intervals, using your ideal Gaussians as the starting data, I'm sure your match will get even closer to the cone sensitivities.

    Glad I checked back in on your blog! As a color scientist I look forward to reading the next few posts in this sequence this afternoon.

    Reply
    • JimK says

      April 1, 2022 at 12:21 pm

      Thanks. Stay tuned for lots more patch sets.

      Reply
    • JimK says

      April 1, 2022 at 12:54 pm

      You may be interested in this thread.

      https://www.dpreview.com/forums/post/66037974

      Reply
  5. Tom Lewis says

    March 1, 2024 at 1:44 am

    Hi Jim,

    I’m interested in false color imaging derived from the near infrared. I have learned that some folks have used filters from Midwest Optical and MaxMax for this purpose. They take three photos with the filters, convert each to monochrome, and then assign to BGR in an application like Photoshop. Seems to me an objective should be to have sufficient overlap and shallow enough filter skirts to provide a wide variety of resulting visible color. But since this is false color, I can’t imagine how there could be serious criteria for color accuracy.

    I have noticed that most of the filter responses for real cameras designed for visible light have overlap between 40% and 80%, and are peaky (non-flat passbands). While the Midwest Optical bandpass filters claim to be Gaussian, their spacing and standard deviations don't appear to provide enough overlap. In the case of the MaxMax set, the overlap occurs at around 10%, and two of their filters don't look to me to be designed to be Gaussian.

    https://midopt.com/filters/bandpass/
    http://www.maxmax.com/filters/bandpass-ir

    Do you have any thoughts on optimum filter combinations for humans to see purely the near infrared with false color?

    Tom

    Reply
    • JimK says

      March 1, 2024 at 7:14 am

      Since color accuracy doesn't apply here, you can use any filter skirts you please. However, there should be some overlap to avoid posterization. Keep in mind that, by 850 nm, most CFA filters are transparent, so for long lambda IR, you don't need to convert the camera.

      Reply
  6. Tom Lewis says

    March 1, 2024 at 8:48 am

    Thank you, Jim.

    At around what percent transmission do you think the overlap should occur to minimize posterization?

    When I suggested non-vertical skirts would be a benefit, I was thinking these would provide encoding for intermediate colors. In contrast, if the filters had absolutely flat passbands and absolutely vertical skirts, then it seems to me the system would only be able to encode three discrete colors in total, and there would be no intermediate colors. I was thinking that to distinguish intermediate colors, encoding a value into more than one wavelength band would be necessary.

    When you mentioned no need to convert the camera for long lambda IR, I’m assuming you are referring to conversion to monochrome, and not referring to conversion to full spectrum. Is that right?

    Reply
    • JimK says

      March 1, 2024 at 9:12 am

      At around what percent transmission do you think the overlap should occur to minimize posterization?

      There should be material contribution from at least two bands at every wavelength.

      When I suggested non-vertical skirts would be a benefit, I was thinking these would provide encoding for intermediate colors. In contrast, if the filters had absolutely flat passbands and absolutely vertical skirts, then it seems to me the system would only be able to encode three discrete colors in total, and there would be no intermediate colors. I was thinking that to distinguish intermediate colors, encoding a value into more than one wavelength band would be necessary.

      We’re on the same page here.

      When you mentioned no need to convert the camera for long lambda IR, I’m assuming you are referring to conversion to monochrome, and not referring to conversion to full spectrum. Is that right?

      Right.

      Reply
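The "material contribution from at least two bands at every wavelength" criterion from the exchange above is easy to check numerically. A hypothetical sketch (the 10%-of-peak threshold and the Gaussian curves are my assumptions, not from the discussion):

```python
import math

def gaussian(peak, sigma, wls):
    return [math.exp(-0.5 * ((w - peak) / sigma) ** 2) for w in wls]

wls = list(range(400, 701, 5))
threshold = 0.1  # "material contribution": at least 10% of peak (assumed)

def covered_everywhere(curves, threshold):
    """True if at least two bands exceed the threshold at every
    sampled wavelength."""
    n = len(curves[0])
    return all(sum(1 for c in curves if c[i] >= threshold) >= 2
               for i in range(n))

# Wide filters (sigma 80 nm) always have two bands contributing;
# narrow ones (sigma 15 nm) leave gaps between the passbands.
wide = [gaussian(p, 80, wls) for p in (460, 550, 640)]
narrow = [gaussian(p, 15, wls) for p in (460, 550, 640)]
```

A filter set that fails this check has wavelength regions encoded by only one channel (or none), which is the condition that produces posterized false color.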

Trackbacks

  1. Cameras Emulating Cameras - Alchemy Color says:
    May 24, 2024 at 9:54 am

    […] narrower RGB filters result in better color? The great Jim Kasson tested this theoretically in Matlab and came to the conclusion that it’s not necessarily […]

    Reply




Copyright © 2025

Unless otherwise noted, all images copyright Jim Kasson.