
the last word

Photography meets digital computer technology. Photography wins -- most of the time.


CC24 vs CCSG CIELab values

March 4, 2020 JimK 14 Comments

The last two posts have been about my experiences using the X-Rite CC24 and CCSG targets and X-Rite's ColorChecker Camera Calibration program with both diffuse and direct solar illumination. I found anomalies between the two targets, which I thought might be related to lighting and glare. But today I discovered that, even with the same 45/0 lighting and a spectrophotometer, the CC24 patches on the CCSG target don't match the patches on the actual CC24 target.

I used an ancient X-Rite i1 Pro (model 1, version D) spectrophotometer, and BabelColor PatchTool. Here’s a visual indication, with the CC24 patches in the upper left, and the CCSG patches in the lower right.

I couldn’t figure out how to export the above image from PatchTool (I’m new to it), so it’s a screen capture in my Eizo workstation monitor’s space, which is roughly Adobe RGB. Here’s a histogram of the CIELab Delta E differences between the two patch sets.
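The comparison underlying that histogram is straightforward: for each of the 24 shared patch colors, take the Euclidean distance between the two Lab measurements (CIE76 Delta E*ab). A minimal sketch in numpy, with made-up Lab values for three patches standing in for the actual measurements:

```python
import numpy as np

# Hypothetical Lab readings for three of the shared patch colors
# (illustrative numbers only, not the measured values from the post).
cc24 = np.array([[37.5, 12.2, 13.9],
                 [64.7, 18.3, 17.4],
                 [49.3, -3.8, -22.5]])
ccsg = np.array([[38.9, 13.5, 14.1],
                 [66.0, 17.1, 18.9],
                 [50.1, -2.9, -21.0]])

# CIE76 Delta E*ab is the Euclidean distance in CIELab.
delta_e = np.linalg.norm(cc24 - ccsg, axis=1)
print(delta_e.round(2))
print("mean Delta E*ab:", delta_e.mean().round(2))
```

PatchTool reports the same statistic per patch; CIEDE2000 weights the L, a, b axes differently and generally yields smaller numbers for chromatic differences like these.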

 

And here are the details. The green values are from the CC24, and the pink ones came from the CCSG:

 

Well! That explains a lot. I’ll be doing more testing and will circle back around to the accuracy tests I tried to do in the first two posts.


Comments

  1. Iliah Borg says

    March 4, 2020 at 12:24 pm

    “Using reference data from X-Rite, there is an average DeltaE*ab difference of about 7 (3.7 in CIEDE2000) between the standard ColorChecker and the corresponding patches of the ColorChecker SG. This has been confirmed with measurements on two charts (the measured values of these two charts are very close to one another however, which is good since it shows manufacturing consistency). We see the same statistics when comparing the two charts BEFORE or AFTER the November 2014 formulations changes. Because of this, it is not recommended to use the average spectral data, the RGB and L*a*b* values, and the images of the standard ColorChecker for comparison with the “equivalent” patches of the ColorChecker Digital SG chart.”
    from http://www.babelcolor.com/colorchecker-3.htm#CCP3_SGproblem

    On the matter of why I think that SG charts should be custom-measured: using the same spectrophotometer, I measured the differences between two SG charts. Ignoring all border patches, I got:
    11 patches with dE00 > 2,
    8 patches with da > 2,
    11 patches with db > 2.
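Per-channel exceedance counts like these fall out of the same kind of patch-by-patch comparison. A small sketch, with hypothetical readings for a few patches (the real comparison covers the full chart minus the border patches):

```python
import numpy as np

# Hypothetical Lab readings from two SG charts measured with the same
# instrument -- illustrative values only.
chart_a = np.array([[52.0, 10.0, 20.0],
                    [70.0, -5.0, 30.0],
                    [35.0, 25.0, -10.0]])
chart_b = np.array([[52.5, 12.5, 17.2],
                    [69.8, -4.6, 30.4],
                    [35.1, 24.7, -9.8]])

# Count patches whose a* or b* channels disagree by more than 2.
da = np.abs(chart_a[:, 1] - chart_b[:, 1])
db = np.abs(chart_a[:, 2] - chart_b[:, 2])
print("patches with da > 2:", int((da > 2).sum()))
print("patches with db > 2:", int((db > 2).sum()))
```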

    Reply
    • JimK says

      March 4, 2020 at 5:33 pm

      Thanks for all your help on this, Iliah. Here’s what I’ve done today:

      I’ve updated optprop with my measured values of one of my CC24’s and my CCSG in addition to the supplied pre-2014 values that came with the package.

      I’ve ordered an i1 Pro 3 (I hope that PatchTool recognizes it).

      I plan to scan my three CC24’s (including the one that arrived today) when the new i1 comes. I’ll rescan my CCSG. I’ll compare my CCSG results with the ones I got from you. Only after that will I go back to the profiler testing.

      Reply
      • Iliah Borg says

        March 4, 2020 at 7:01 pm

        Sorry to bring unwelcome news, but the i1 Pro 3 may not be supported directly.

        Reply
        • Erik Kaffehr says

          March 4, 2020 at 9:02 pm

          The X-Rite software can probably export data in some format readable by PatchTool.

          Sorry to hear you need to test reference cards…

          Reply
    • Tony Arnerich says

      March 5, 2020 at 9:05 am

      With the Classic and SG cards side by side in the same image the differences of the two sets of 24 are indeed quite readily visible.

      However, profiles that I made from each card in Lumariver, and then examined for differences in the resulting colors of the Classic’s patches as presented in Lightroom, contained the colors a little better than one might fear from Iliah’s reported average delta E of 7. Minimum/Median/Maximum differences I obtained were L: -0.90/0.00/0.30, a: -3.80/0.00/2.90, and b: -4.70/-0.25/3.20. The Pythagorean distances (i.e., not the current delta-E calculations) were 1.55 median and 4.75 maximum.
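Those summary statistics are easy to reproduce: take the per-patch Lab difference vectors, summarize each channel, and compute the Euclidean ("Pythagorean") length per patch. A sketch with hypothetical difference vectors for three patches:

```python
import numpy as np

# Hypothetical per-patch Lab differences (profile A minus profile B)
# for a few of the 24 Classic patches -- illustrative numbers only.
diffs = np.array([[-0.9,  2.9, -4.7],
                  [ 0.0, -3.8,  3.2],
                  [ 0.3,  0.0, -0.25]])

# Min / median / max per channel, as in the comment above.
for name, col in zip("Lab", diffs.T):
    print(f"{name}: {col.min():.2f} / {np.median(col):.2f} / {col.max():.2f}")

# "Pythagorean distance": plain Euclidean length in Lab, i.e. Delta E*ab
# rather than a weighted metric like CIEDE2000.
dist = np.linalg.norm(diffs, axis=1)
print("median", round(float(np.median(dist)), 2),
      "max", round(float(dist.max()), 2))
```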

      Maybe it doesn’t necessarily matter that the 24-patch subset of the SG isn’t a clone of the Classic (as long as one has access to accurate reference values of course).

      Reply
      • JimK says

        March 5, 2020 at 11:13 am

        With the Classic and SG cards side by side in the same image the differences of the two sets of 24 are indeed quite readily visible.

        I’ve noticed that, but I didn’t know whether to blame inadequacies in the lighting or not. When you can see it with a spectrophotometer with 45/0 geometry, as Iliah and I both have, that’s a smoking gun.

        Maybe it doesn’t necessarily matter that the 24-patch subset of the SG isn’t a clone of the Classic (as long as one has access to accurate reference values of course)

        I think that’s true, but if the target lighting is other than D50, to measure conformance, we need the CCSG reflectance spectra, which are not published by X-Rite. Not everyone is willing and able to do their own spectral measurements.
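The reason reflectance spectra matter here: with spectra in hand, you can compute Lab values under any illuminant, not just the D50 values a vendor publishes. A toy sketch of that pipeline (reflectance × illuminant SPD × color matching functions → XYZ → Lab); the CMFs and SPD below are made-up placeholders, not the real CIE tables:

```python
import numpy as np

wl = np.arange(400, 701, 10.0)                     # 31 wavelength samples
# Fake Gaussian stand-ins for the CIE x-bar, y-bar, z-bar functions.
cmfs = np.stack([np.exp(-((wl - mu) / 40.0) ** 2)
                 for mu in (600, 550, 450)], axis=1)
illum = np.ones_like(wl)                           # fake equal-energy SPD

def xyz_from_reflectance(R, S, cmfs):
    # Integrate (sum) reflectance * illuminant against each CMF,
    # normalized so the perfect reflector has Y = 100.
    k = 100.0 / np.sum(S * cmfs[:, 1])
    return k * (S * R) @ cmfs

def lab_from_xyz(xyz, white):
    # Standard CIE XYZ -> L*a*b* transform with white-point normalization.
    def f(t):
        d = 6.0 / 29.0
        return np.where(t > d ** 3, np.cbrt(t), t / (3 * d * d) + 4.0 / 29.0)
    fx, fy, fz = f(np.asarray(xyz) / white)
    return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])

white = xyz_from_reflectance(np.ones_like(wl), illum, cmfs)
gray = xyz_from_reflectance(np.full_like(wl, 0.18), illum, cmfs)
lab = lab_from_xyz(gray, white)
print(lab.round(2))   # flat 18% reflectance: L* ~ 49.5, a* = b* = 0
```

Swapping in a different illuminant SPD changes the resulting Lab values, which is exactly why measuring conformance under non-D50 lighting requires the spectra rather than published D50 Lab numbers.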

        Reply
      • Iliah Borg says

        March 5, 2020 at 11:32 am

        Dear Tony:

        > Iliah’s reported average delta E of 7

        I was quoting folks from BabelColor. And, by the way, they literally made it their business to deal with profiling targets and measurements.

        > However profiles that I made from each card in Lumariver

        Using respective standard references, right?

        > Minimum/Median/Maximum differences I obtained were L: -0.90/0.00/0.30, a: -3.80/0.00/2.90, and b: -4.70/-0.25/3.20.

        With absolute ranges of 6.7 in da and 7.9 in db, IMHO that doesn’t look very good, pardon the pun 😉

        Reply
        • Tony Arnerich says

          March 7, 2020 at 10:54 am

          >Dear Tony:
          >> Iliah’s reported average delta E of 7
          >I was quoting folks from BabelColor. And, by the way, they literally made it their business to deal with profiling targets and measurements.

          They’re the best reference that I am aware of, and I’ve been using their “average of 30 cards” values as my gold standards.

          >> However profiles that I made from each card in Lumariver
          >Using respective standard references, right?

          Yes, I simply accepted their stored values.

          >> Minimum/Median/Maximum differences I obtained were L: -0.90/0.00/0.30, a: -3.80/0.00/2.90, and b: -4.70/-0.25/3.20.
          >Having 6.7 da and 7.9 db absolute values IMHO that doesn’t look very good, pardon the pun

          As it happens, Lightroom’s Adobe Color profile didn’t do much better, if at all.

          Here are my delta-E2000 figures, min / median / max:

          AdobeColor vs Babel30: 0.79 / 2.71 / 6.98
          Luma24 45° vs Babel30: 0.73 / 3.33 / 6.50
          LumaSG 45° vs Babel30: 0.97 / 3.55 / 6.55
          LumaSG 45° vs Luma24 45°: 0.46 / 1.10 / 2.30
          AdobeColor vs Luma24 45°: 0.80 / 2.79 / 5.25
          Luma24 45° vs Luma24 20°: 0.37 / 0.69 / 1.83
          LumaSG 45° vs LumaSG 20°: 0.42 / 4.32 / 11.85

          SG 20° vs Babel30 has da = 38 and db = 57.

          All measurements were made on the Classic card’s 24 patches within a shot made with the recommended 45° sun angle setup.

          “LumaXX 45°” used profiles I made from a shot taken with the recommended technique. “LumaXX 20°” used profiles made from a shot taken with the purposely bad technique of too small a sun angle. The surface glare was readily apparent on the SG card.

          It looks like my Classic card disagrees too much with BabelColor’s average.

          Reply
          • JimK says

            March 7, 2020 at 11:33 am

            They’re the best reference that I am aware of, and I’ve been using their “average of 30 cards” values as my gold standards.

            That was for pre-November-2014 CC24s.

            Reply
  2. Erik Kaffehr says

    March 5, 2020 at 1:30 pm

    I would think that the reference data files in LumaRiver Profile Designer are pretty good.

    C:\Program Files\Lumariver Profile Designer\data

    I recall that I have used those as reference data in my comparisons.

    I had a big fight with Argyll CMS and also with i1Studio: the first doesn’t find my ColorMunki Photo, and the second severely miscalibrates my monitor, like Delta E 200 (I am just guessing).

    I might mention that I had a sobering experience once upon a time. It was just a macro shot of a flower, but I had severe posterisation on some yellow colors. It was like this:

    Adobe Standard – OK
    Adobe DNG Profile Editor, with tweaks – posterisation
    Capture One – posterisation
    CPP – posterisation
    DCamProf – OK

    https://www.getdpi.com/forum/sony/55875-case-bad-posterisation.html?highlight=

    This thread may have some interesting content, as many knowledgeable folks have chimed in:
    https://www.getdpi.com/forum/medium-format-systems-and-digital-backs/59120-capture-one-lr6.html?highlight=

    Best regards
    Erik

    Reply
  3. Tony Arnerich says

    March 8, 2020 at 9:55 am

    “That was for pre-November-2014 CC24s.”

    Fair point, but the wrinkles I’m hoping to address are 4X the differences between the pre- and post-2014 reference values (0.15/0.81/1.50, min/med/max).

    I hypothesized: “It looks like my Classic card disagrees too much with BabelColor’s average.”

    That’s not going to resolve things. The ColorMunki Photo says that my CC24 is off from Babel30 by only 0.19/0.65/1.44 and from the post-2014 version by 0.17/0.94/1.90.

    Reply
    • JimK says

      March 8, 2020 at 11:50 am

      I was talking about the gold standard part.

      Reply




Unless otherwise noted, all images copyright Jim Kasson.