
the last word

Photography meets digital computer technology. Photography wins -- most of the time.


Chained color space conversion errors with many rgb color spaces

October 10, 2014 JimK

[Note: this post has been extensively rewritten to correct erroneous results that arose from not performing adequate gamut-mapping operations to make sure the test image was representable within the gamut of all the tested working spaces.]

I’ve been trying to come up with a really tough test of color space conversion, one that, if passed at some editing bit depth, would give us confidence that we could freely convert to and from just about any RGB color space without worrying about loss of accuracy. I think I’ve found such a test.

I picked 14 RGB color spaces that, in the past, some have recommended as working spaces, although several of them are obsolete as such:

  1. IEC 61966-2-1:1999 sRGB
  2. Adobe (1998) RGB
  3. ProPhoto RGB
  4. Joe Holmes’ Ektaspace PS5
  5. SMPTE-C RGB
  6. ColorMatch RGB
  7. Don-4 RGB
  8. Wide Gamut RGB
  9. PAL/SECAM RGB
  10. CIE RGB
  11. Bruce RGB
  12. Beta RGB
  13. ECI RGB v2
  14. NTSC RGB

If you’re curious about the details of any of these, go to Bruce Lindbloom’s RGB color space page and get filled in.

I wrote a Matlab script that reads in an image, assigns the sRGB profile to it, then computes from it an image that lies within all of the above color spaces. It does that with this little bit of code:

[Matlab code screenshot: color space clipping]
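The Matlab code itself appeared only as a screenshot, so it isn’t reproduced here. As a sketch of the clipping idea in Python/NumPy (my reconstruction, not the post’s script): convert the linear-light image into each working space in turn, clip to [0, 1] there, and convert back. Only two of the fourteen spaces are shown, using the usual rounded published RGB-to-XYZ matrices.

```python
import numpy as np

# RGB -> XYZ matrices (linear light, D65) for two of the tested spaces,
# rounded published values.
M_SRGB = np.array([[0.4124, 0.3576, 0.1805],
                   [0.2126, 0.7152, 0.0722],
                   [0.0193, 0.1192, 0.9505]])
M_ADOBE = np.array([[0.5767, 0.1856, 0.1882],
                    [0.2973, 0.6274, 0.0753],
                    [0.0270, 0.0707, 0.9911]])

def constrain_to_gamuts(img, m_source, m_targets):
    """Clip a linear-light image (shape (..., 3)) in the space whose
    RGB->XYZ matrix is m_source so that it lies within the gamut of
    every target space: convert into each target space, clip to
    [0, 1] there, and convert back."""
    for m_target in m_targets:
        fwd = np.linalg.inv(m_target) @ m_source   # source RGB -> target RGB
        img = np.clip(img @ fwd.T, 0.0, 1.0) @ np.linalg.inv(fwd).T
    return img

# Adobe RGB's full-intensity red is brighter than any sRGB red, so
# constraining it against sRGB pulls it inward:
adobe_red = np.array([1.0, 0.0, 0.0])
constrained = constrain_to_gamuts(adobe_red, M_ADOBE, [M_SRGB])
```

Composing the clip over all fourteen spaces in this way yields an image representable in every one of them, up to round-off — which, as the next paragraph notes, is exactly where this simple approach falls short.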

This script didn’t pull the gamut in far enough: the buildup of double-precision floating-point round-off errors still generated colors outside the gamut of some of the color spaces. So I added another gamut-shrinking step:

[Matlab code screenshot: gamutSmoosh]

This code shrinks the gamut somewhat in CIELab. I could probably get away with less shrinkage, but I got tired of watching the program go through many iterations before it finally threw a color out of gamut, forcing me to start all over again.
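The gamutSmoosh code was also a screenshot. Here is a guess at the general idea in Python/NumPy — the scale factor and the choice to shrink only chroma are my assumptions: convert to CIELab and pull a* and b* toward the neutral axis, which reduces every color’s chroma without touching lightness.

```python
import numpy as np

WHITE_D65 = np.array([0.95047, 1.0, 1.08883])  # reference white (XYZ)

def xyz_to_lab(xyz):
    """CIE XYZ (D65) -> CIELab, per the standard piecewise formula."""
    t = xyz / WHITE_D65
    f = np.where(t > (6/29)**3, np.cbrt(t), t/(3*(6/29)**2) + 4/29)
    L = 116.0*f[..., 1] - 16.0
    a = 500.0*(f[..., 0] - f[..., 1])
    b = 200.0*(f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def lab_to_xyz(lab):
    """CIELab -> CIE XYZ (D65), inverse of the above."""
    fy = (lab[..., 0] + 16.0)/116.0
    f = np.stack([fy + lab[..., 1]/500.0, fy, fy - lab[..., 2]/200.0], axis=-1)
    t = np.where(f > 6/29, f**3, 3*(6/29)**2*(f - 4/29))
    return t*WHITE_D65

def gamut_smoosh(xyz, chroma_scale=0.9):
    """Shrink the gamut in CIELab by scaling a* and b* toward the
    neutral axis; lightness is left alone."""
    lab = xyz_to_lab(xyz)
    lab[..., 1:] *= chroma_scale
    return lab_to_xyz(lab)
```

Neutral colors (a* = b* = 0) pass through unchanged, which is why the shrink is barely visible in the midtone-gray regions of the error image below.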

Here’s the sRGB image before the gamut-constraining process:

[Image: original sRGB image, scaled]

And here it is afterwards:

[Image: image after gamut shrinking]

Here’s the difference between the two in CIELab DeltaE, normalized to the worst-case error (about 45 DeltaE), with a gamma of 2.2 applied:

[Image: gamut-shrink error map]

After the gamut-constraining, the program picks a color space at random, converts the image to that color space algorithmically (no tables) in double-precision floating point, quantizes the result to whatever precision is specified, and measures the CIELab and CIELuv DeltaE from the original image. Then it does the whole thing again and again until either the computer gets exhausted or the operator gets bored.
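The loop that paragraph describes can be sketched like this — a simplification, not the post’s script: only two working spaces, a fixed seed, and maximum linear-RGB difference standing in for the CIELab/CIELuv DeltaE measurements.

```python
import numpy as np

# RGB -> XYZ matrices (linear light, rounded published values)
SPACES = {
    "sRGB":      np.array([[0.4124, 0.3576, 0.1805],
                           [0.2126, 0.7152, 0.0722],
                           [0.0193, 0.1192, 0.9505]]),
    "Adobe RGB": np.array([[0.5767, 0.1856, 0.1882],
                           [0.2973, 0.6274, 0.0753],
                           [0.0270, 0.0707, 0.9911]]),
}

def quantize(img, bits):
    """Round to bits-bit integer precision; bits=None leaves doubles alone."""
    if bits is None:
        return img
    levels = 2**bits - 1
    return np.round(img*levels)/levels

def chained_error(img_srgb, n_conversions, bits, rng):
    """Run the image through a random chain of working-space
    conversions, quantizing after each one, then convert back to
    sRGB and return the max absolute linear-RGB difference from
    the original."""
    names = list(SPACES)
    img, current = img_srgb, "sRGB"
    for target in rng.choice(names, size=n_conversions):
        m = np.linalg.inv(SPACES[target]) @ SPACES[current]
        img = quantize(img @ m.T, bits)
        current = target
    m = np.linalg.inv(SPACES["sRGB"]) @ SPACES[current]
    img = quantize(img @ m.T, bits)
    return np.abs(img - img_srgb).max()

# Midtone test image, so every chained value stays comfortably in gamut.
rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))*0.5 + 0.25
```

With bits=None the chain of matrices telescopes and the residual is pure floating-point round-off; with small bit depths the per-step rounding dominates, which is the pattern the plots below show.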

Here’s what happens when you leave the converted images in double precision floating point:

[Graph: DeltaE vs. number of conversions, double-precision floating point]

The worst of the worst is around 5 trillionths of a DeltaE.

If we quantize to 16 bit integers after every conversion:

[Graph: DeltaE vs. number of conversions, 16-bit quantization]

The worst case error is less than a tenth of a DeltaE, and the mean error is a little over 1/100th of a DeltaE.

With 15-bit quantization, here is the situation:

[Graph: DeltaE vs. number of conversions, 15-bit quantization]

More or less the same as with 16-bit quantization, but the errors are twice as bad. The worst-case error doesn’t get over one DeltaE until about 40 conversions, though.
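The quantization floor behind these numbers is easy to estimate from first principles: an n-bit integer encode of a [0, 1] signal has 2^n − 1 steps, so the worst per-channel rounding error is 1/(2(2^n − 1)) — about 7.6e-6 at 16 bits, almost exactly twice that at 15 bits (consistent with the doubling above), and about 2.0e-3 at 8 bits.

```python
def max_quantization_error(bits):
    """Worst-case per-channel rounding error for an n-bit integer
    encode of a [0, 1] signal with 2**bits - 1 steps."""
    return 1.0/(2*(2**bits - 1))

for bits in (16, 15, 8):
    print(f"{bits:2d}-bit: {max_quantization_error(bits):.3e}")
```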

With 8-bit quantization, we see a different story: the quantization errors dominate the conversion errors and become obvious quickly:

[Graph: DeltaE vs. number of conversions, 8-bit quantization]



Unless otherwise noted, all images copyright Jim Kasson.