
the last word

Photography meets digital computer technology. Photography wins -- most of the time.


Photoshop color space conversion accuracy — 16M colors image

October 2, 2014 JimK

Bruce Lindbloom has another test image on his website: a 16.8-megapixel image containing all possible colors in an 8-bit-per-plane RGB space (2^24, about 16.8 million). The image is arranged with the colors in a regular order, so you can see where particular colors of interest are:
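Enumerating every color in such a space is straightforward. Here is a sketch in Python, run at a reduced bit depth so it finishes quickly; the nested ordering is illustrative, and Lindbloom's actual spatial arrangement differs.

```python
from itertools import product

def all_rgb_colors(bits=8):
    """Enumerate every color in a bits-per-channel RGB space.

    With bits=8 this yields 2**24 = 16,777,216 triplets -- one pixel
    per color, as in Lindbloom's test image. The (R, G, B) nested
    ordering here is illustrative only.
    """
    levels = range(2 ** bits)
    return product(levels, levels, levels)

# Demo at a reduced depth so the full list fits comfortably in memory.
colors = list(all_rgb_colors(bits=4))
print(len(colors))   # 4096 distinct colors at 4 bits per channel
print(colors[0], colors[-1])
```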

RGB16Million

I brought the image into Photoshop and changed the mode to 16-bit color. That did not give me the entire RGB gamut, since the 8 least significant bits of the values that had been 255 were assigned 0s instead of 1s, but it was close enough. I could have tweaked the image in Matlab to have it use the entire gamut, but I was afraid that I would inadvertently stumble into an implementation issue in Ps’s 16-bit image processing.
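Those zero-filled low bits are what you get when 8-bit values are promoted by zero-padding rather than by full-range scaling. A minimal sketch of the distinction (Photoshop's internal 16-bit encoding actually runs 0 to 32768, so this is the general idea, not a model of Ps internals):

```python
def shift_to_16(v8):
    """Pad with zero LSBs: 255 -> 0xFF00, short of full scale."""
    return v8 << 8

def scale_to_16(v8):
    """Multiply by 257 (= 0x0101): 255 -> 0xFFFF, full scale."""
    return v8 * 257

print(hex(shift_to_16(255)))  # 0xff00 -- low byte is all zeros
print(hex(scale_to_16(255)))  # 0xffff -- reaches the top of the range
```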

I assigned an sRGB profile to the newly 16-bit image in Ps and saved it. Then I converted it to Adobe RGB (1998) with dither off, black point compensation off, and a rendering intent of absolute colorimetric.

I brought both images into Matlab, converted them to Lab with a D65 white point and the 2-degree observer, and computed the pixel-by-pixel DeltaE. I got an average error of 0.0217, a standard deviation of 0.0552, and a worst-case error of 1.2394.
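For reference, the pipeline behind that DeltaE computation can be sketched in a few lines of Python. This is a generic implementation using the standard sRGB-to-XYZ (D65) matrix and CIE76 DeltaE, not the author's Matlab code:

```python
import math

# sRGB -> XYZ matrix (D65, 2-degree observer) and the D65 white point.
M = [(0.4124564, 0.3575761, 0.1804375),
     (0.2126729, 0.7151522, 0.0721750),
     (0.0193339, 0.1191920, 0.9503041)]
WHITE = (0.95047, 1.00000, 1.08883)

def srgb_to_lab(r8, g8, b8):
    # Undo the piecewise sRGB nonlinearity.
    def linearize(c8):
        c = c8 / 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rgb = [linearize(c) for c in (r8, g8, b8)]
    xyz = [sum(m * c for m, c in zip(row, rgb)) for row in M]
    # XYZ -> Lab with the CIE cube-root/linear-toe function.
    def f(t):
        return t ** (1/3) if t > (6/29) ** 3 else t / (3 * (6/29) ** 2) + 4/29
    fx, fy, fz = (f(v / w) for v, w in zip(xyz, WHITE))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e76(lab1, lab2):
    """CIE76 DeltaE: Euclidean distance in Lab."""
    return math.dist(lab1, lab2)

print(srgb_to_lab(255, 255, 255))   # approximately (100, 0, 0)
```

Applied per pixel over both converted images, `delta_e76` gives the error statistics quoted above.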

It is interesting that the worst-case error for this image is substantially less than for the synthetic image of Bruce Lindbloom’s desk. For the moment, my working hypothesis is that it has to do with my 16-bit, 16-million-color image not using quite the entire sRGB gamut.

Where do the biggest errors occur? I scaled the deltaE image so that the maximum value was unity, converted it to gamma = 2.2, and here’s what it looks like:
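That normalize-then-encode step can be written as follows; a minimal illustration (the function name and list-based form are mine, not the Matlab code actually used):

```python
def to_display(err, gamma=2.2):
    """Normalize errors so the maximum is unity, then gamma-encode
    for viewing, which lifts the small errors into visibility."""
    peak = max(err)
    return [(e / peak) ** (1 / gamma) for e in err]

print(to_display([0.0, 1.0, 4.0], gamma=2.0))
```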

labDiffImagesrgb2argb

 

There are errors near the black point where there is little blue. There are errors when blue and green are low that get worse when red is high, and better as blue goes up. There are low-level errors when blue is high.

If we do the above conversion in Matlab, quantizing to 16-bit unsigned integer precision, we get much lower errors. The average is 0.0011, the standard deviation is 0.00005, and the worst case is 0.0047.
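As a sanity check on what 16-bit quantization alone can contribute, here is a sketch (my own illustration, not the Matlab code used above) of rounding a [0, 1] value to 16-bit unsigned precision. The worst representation error is half a quantization step, about 7.6e-6 in the normalized signal:

```python
def quantize16(x):
    """Round a value in [0, 1] to 16-bit unsigned integer precision."""
    return round(x * 65535) / 65535

# Worst-case representation error is half a quantization step.
step = 1 / 65535
vals = [i / 9999 for i in range(10000)]
worst = max(abs(v - quantize16(v)) for v in vals)
print(worst <= step / 2 + 1e-12)   # True
```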

If we look at the normalized and gamma-corrected error image, we see a pronounced lack of “hot spots”:

labDiffImagesrgb2argbint

Zooming in, it looks like this:

labDiffImagesrgb2argbintz

When we make the complete round trip in Photoshop from sRGB to Adobe RGB and back, we get lower errors: an average error of 0.0021, a standard deviation of 0.0058, and a worst-case error of 0.2160. That leads me to believe that the one-way errors we see above may be due to differences between the Ps and Matlab implementations of the sRGB nonlinearity definition, the Adobe RGB nonlinearity definition, or both.
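One way such implementation differences can arise is in the choice of transfer curve. As an illustration (not a claim about what Ps or Matlab actually computes), here is the piecewise sRGB encoding from IEC 61966-2-1 next to the simple gamma-2.2 power law that is sometimes substituted for it; the two disagree most in the shadows:

```python
def srgb_encode(c):
    """Piecewise sRGB encoding per IEC 61966-2-1."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def gamma22_encode(c):
    """Simple power-law approximation sometimes used for sRGB."""
    return c ** (1 / 2.2)

# Largest disagreement over a dense grid of linear values in [0, 1];
# it occurs near the bottom of the range, around the curve's toe.
grid = [i / 10000 for i in range(10001)]
max_gap = max(abs(srgb_encode(c) - gamma22_encode(c)) for c in grid)
print(max_gap)
```

Even a few percent of disagreement in the encoding, surviving a one-way conversion, is enough to account for sub-DeltaE-unit discrepancies like those above; on a round trip through the same implementation, the mismatched curves largely cancel.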

Here’s what the error image looks like, normalized to the worst-case error:

labDiffImagesrgb2srgb2srgb

If we zoom in, we see this:

labDiffImagesrgb2srgb2srgbz

 

The same round trip in Matlab gives us these errors: average = 0.00006, standard deviation = 0.00008, worst-case = 0.0077.

The error image is:

labDiffImagesrgb2argb2srgbint

 

A closeup looks like this:

labDiffImagesrgb2argb2srgbintz

Again, the Photoshop errors look more systematic.

What if we bring the sRGB image into Ps, convert it to Lab and back, and look at the differences? Now the average error is 0.0040, the standard deviation is 0.0033, and the worst-case error is 0.0759, which is quite credible.

Here’s what the error image looks like; remember, the errors are magnified, since the worst-case error, which controls the normalization, is so low:

labDiffImagesrgb2lab2srgb

Here’s a closeup of the upper left corner, with the errors multiplied by 20:

labDiffImagesrgb2lab2srgbz

This looks pretty good. Overall, the Photoshop errors are higher than I got doing the equivalent calculations in double precision floating point, and there is some patterning in the error image, but the Photoshop errors look pretty good when judged on an absolute scale.

 


Unless otherwise noted, all images copyright Jim Kasson.