In yesterday’s post, we saw that the accuracy of model-based color space conversions, as performed in Matlab using 64-bit floating-point intermediate values, is dominated by the quantizing error of 16-bit-per-color-plane images.
But what about the accuracy of such conversions in Photoshop? I took a look.
I loaded the same test image that I used for yesterday’s experiments into Ps (thanks to Bruce Lindbloom for the image):
I took the image from its native sRGB to Adobe RGB (1998) and back, using the Adobe (ACE) color engine. Then I loaded the original image and the round-trip image into Matlab, converted both to Lab, and computed the deltaE for each pixel. I repeated the round trip several more times, always starting from the latest conversion and always comparing the latest image to the original one. I computed some stats on the deltaE image, and here’s what I got:
That’s odd. There’s a fair amount of error — the worst case is about 6 DeltaE — but it doesn’t get much worse after the first iteration.
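For concreteness, here’s the shape of that measurement as a minimal numpy sketch (Python rather than Matlab). The 3x3 matrices are the usual D65 RGB-to-XYZ matrices from Bruce Lindbloom’s tables, random pixels stand in for the test image, and the only error source modeled is 16-bit quantization at each stored step; Photoshop’s actual pipeline may differ.

```python
import numpy as np

# RGB -> XYZ matrices (D65); values from Bruce Lindbloom's tables.
M_SRGB = np.array([[0.4124564, 0.3575761, 0.1804375],
                   [0.2126729, 0.7151522, 0.0721750],
                   [0.0193339, 0.1191920, 0.9503041]])
M_ADOBE = np.array([[0.5767309, 0.1855540, 0.1881852],
                    [0.2973769, 0.6273491, 0.0752741],
                    [0.0270343, 0.0706872, 0.9911085]])
WHITE_D65 = np.array([0.95047, 1.0, 1.08883])
GAMMA_ADOBE = 563.0 / 256.0          # Adobe RGB (1998) tone curve exponent

def srgb_decode(v):                  # encoded sRGB -> linear
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def srgb_encode(v):                  # linear -> encoded sRGB
    return np.where(v <= 0.0031308, 12.92 * v, 1.055 * v ** (1 / 2.4) - 0.055)

def quantize16(v):                   # round to 16 bits per channel
    return np.round(np.clip(v, 0.0, 1.0) * 65535.0) / 65535.0

def xyz_to_lab(xyz):
    t = xyz / WHITE_D65
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t),
                 t / (3 * (6 / 29) ** 2) + 4.0 / 29.0)
    return np.stack([116 * f[..., 1] - 16,
                     500 * (f[..., 0] - f[..., 1]),
                     200 * (f[..., 1] - f[..., 2])], axis=-1)

def srgb_to_lab(rgb):
    return xyz_to_lab(srgb_decode(rgb) @ M_SRGB.T)

def round_trip(rgb):
    """sRGB -> Adobe RGB -> sRGB, quantizing each stored image to 16 bits."""
    to_adobe = np.linalg.inv(M_ADOBE) @ M_SRGB   # linear sRGB -> linear Adobe
    adobe_lin = np.clip(srgb_decode(rgb) @ to_adobe.T, 0.0, 1.0)
    adobe = quantize16(adobe_lin ** (1.0 / GAMMA_ADOBE))
    back_lin = (adobe ** GAMMA_ADOBE) @ np.linalg.inv(to_adobe).T
    return quantize16(srgb_encode(np.clip(back_lin, 0.0, 1.0)))

rng = np.random.default_rng(0)
img = quantize16(rng.random((256, 3)))           # stand-in for the test image
de = np.linalg.norm(srgb_to_lab(round_trip(img)) - srgb_to_lab(img), axis=-1)
print(de.max())                                  # tiny when done in doubles
```

Iterating `round_trip` and always comparing to the original reproduces the flat part of the curve: once both images sit on the same 16-bit grids, further conversions add almost nothing.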
I made another graph using the result of the first round trip as a reference:
That’s more like what I expected the results to look like. So what’s causing the large error on the first iteration? I looked at the deltaE image with the original image as the reference and the first iteration as the comparison, normalized so that the worst case is full scale, and with a gamma of 2.2 applied:
In this visualization, the worst errors show up as the lightest areas. The worst of the worst occur in dark parts of the picture, for the most part, but not all dark areas show high errors; the top part of the picture is dark and shows low errors. Dark blue seems to be difficult: the worst Macbeth chart error is in the dark blue patch, followed by the red one. The errors occur in enough different areas of the image to rule out gamut clipping, which shouldn’t happen with this pair of color spaces anyway.
I went back to the original image, converted it to Adobe RGB in Photoshop, and compared it to the original after both were converted to Lab. The errors were very close to those of the first round trip, meaning that we lost almost all the accuracy we were going to lose going from sRGB to Adobe RGB.
What gives? The red and blue primaries for sRGB are the same as those in Adobe RGB, and the Adobe RGB green primary is such that the gamut of sRGB in xy or u’v’ chromaticity space is entirely contained within the Adobe RGB gamut in those spaces. The two spaces share a white point. The nonlinearities are different, but that shouldn’t affect the gamut.
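That containment claim is easy to check numerically. The sketch below, with chromaticity coordinates copied from the two specifications, treats each gamut as a triangle in xy and tests whether the sRGB primaries fall inside the Adobe RGB triangle; it’s a plain point-in-triangle test, not a general gamut tool.

```python
# Chromaticity (x, y) of the primaries; both spaces share the D65 white point.
SRGB  = {"R": (0.64, 0.33), "G": (0.30, 0.60), "B": (0.15, 0.06)}
ADOBE = {"R": (0.64, 0.33), "G": (0.21, 0.71), "B": (0.15, 0.06)}

def cross(o, a, b):
    """2-D cross product of (a - o) and (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def inside(p, tri):
    """True if p lies inside (or on an edge of) triangle tri."""
    signs = [cross(tri[i], tri[(i + 1) % 3], p) for i in range(3)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

adobe_tri = [ADOBE["R"], ADOBE["G"], ADOBE["B"]]
srgb_in_adobe = all(inside(p, adobe_tri) for p in SRGB.values())
print(srgb_in_adobe)  # → True: red and blue coincide, sRGB green falls inside
```

The reverse test fails, of course: the Adobe RGB green primary sits outside the sRGB triangle, which is the whole point of the larger space.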
Just to make sure, I converted the original image to Adobe RGB in Matlab, and measured the difference in Lab. Infinitesimal.
Then I took the original image to Lab in Photoshop, then back to sRGB, and got materially the same large errors. I repeated that round trip several more times, comparing each result with the result of the first round trip, and got this:
Just as with the sRGB>Adobe RGB>sRGB Photoshop conversions, it’s the first conversion that causes the main errors.
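To see how much of this could be blamed on quantization alone, here’s a sketch of the sRGB-to-Lab round trip in doubles, with the Lab image quantized to 16 bits. I don’t know Photoshop’s internal Lab encoding, so the code assumes the common TIFF-style one (L spread over 0 to 100, a and b offset by 128); under that assumption, quantization accounts for well under a tenth of a deltaE, nothing like the errors observed.

```python
import numpy as np

M_SRGB = np.array([[0.4124564, 0.3575761, 0.1804375],
                   [0.2126729, 0.7151522, 0.0721750],
                   [0.0193339, 0.1191920, 0.9503041]])
WHITE_D65 = np.array([0.95047, 1.0, 1.08883])

def srgb_decode(v):
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def srgb_encode(v):
    return np.where(v <= 0.0031308, 12.92 * v, 1.055 * v ** (1 / 2.4) - 0.055)

def xyz_to_lab(xyz):
    t = xyz / WHITE_D65
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t),
                 t / (3 * (6 / 29) ** 2) + 4.0 / 29.0)
    return np.stack([116 * f[..., 1] - 16,
                     500 * (f[..., 0] - f[..., 1]),
                     200 * (f[..., 1] - f[..., 2])], axis=-1)

def lab_to_xyz(lab):
    fy = (lab[..., 0] + 16) / 116
    f = np.stack([fy + lab[..., 1] / 500, fy, fy - lab[..., 2] / 200], axis=-1)
    t = np.where(f > 6 / 29, f ** 3, 3 * (6 / 29) ** 2 * (f - 4.0 / 29.0))
    return t * WHITE_D65

def quantize_lab16(lab):
    """Assumed TIFF-style encoding: L over 0..100, a and b offset by 128."""
    enc = np.stack([lab[..., 0] / 100,
                    (lab[..., 1] + 128) / 255,
                    (lab[..., 2] + 128) / 255], axis=-1)
    enc = np.round(np.clip(enc, 0, 1) * 65535) / 65535
    return np.stack([enc[..., 0] * 100,
                     enc[..., 1] * 255 - 128,
                     enc[..., 2] * 255 - 128], axis=-1)

rng = np.random.default_rng(1)
img = np.round(rng.random((256, 3)) * 65535) / 65535   # random 16-bit pixels
lab = quantize_lab16(xyz_to_lab(srgb_decode(img) @ M_SRGB.T))
back = srgb_encode(np.clip(lab_to_xyz(lab) @ np.linalg.inv(M_SRGB).T, 0, 1))
back = np.round(np.clip(back, 0, 1) * 65535) / 65535   # store as 16-bit sRGB
de = np.linalg.norm(xyz_to_lab(srgb_decode(back) @ M_SRGB.T) -
                    xyz_to_lab(srgb_decode(img) @ M_SRGB.T), axis=-1)
print(de.max())
```

Whatever Photoshop is doing on that first conversion, a straight floating-point Lab round trip with 16-bit storage doesn’t produce it.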