Sometimes I look at the search terms people use when they wind up on this blog. The title of today’s post is an example. It’s a question that initially seemed pretty simple to me, but looks more substantial the more I think about it. At first blush, the answers appear to be:
- You don’t want to clip colors your printer can print.
- You want to have images with as much data as possible, even if you can’t see or use that data now.
- ProPhotoRGB has other advantages, such as minimizing hue shifts when applying curves.
There are pretty obvious reasons not to use ProPhotoRGB, but they’re not compelling:
- If you’re using eight bits per color plane (you’re not still doing that, are you?), you can get contouring in a big color space like ProPhotoRGB.
- You will have to change color spaces to export to eight-bit-per-plane formats like JPEG, or run the risk of contouring. (There’s a quick sketch of the effect just after this list.)
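Here’s a quick way to see where the contouring risk comes from. This is only a toy model: the 0.5 “coverage” number below stands in for the idea that an image whose colors fit inside a small gamut occupies only a fraction of a wide space’s code values; it isn’t a measured gamut ratio.

```python
import numpy as np

# Toy model: a smooth tonal ramp whose colors all fit inside a small gamut.
# In a wide space, that ramp occupies only a fraction of the coded range,
# so at 8 bits fewer distinct code values describe it.

ramp = np.linspace(0.0, 1.0, 4096)          # smooth gradient, linear light

def encode_8bit(values, coverage):
    """Quantize to 8 bits after squeezing the ramp into `coverage`
    of the space's coded range (toy stand-in for a wider gamut)."""
    return np.round(values * coverage * 255).astype(np.uint8)

small_space = encode_8bit(ramp, coverage=1.0)   # ramp fills the space
wide_space  = encode_8bit(ramp, coverage=0.5)   # ramp uses half the codes

print(len(np.unique(small_space)))   # 256 distinct levels
print(len(np.unique(wide_space)))    # ~129 -- about half as many, hence visible steps
```

Half as many distinct levels across the same smooth gradient is where banding starts; at sixteen bits per plane the problem goes away.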
However, the issue that I’m wrestling with is the same one that may be troubling the Google searcher, or at least a variant of it: how do we do WYSIWYG image editing if our monitors can’t display some colors in the image? We can edit those colors out, but that’s not a satisfying solution, because it throws away colors our printers could have produced. Or we can just cross our fingers and hope we like what comes out of the printer. We do that for other failures of WYSIWYGness; I guess we just need to add this one to the list.
There’s another, related issue. If we’re using a color space with a gamut much larger than our printer’s, how do we keep track of what colors are out of gamut and how those colors will be mapped? The way I do it is to make friends with the gamut alarm and soft proofing features of Photoshop. Now that Lightroom has soft proofing, we don’t need to go to Photoshop to get similar information. What if you have an image with out-of-gamut colors? It’s nearly always better to do the gamut mapping yourself rather than let the color management software do it, but you don’t want to dumb down your image to what your present printer can handle. Photoshop layers and Lightroom virtual copies to the rescue.
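For anyone curious what a gamut alarm is actually doing under the hood, here’s a bare-bones sketch of the idea. The `to_printer_rgb` matrix is a made-up stand-in; a real check runs the colors through the printer’s ICC profile via a color management module, which is what Photoshop’s and Lightroom’s soft proofing do.

```python
import numpy as np

# Sketch of the idea behind a gamut alarm: convert colors into the
# destination (printer) space and flag anything that lands outside [0, 1].
# to_printer_rgb() is a hypothetical stand-in; real conversions go through
# the printer's ICC profile and a CMM, not a bare 3x3 matrix.

def to_printer_rgb(rgb_linear):
    M = np.array([[ 1.30, -0.20, -0.10],
                  [-0.10,  1.20, -0.10],
                  [-0.05, -0.15,  1.20]])   # illustrative numbers only
    return rgb_linear @ M.T

def gamut_alarm(rgb_linear, tol=1e-6):
    dest = to_printer_rgb(np.asarray(rgb_linear, dtype=float))
    return np.any((dest < -tol) | (dest > 1.0 + tol), axis=-1)

pixels = [[0.20, 0.30, 0.40],   # muted color: lands inside the printer gamut
          [0.95, 0.02, 0.02]]   # highly saturated red: pushed past 1.0
print(gamut_alarm(pixels))      # [False  True]
```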
I’m not entirely happy with my current analysis of this situation, and any help is appreciated.
Marc says
Hi Jim,
One additional issue that rarely gets addressed is that of “temporarily” clipping data during editing. What happens if I make an editing step that pushes some colors out of the gamut of my editing space and I make another step to (partly) reverse that?
In a fully “nondestructive” workflow that probably shouldn’t be an issue, but I believe there rarely is such a workflow unless you only use your RAW converter for editing.
As an example, the color gamut of my printer seems to be rather limited in the shadow areas. So what if I increase my global contrast in one step, making my darks even darker, and then lighten some of the shadows again while dodging and burning?
If I do both steps in my raw converter, I may be fine. But what if I use my raw converter for the contrast adjustment and send a TIFF to Photoshop for the dodging and burning? I see these as non-theoretical scenarios where some dark, saturated colors may be clipped when using a smaller color space in my TIFF, colors I would be able to recover when using ProPhoto. Or am I missing something?
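A toy mock-up of what I mean (the numbers and the “contrast curve” are invented; the point is only the mechanism):

```python
import numpy as np

# Step 1 pushes the deepest shadows below the floor of the intermediate
# encoding; step 2 then works on the clipped values, so the original
# separation in the shadows is gone.

shadows = np.array([0.02, 0.04, 0.06])       # dark pixel values, 0..1

contrasty = shadows * 1.6 - 0.05             # step 1: global contrast boost

# Path A: hand the result off through an intermediate that clips to [0, 1]
# (analogous to baking the step into a TIFF in a space that can't hold it).
dodged_clipped = np.clip(contrasty, 0.0, 1.0) + 0.05   # step 2: dodge

# Path B: keep the unclipped values through both steps
# (analogous to staying in the raw converter, or a roomier encoding).
dodged_unclipped = contrasty + 0.05

print(dodged_clipped)     # [0.05  0.064 0.096] -- darkest tone lost its separation
print(dodged_unclipped)   # [0.032 0.064 0.096] -- original spacing preserved
```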
Similar problems could occur any time an actual pixel layer (e.g. for a plug-in) is used in PS when using a smaller color space. I don’t know how it works when only using adjustment layers.
Marc
Jim says
Good point, Marc. With Lightroom, you don’t get a choice; the working space uses the ProPhotoRGB primaries, but with a gamma of one. You can’t ever see that, not even on the histograms, which are tweaked to make the gamma look like 2.2 or so.
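If you’re curious what that remapping does, here’s the one-line version, with the caveat that 2.2 is only an approximation of the curve the histogram display actually uses:

```python
# Lightroom's internals are linear (gamma 1.0) on the ProPhotoRGB primaries;
# the histogram is drawn through a roughly 2.2-ish transfer curve, so you
# never see the linear values directly. The 2.2 here is an approximation.
linear_mid_gray = 0.18
displayed = linear_mid_gray ** (1 / 2.2)
print(displayed)   # ~0.46 -- sits near the middle of the histogram,
                   # even though the underlying linear value is 0.18
```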
In Photoshop, you could easily produce a layer stack that clipped at some intermediate point. Using a big, big space like ProPhotoRGB is good insurance. Not entirely foolproof, but a good idea nonetheless.
Jim
Anonymous says
Don’t bother to understand which are contented to be with. Make friends who will stress players to jimmy your upwards.
Jim says
Zen.
Richard Lynch says
There are limitations on monitors. There are limitations on capture. There are limitations on output. There are pitfalls to using varied software to edit images, because each change might be a conversion. There are NOT more colors in ProPhoto (there are the same number, but mapped differently, so long as we are looking at the same number of bits). That detail is almost always missed by proponents of “larger” color spaces.
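To put rough numbers on it (the 2x gamut-volume ratio below is purely illustrative, not a measurement of any real pair of spaces):

```python
# At a given bit depth, every RGB working space carries exactly the same
# number of code values; a "bigger" space just spreads them over more gamut,
# so neighbouring codes sit further apart.

bits = 8
codes_per_channel = 2 ** bits
total_codes = codes_per_channel ** 3
print(total_codes)                     # 16777216 -- sRGB, AdobeRGB, ProPhoto alike

illustrative_volume_ratio = 2.0        # pretend the wide space has 2x the volume
step_small = 1.0 / codes_per_channel
step_wide = illustrative_volume_ratio ** (1 / 3) / codes_per_channel
print(step_small, step_wide)           # the wide space's steps are ~26% coarser
```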
If you are aware of what happens in an audio system when the amplitude of a sound can’t be handled by part of the chain, you know there can be audible distortion (and potentially damage to the equipment). The old “solution” to this issue was to keep your attenuator under one o’clock and never turn it louder. Well, there was all that extra attenuation you never got to use, and The Who might have sounded really cool if you could get it up to that 130 dB of a concert… Your speakers might be trashed afterwards… The point is not that you will do damage, but that something happens behind the scenes.
I have considered these issues for literally 20 years, and I still end up using what is essentially an sRGB workflow. Otherwise I edit what I can’t see. Technology may catch up, and that’s why I archive my originals, but if your output is to other monitors (the web) and to non-photographic media (CMYK print), there may currently be no advantage to using any space larger than AdobeRGB except a psychological one.
Better corrections might get you more of what you want in the result than depending on a color space that delivers those results mostly by accident.