

Why use ProPhotoRGB if my monitor can’t show it?

September 28, 2012 JimK 5 Comments

Sometimes I look at the search terms people use when they wind up on this blog. The title of today’s post is an example. It’s a question that initially seemed pretty simple to me, but appeared more substantial the more I thought about it. At first blush, the answers appear to be:

  • You don’t want to clip colors your printer can print.
  • You want to have images with as much data as possible, even if you can’t see or use that data now.
  • ProPhotoRGB has other advantages, such as minimizing hue shifts when applying curves.

There are pretty obvious reasons not to use ProPhotoRGB, but they’re not compelling:

  • If you’re using eight-bit-per-color-plane color depth (you’re not still doing that, are you?), you can get contouring in a big color space like ProPhotoRGB; the sketch after this list puts rough numbers on how much coarser the steps between adjacent code values become.
  • You will have to change color space to export to eight-bit-per-plane formats like JPEG, or run the risk of contouring.
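
To put rough numbers on that contouring risk, here’s a minimal sketch (mine, not part of the original argument) that compares the size of a single 8-bit code-value step in sRGB and in ProPhotoRGB, measured as CIE76 delta E. The tone curves (plain 2.2 and 1.8 power laws instead of the exact piecewise curves), the conversion matrices, the white points, and the sample color are all simplifications chosen for illustration:

```python
# A rough sketch (not a colorimetry library) of how big one 8-bit code-value
# step is in sRGB versus ProPhotoRGB, measured as CIE76 delta E. Simplifying
# assumptions: plain power-law tone curves (2.2 and 1.8), approximate
# RGB-to-XYZ matrices, and Lab taken against each space's own white point
# (D65 vs. D50) with no chromatic adaptation.
import numpy as np

SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])      # D65
PROPHOTO_TO_XYZ = np.array([[0.7977, 0.1352, 0.0313],
                            [0.2880, 0.7119, 0.0001],
                            [0.0000, 0.0000, 0.8249]])  # D50
D65 = np.array([0.9505, 1.0000, 1.0888])
D50 = np.array([0.9642, 1.0000, 0.8249])

def xyz_to_lab(xyz, white):
    t = xyz / white
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

def code_to_lab(code, gamma, matrix, white):
    linear = (np.asarray(code, dtype=float) / 255.0) ** gamma   # decode the tone curve
    return xyz_to_lab(matrix @ linear, white)

def step_delta_e(code, gamma, matrix, white):
    # CIE76 delta E between a code value and the same value with R bumped by one.
    bumped = np.asarray(code) + np.array([1, 0, 0])
    return np.linalg.norm(code_to_lab(bumped, gamma, matrix, white)
                          - code_to_lab(code, gamma, matrix, white))

code = [200, 60, 60]   # an arbitrary saturated reddish code value; the same
                       # triplet is of course a more saturated color in ProPhoto
print("sRGB     step:", step_delta_e(code, 2.2, SRGB_TO_XYZ, D65))
print("ProPhoto step:", step_delta_e(code, 1.8, PROPHOTO_TO_XYZ, D50))
```

With these approximations the ProPhoto step comes out larger, which is the point: the same 256 levels per channel are stretched over a much bigger gamut, so each step moves you farther.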

However, the issue I’m wrestling with is the same one that may be troubling the Google searcher, or at least a variant of it: how do we do WYSIWYG image editing if our monitors can’t display some of the colors in the image? We can edit those colors out, but that’s not a satisfying solution, because it dumbs down our images to what our monitors can show rather than what our printers can do. Or we can just cross our fingers and hope we like what comes out of the printer. We already do that for other failures of WYSIWYGness; I guess we just need to add this one to the list.

There’s another, related issue. If we’re using a color space with a gamut much larger than our printer’s, how do we keep track of what colors are out of gamut and how those colors will be mapped? The way I do it is to make friends with the gamut alarm and soft proofing features of Photoshop. Now that Lightroom has soft proofing, we don’t need to go to Photoshop to get similar information. What if you have an image with out-of-gamut colors?  It’s nearly always better to do the gamut mapping yourself rather than let the color management software do it, but you don’t want to dumb down your image to what your present printer can handle. Photoshop layers and Lightroom virtual copies to the rescue.
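
If you want a do-it-yourself gamut alarm outside of Photoshop or Lightroom, one rough way to approximate it is to round-trip the image through the printer profile and flag the pixels that changed appreciably. The sketch below uses Pillow’s ImageCms (LittleCMS) bindings; the file names, the assumption of an embedded source profile, and the delta E threshold are placeholders, and this is not necessarily how Photoshop’s own gamut warning works:

```python
# A sketch of a do-it-yourself gamut check: round-trip the image through the
# printer profile and flag pixels that moved appreciably in Lab. "photo.jpg"
# and "printer.icc" are hypothetical file names; the image is assumed to be an
# 8-bit RGB file with an embedded profile (e.g. ProPhotoRGB).
import io

import numpy as np
from PIL import Image, ImageCms

im = Image.open("photo.jpg").convert("RGB")
src_profile = ImageCms.ImageCmsProfile(io.BytesIO(im.info["icc_profile"]))
printer_profile = ImageCms.getOpenProfile("printer.icc")
lab_profile = ImageCms.createProfile("LAB")
INTENT = ImageCms.INTENT_RELATIVE_COLORIMETRIC

# Source -> printer -> source. Relative colorimetric clamps out-of-gamut
# colors to the printer's gamut on the way through, so those pixels won't
# survive the round trip unchanged.
to_printer = ImageCms.profileToProfile(im, src_profile, printer_profile,
                                       renderingIntent=INTENT)
round_trip = ImageCms.profileToProfile(to_printer, printer_profile, src_profile,
                                       renderingIntent=INTENT)

def to_lab(img):
    # Convert to Lab and undo PIL's 8-bit Lab encoding (L: 0..100 over 0..255,
    # a and b offset by 128) so distances are roughly CIE76 delta E.
    lab = np.asarray(ImageCms.profileToProfile(img, src_profile, lab_profile,
                                               outputMode="LAB")).astype(float)
    return np.dstack([lab[..., 0] * 100.0 / 255.0,
                      lab[..., 1] - 128.0,
                      lab[..., 2] - 128.0])

delta_e = np.linalg.norm(to_lab(im) - to_lab(round_trip), axis=-1)
out_of_gamut = delta_e > 2.0   # threshold is a judgment call, not a standard
print(f"{out_of_gamut.mean():.1%} of pixels appear to be outside the printer's gamut")
```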

I’m not entirely happy with my current analysis of this situation, and any help is appreciated.


Comments

  1. Marc says

    October 10, 2012 at 5:26 am

    Hi Jim,

    One additional issue that rarely gets addressed is that of “temporarily” clipping data during editing. What happens if I make an editing step that pushes some colors out of the gamut of my editing space and I make another step to (partly) reverse that?

    In a fully “nondestructive” workflow that probably shouldn’t be an issue, but I believe there rarely is such a workflow unless you only use your RAW converter for editing.

    As an example, the color gamut of my printer seems to be rather limited in the shadow areas. So what if I increase my global contrast in one step, thereby making my darks even darker, and then lighten some of the shadows again while dodging and burning?

    If I do both steps in my raw converter, I may be fine. But what if I use my raw converter for the contrast adjustment and send a TIFF to Photoshop for the dodging and burning steps? I see these as non-theoretical scenarios where some dark saturated colors may be clipped when using a smaller color space in my TIFF, colors I would be able to recover when using ProPhoto. Or am I missing something?

    Similar problems could occur any time an actual pixel layer (e.g. for a plug-in) is used in PS when using a smaller color space. I don’t know how it works when only using adjustment layers.

    Marc

  2. Jim says

    October 10, 2012 at 2:28 pm

    Good point, Marc. With Lightroom, you don’t get a choice; the working space uses the ProPhotoRGB primaries, but with a gamma of one. You can’t ever see that, not even on the histograms, which are tweaked to make the gamma look like 2.2 or so.

    In Photoshop, you could easily produce a layer stack that clipped at some intermediate point. Using a big, big space like ProPhotoRGB is good insurance. Not entirely foolproof, but a good idea nonetheless.

    Jim
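
Here’s a toy numeric sketch of the intermediate clipping Marc and Jim describe above. The channel values are invented: they stand in for the same dark, saturated color encoded once in a wide working space (comfortably inside the gamut) and once in a smaller space (right at the gamut’s edge). No real profiles or conversions are involved; the only thing being modeled is the clamp at the file handoff:

```python
# A toy model of "temporary" clipping at a file handoff. The numbers are
# invented; no real color management happens here.
import numpy as np

def add_contrast(x):       # stand-in for a global contrast boost
    return 1.6 * (x - 0.5) + 0.5

def lighten_shadows(x):    # stand-in for later dodging that undoes the boost
    return (x - 0.5) / 1.6 + 0.5

def handoff(x):            # writing an intermediate TIFF: anything outside 0..1 is gone
    return np.clip(x, 0.0, 1.0)

wide  = np.array([0.22, 0.40, 0.25])   # hypothetical encoding in a wide space
small = np.array([0.05, 0.45, 0.20])   # hypothetical encoding in a smaller space

print("wide space :", lighten_shadows(handoff(add_contrast(wide))))
print("small space:", lighten_shadows(handoff(add_contrast(small))))
```

In the wide encoding the contrast move never drives a channel out of range, so the later lightening restores the original values; in the smaller encoding the clamp is baked into the intermediate file, and the red channel never comes back.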

  3. Anonymous says

    October 27, 2012 at 3:07 pm

    Don’t bother to to understand which are contented to be with. Make friends who will stress players to jimmy your upwards.

    • Jim says

      October 27, 2012 at 9:38 pm

      Zen.

  4. Richard Lynch says

    October 26, 2013 at 11:24 pm

    There are limitations on monitors. There are limitations on capture. There are limitations to output. There are pitfalls to using varied software to edit images because each change might be a conversion. There are NOT more colors in ProPhoto (there are the same number, but mapped differently so long as we are looking at the same number of bits). That detail is almost always missed by proponents of “larger” color spaces.

    If you are aware of what happens in an audio system when the amplitude of the sound can’t be handled by part of the chain… there can be audible distortion (and potentially damage to the equipment). The old “solution” to this issue was to keep your attenuator under one o’clock and never turn it louder. Well, there was all that range you never got to use, and The Who might have sounded really cool if you could get it up to the 130 dB of a concert… Your speakers might be trashed afterwards… The point is not that you will do damage, but that something happens behind the scenes.

    I have considered these issues for literally 20 years, and I still end up using what is essentially an sRGB workflow. Otherwise I edit what I can’t see. Technology may catch up, and that’s why I archive my originals, but if your output is to other monitors (the web) and to non-photographic media (CMYK print), there may currently be no advantage to using any space larger than AdobeRGB except psychological ones.

    Better corrections might get you more of what you want in the result than relying on a color space that delivers those results only by accident.

