There are shortcuts, but they don’t all work on all cameras.
One way to get the same value in all channels is to balance to an image where all the pixels are at maximum or minimum. Nikons usually refuse such images as invalid, but this reportedly often works on Canons. Try taking a picture of the inside of the lens cap, or dramatically overexpose a picture of almost anything. To see if you have achieved the desired white balance, look at the red and blue white balance coefficients in the camera's EXIF data. They should be within five or ten percent of one.
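If you'd rather not eyeball the EXIF dump, the tolerance check is easy to script. Here's a minimal sketch that shells out to exiftool; note that WB_RGGBLevelsAsShot is a Canon MakerNotes tag, and other makes expose the multipliers under different tag names:

```python
# Minimal sketch: decide whether a raw file's as-shot white balance is close
# enough to UniWB. WB_RGGBLevelsAsShot is a Canon MakerNotes tag; adjust the
# tag name for your camera make.
import subprocess
import sys

out = subprocess.run(
    ["exiftool", "-s3", "-WB_RGGBLevelsAsShot", sys.argv[1]],
    capture_output=True, text=True, check=True).stdout.split()
r, g1, g2, b = (float(v) for v in out)
g = (g1 + g2) / 2                    # average the two green sites
red, blue = r / g, b / g             # coefficients relative to green
print(f"red {red:.3f}  blue {blue:.3f}")
print("close enough to UniWB" if abs(red - 1) <= 0.1 and abs(blue - 1) <= 0.1
      else "not UniWB yet")
```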
Another shortcut is to use the manual white balance settings to set the white balance to about 3800 K and the color bias as green as it will go. After you do this, make some test exposures and look at the EXIF white balance coefficients. You can tweak the coefficients by adjusting the manual white balance settings. Many cameras don’t have sufficient range in their manual white balance settings to get you all the way to UniWB.
Yet another quick way to get to UniWB in a camera that allows you to copy the white balance setting from an image is to get a copy of an image from that camera with UniWB all set up. You just copy the image to a memory card, put it in the camera, and tell the camera to use the white balance coefficients from the image as a white balance preset. Be sure to name the file in a way that the camera will recognize it as its own; for example, a Nikon body expects its own naming pattern, something like DSC_0001.NEF.
In value engineering, the first step you consider is eliminating the function. In that spirit, one way to shortcut UniWB is not to do it at all, but to use some other technique for ETTR. If you're pretty sure you know where the brightest spot in the image is, and if, as is the case with most highlights, it is not highly chromatic, you can meter it to set your exposure and ignore the in-camera histogram. There are several ways to express this metering technique, and different formulations speak to different photographers:
- Meter the highlight and open up three stops
- Place the highlight on Zone VIII
- Set the exposure compensation to +3 stops, and meter the highlight
All of the above are equivalent. I don't use this technique because I doubt my ability to find the highlight, and because I find it slower than snapping off an image and looking at the histogram.
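If you do want to use it, the arithmetic is simple. A worked sketch, with a hypothetical meter reading:

```python
# Worked example of "meter the highlight and open up three stops". The spot
# reading is hypothetical; aperture and ISO are held fixed.
metered_shutter = 1 / 2000                 # spot reading on the brightest highlight
ettr_shutter = metered_shutter * 2 ** 3    # open up three stops (Zone VIII)
print(f"shoot at 1/{round(1 / ettr_shutter)} s")   # -> shoot at 1/250 s
```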
Next: Preparing for monitor-based UniWB
skytrader says
Hi, nice collection of shortcuts. Regarding your last sentence, try this:
If you have set your camera to +3 exposure compensation and you meter the scene with the spot meter, just meter several different highlight spots. In aperture priority mode, the shortest displayed shutter speed marks the brightest highlight!
Of course you won't meter any light bulbs, etc. You would only do that when you want to photograph a dark ambient scene in the right mood.
Lock that spot, at the shortest shutter speed displayed by your camera's meter, with AEL, then compose/focus and you are done. I can't imagine my raw photography without UniWB anymore.
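In code form, the rule is just picking the reading with the shortest exposure time. A toy sketch with made-up spot readings:

```python
# Sketch of the rule above: with +3 EC dialed in, spot-meter several
# candidate (non-specular) highlights; the shortest shutter speed the
# camera proposes marks the brightest one. Readings are hypothetical.
readings = {"cloud": 1 / 2000, "white shirt": 1 / 1000, "pavement": 1 / 250}
brightest = min(readings, key=readings.get)        # shortest exposure time
print(f"lock exposure on: {brightest} at 1/{round(1 / readings[brightest])} s")
```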
If you have a Sony or Nikon sensor, the usual DSLR underexposure caused by crappy JPEG auto-WB histograms is not as fatal as it is with Canon sensors, where you cannot correct as much, especially at high ISO.
I use a 1DX and an RX1. Even with the fabulous RX1 sensor (more than 14 stops of dynamic range), it makes a difference to use ETTR to the max!
Isaac says
For example, a Sony SLT-A35 with manual white balance set to 3900 K and maximum green bias (G9) gives these EXIF white balance coefficients:
Blue 1.378906
Red 1.359375
Chris Noble says
“To see if you have achieved the desired white balance, look at the red and blue white balance coefficients in the camera’s EXIF data. They should be within five or ten percent of one.”
Shooting a magenta image on your monitor to set a custom WB, checking the red and blue balances in the EXIF data, and then adjusting the magenta image, I have found that you can get to 1% or better in a half-dozen iterations and about 30 minutes for the whole process (which only needs to be done once in the life of the camera).
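A simple way to run Chris's loop is to scale the patch by the measured coefficients each round. A minimal sketch (the proportional update is an assumption, not necessarily his exact procedure):

```python
# One iteration of the monitor-based UniWB loop. If the camera reports a
# red coefficient above 1, its red channel saw too little light relative to
# green, so push the patch's red up by that factor (likewise for blue).
# Repeat shoot/read/update until both coefficients are within tolerance.
def next_patch(patch_rgb, red_coeff, blue_coeff):
    r, g, b = patch_rgb
    return (min(255, round(r * red_coeff)),
            g,                                 # green is the reference channel
            min(255, round(b * blue_coeff)))

patch = (200, 120, 200)                  # starting magenta patch
patch = next_patch(patch, 1.15, 1.05)    # hypothetical measured coefficients
print(patch)                             # (230, 120, 210)
```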
Graham Byrnes says
Apologies for not letting a sleeping dog lie…. but I’m trying to educate myself about color theory and this brings up something I get hung up on.
Here's my issue: suppose I'm using, say, Adobe RGB, whose white point is D65. That is somewhere "in the middle" of the Adobe RGB triangle, at (0.3127, 0.3290, Y) in xyY coordinates.
So why is UniWB so far from the centre of the colour codes returned by my camera? Looking at a typical winter daylight photo, I see diagonal WB scales of around (2.4, 1.0, 1.27). According to RawDigger, the XYZ-to-camera matrix in my Pentax K3 is quite diagonally heavy, with some rotation between red and green. So if the above triplet represents the correct WB, that means the values coming out of the camera and camera matrix are the reciprocals of the above, viz. (0.417, 1.0, 0.787), which map back to (0.068, 1.18, 0.67) off the sensor… in which case the Bayer red cells are damn near unemployed, and the combined camera matrix and WB function are having to ramp them way up to compensate for my unreasonable human enthusiasm for red light, most of which is in reality being measured by G1 and G2.
Did I get this right?
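Checking the reciprocal step numerically, at least (the mapping back through the camera matrix uses RawDigger's numbers, not reproduced here):

```python
# Quick check of the reciprocal step above: WB multipliers of (2.4, 1.0, 1.27)
# imply relative camera responses equal to their reciprocals.
wb = (2.4, 1.0, 1.27)
print(tuple(round(1 / m, 3) for m in wb))   # (0.417, 1.0, 0.787), as stated
```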
That would suggest that human vision, while it leans towards green, still has a big appetite for red. Which recasts the evolutionary question you relayed from DPR ("Why did we evolve to see green so well?"): why did we concentrate our sensitivity on the red side of green by covering it with two sets of cones? The ability to see predators and prey against the leaves or grass?
I wonder what’s known about colour vision in other primates? I shall look 🙂
Thanks for the stimulus!
Iliah Borg says
Please consider this.
Camera space is non-colorimetric and non-neutral. Strictly speaking, it's not even a colour space, because the "units" are different and camera space is not based on primaries. Raw camera output (and that's the codes returned by the camera) is composed of measurements of light intensities, governed by the light hitting the sensor and the spectral sensitivities, and that is not colour. It's voltage.
Part of the mapping from camera space to a neutral colour space is white balance.
JimK says
As Iliah said, you shouldn’t look at the camera primaries as being colorimetric. As to why they are weighted towards the green, I believe that is to increase sensitivity.
JimK says
The so-called “green” and “red” cones have greatly overlapping response spectra. That’s why color scientists tend to call them the “gamma” and “rho” cones, or the “M” and “L” cones.
Graham Byrnes says
Interesting: it seems that the M and L receptors were generated by a gene duplication, which may be the pragmatic cause of their overlap. The S cone, on the other hand, is a modification of the UV-sensitive form found in many mammals (e.g., the mouse), and depends on 7 distinct mutations that don't have much effect when applied separately:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4270479/
JimK says
Another explanation is that the overlap helps deal with the chromatic aberration of the eye's lens.
Marc says
Hello.
Wouldn't it be possible for the camera manufacturers to give us an option to show a histogram based on the raw data instead of one based on the JPEG?
Or a metering option which determines the shutter speed, the aperture, or both (depending on which auto mode the camera is in, if you are not exposing manually), so that, say, the brightest 1% of the pixels clip but the other 99% are exposed below the raw clipping threshold? Bundled with an option to configure this percentage.
Or, zebra blinkies based on raw values?
Or am I missing the point…
I think we all should email our preferred manufacturers.
Regards
JimK says
They could do all that. Except for Magic Lantern on Canons, they don’t.
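Off-camera, at least, Marc's raw clipping check is easy to prototype. A minimal sketch using the rawpy library (the 1% figure is Marc's example; treating everything at or above the white level as clipped is a simplification):

```python
# Minimal off-camera version of Marc's suggestion: report what fraction of
# raw sensel values sit at the clipping point. Uses rawpy (LibRaw bindings).
import sys
import rawpy

with rawpy.imread(sys.argv[1]) as raw:
    data = raw.raw_image_visible          # Bayer mosaic, one value per sensel
    frac = (data >= raw.white_level).mean()

print(f"{frac:.2%} of raw values clipped (Marc's target: about 1%)")
```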