Warning: this post assumes at least a nodding familiarity with the Zone System, as described in Ansel Adams’ The Negative (do yourself a favor and get at least the 1981 edition), and in many other places.
In the previous post, I explored one aspect of applying the Zone System to digital cameras. I concluded that:
- Doing so would produce clipped highlights in many scenes, since digital cameras, unlike negative film, have no shoulder.
- A very limiting workaround is to consider digital cameras as having N+2 development baked into them.
- Another workaround for high-DR scenes is deliberate underexposure relative to the Zone System exposure.
Now I’d like to consider some other problems.
Color
The Zone System was developed for black and white negative film and paper. It depends on the light meter having the same spectral sensitivity as the film. Most light meters, then and now, are designed to emulate the photopic human visual response. Most black and white films do not. Fred Picker used to sell a filter for the Pentax 1-degree spot meter that corrected its spectral response so that it approximated that of one of the Eastman Kodak Tri-X emulsions. Of course, it couldn’t do this without stopping some photons, and the film speed had to be adjusted by the photographer, but it was a reasonable workaround. As I remember, it did not receive anything approaching overwhelming acceptance.
The problem with applying the Zone System to color films was that each layer had its own H&D curve, and thus there were three sets of Zones. A further difficulty was that the three curves did not change the same way with changes in development time, so color shifts were inevitable if anything but N development was used. Also, the time for N+1 or N-1 development would be different for each layer. That was a real problem, since each layer got the same development time.
It was, to use a technical term, a mess.
With a few exceptions — few enough that I think I can count them on one hand — all the current consumer digital cameras you can buy are color cameras. Also, you can’t change the digital equivalent to the film development; it’s baked into the sensor design (what we call development in digital photography is more analogous to printing in film photography than the development of the negative). So now you have to deal with the fact that the spectrum of your illuminant and the reflectance spectra of your subject affect the four raw planes differently, with a light meter that lumps all that together in some more-or-less-obscure manner. It still is a mess, unless you’ve calibrated your light meter to the illuminant and most-important part of the subject.
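To make that concrete, here’s a toy numpy sketch. The spectra are entirely made up (the photopic stand-in, the channel sensitivities, and the illuminants are invented for illustration, not taken from any real camera or meter); the point is only that one photopic-weighted meter reading maps to different raw channel exposures depending on the illuminant, even for a spectrally flat gray card.

```python
import numpy as np

# Toy example: made-up spectra, not real camera or meter data. The meter weights
# the light by a photopic-like curve, while each raw channel weights it by its
# own sensitivity, so one meter reading can map to different raw exposures.

wl = np.linspace(400, 700, 31)                       # wavelength samples, nm

def bump(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

photopic = bump(555, 50)                             # rough stand-in for V(lambda)
sens = {'R': bump(600, 40),                          # invented channel sensitivities
        'G': bump(540, 45),
        'B': bump(460, 35)}

def responses(illuminant, reflectance):
    light = illuminant * reflectance
    meter = np.trapz(light * photopic, wl)           # what the light meter integrates
    raw = {ch: np.trapz(light * s, wl) for ch, s in sens.items()}
    return meter, raw

gray = np.full_like(wl, 0.18)                        # spectrally flat 18% gray card
daylight = np.ones_like(wl)                          # flat, "daylight-ish" illuminant
tungsten = np.linspace(0.3, 1.7, wl.size)            # red-heavy, "tungsten-ish" ramp

for name, ill in [('daylight-ish', daylight), ('tungsten-ish', tungsten)]:
    meter, raw = responses(ill, gray)
    per_meter = {ch: round(v / meter, 3) for ch, v in raw.items()}
    print(name, per_meter)                           # raw exposure per unit meter reading
```

Run it and the per-channel exposure per unit of meter reading shifts between the two illuminants; that shift is exactly the lumping-together the meter can’t untangle.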
As far as I know, there is no effective workaround for this. You can use UniWB and the color histogram, but that’s miles away from the Zone System.
Clipping
The Zone System was formulated for B&W negative film, which clips in the shadow region. If not enough light hits the film, the density of the negative is film base density plus fog (aka Dmin), and there is no detail. There is a small non-linearity near Dmin, but it doesn’t help much. In modern parlance, we’d say the image is clipped. As more and more light hits the film, the density response becomes nonlinear, rolling off very gradually in a part of the H&D (density vs. log exposure) curve called the shoulder. So the consequences of overexposure are nowhere near as pernicious as those of underexposure; they amount mostly to an additional loss of contrast in the highlights (and as we saw in the previous post, we’re already counting on some of that).
In a CMOS digital camera, the gain is almost always adjusted so that the analog-to-digital converter (ADC) reaches full scale before the photodiode becomes significantly nonlinear. Thus there is no shoulder. As exposure increases, the values in the raw file increase linearly until they get to full scale, and anything brighter than that is rendered as full scale, with zero detail. On the dark end, the response to the light gets smaller and smaller linearly as the exposure gets lower. At the same time, the signal-to-noise ratio degrades due to a combination of photon noise and read noise. Eventually, when the exposure gets small enough, there is nothing there but noise. This is actually similar to what happens with B&W negative film, with two big differences. First, the dynamic range as measured from the top of the linear region to the noise floor is larger (at base ISO, for sure, and probably at all other ISOs, too) in a modern digital camera than in film. Second, because the digital camera is so much more sensitive to light than almost all film, the photon noise in shadow regions is more noticeable with digital than with film cameras. In fact, most film photographers never consider photon noise.
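To put rough numbers on that, here’s a minimal sketch of the idealized sensor model just described. The full well and read noise figures are made-up, illustrative values, not measurements of any particular camera; the point is the hard clip at the top and the way the SNR collapses into the noise floor at the bottom.

```python
import numpy as np

# Simplified sensor model (numbers are illustrative, not from any real camera):
# response is linear in electrons up to full well, then clips hard; the noise is
# the quadrature sum of photon (shot) noise and a fixed read noise.

full_well = 50_000          # electrons at clipping (assumed)
read_noise = 3.0            # electrons RMS (assumed)

def snr_db(signal_e):
    """SNR of the recorded signal in dB, after the hard clip at full well."""
    recorded = np.minimum(signal_e, full_well)        # no shoulder: hard clip
    noise = np.sqrt(recorded + read_noise**2)         # shot noise plus read noise
    return 20 * np.log10(recorded / noise)

# Step down in whole stops from just below clipping toward the noise floor.
stops_below_clip = np.arange(0, 15)
signal = full_well / 2.0**stops_below_clip
for s, e in zip(stops_below_clip, signal):
    print(f"{s:2d} stops below clipping: {snr_db(e):5.1f} dB")
```

A stop or two below clipping the result is dominated by photon noise; a dozen or so stops down, photon and read noise together leave essentially nothing but noise.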
The net effect of the above is that in B&W negative (and color negative) film photography, you want to guard against underexposure, while in digital photography you want to avoid overexposure. In fact, among some (non-Zone-System) film photographers, mild overexposure is called “generous” exposure.
The Zone System is a quantitative formulation of the nineteenth-century adage, “Expose for the shadows and develop for the highlights”. An equivalent 21st-century digital saying would be “Expose for the highlights, process for the shadows.”
Availability of exposure tools
When Adams invented the Zone System, the only tools available for calculating exposure were light meters and tables of common lighting conditions and their brightness. Adams wisely chose the exposure meter as his main tool, falling back on tables (in his case, memorized ones) in emergencies. The Zone System provides a rapid, elegant, serviceable approximation to a perfect sensitometric ideal. Unfortunately, the system couldn’t calibrate out lens transmission, aperture and shutter errors, bellows extension variations, reciprocity failure, and many other variables; they all had to be dealt with as modifications to the Zone-System-derived exposures.
We have two-and-a-half better exposure tools now, and since they are camera-based, they automatically compensate for many of the effects above.
Zebras. Available only in live view to DSLR shooters, and configurable to be always present in the EVF on a mirrorless camera, zebras provide a visual indication of possible highlight clipping, and identify the part of the scene where that might occur. The zebras come from the JPEG preview image, so they are affected by the white balance of the camera. The most accurate approximation to an indication of raw file clipping is obtained by using UniWB, which is described elsewhere in this blog. Unfortunately, zebras indicate only luminance clipping, but for many subjects in most lighting, that amounts to green-channel clipping, and the green channel is the most likely to saturate.
Histograms. Also derived from the JPEG preview or finder image, and therefore subject to the same white balance limitations, the histograms provide a more-detailed look at the tones in an image, but do not indicate where in the image those tones occur. For most photographers, this should be the go-to exposure tool. Especially in conjunction with zebras, it provides a level of precision and accuracy that can’t practically be obtained with a handheld exposure meter. (A raw-file counterpart to what the zebras and histograms report is sketched after the next item.)
In-camera Exposure Meters. This is the half-new tool. It’s the old exposure meter, but in the camera, where all the things that affect the amount of light falling on the sensor are calibrated out. You can make it a spot meter, an averaging one, or a smart matrix meter. When you want a good, but perhaps not optimal, exposure, and you’re in a hurry, this is your tool.
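If you want to verify after the fact what the zebras and the histogram were approximating, you can examine the raw file directly. Here’s a sketch using the rawpy library; it assumes rawpy’s usual raw_image_visible, raw_colors_visible, and white_level attributes, and the clipping margin, channel labels, and file name are placeholders, so treat it as an outline rather than a finished tool.

```python
import numpy as np
import rawpy   # assumed available; pip install rawpy

# After-the-fact check of what zebras and the in-camera histogram approximate:
# read the raw file itself and report, per Bayer channel, what fraction of the
# pixels sit at or near the clipping level. Attribute names follow rawpy's usual
# API; the margin, channel labels, and file name are placeholders.

def raw_clip_fractions(path, margin=0.02):
    with rawpy.imread(path) as raw:
        data = raw.raw_image_visible.astype(np.float64)
        colors = raw.raw_colors_visible                # per-pixel CFA channel index
        white = float(raw.white_level)
        labels = ['R', 'G', 'B', 'G2']                 # typical ordering; check color_desc
        out = {}
        for idx, label in enumerate(labels):
            chan = data[colors == idx]
            if chan.size:
                out[label] = float((chan >= (1.0 - margin) * white).mean())
        return out

if __name__ == '__main__':
    print(raw_clip_fractions('example.ARW'))           # placeholder file name
```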
Alan Dang says
Any thoughts on 16-bit CCDs in the context of the Zone System? I have historically thought that if the read noise was high enough, the extra bits would be useless.
But in the context of zone…
Imagine in a 14-bit system I had values of 100 and then 102 ADU, with 2 electrons of CMOS read noise. So 98-102 to 100-104. Delta of -2 to 6.
Imagine I had a 16-bit system with 8 electrons of read noise, and values of 400 and then 405 ADU (instead of 408; this is because it really should be just slightly higher than 101 ADU in 14-bit). 392-408 to 397-413. Delta of -11 to 21. But this spread of 30 is less than the 8 (32 in 16-bit units) it would be with a 14-bit ADC.
Now imagine that the 16-bit system has less full well capacity.
Since the 16-bit system has a lot less full well and “equal” read noise, it has worse dynamic range. That said, the ability to capture values in between 1 or 2 bits of the 14-bit system could translate into more fine detail (“tonality”?) in the absence of high dynamic range?
JimK says
Once you get to about 1 LSB of dither, adding more precision and leaving the noise the same doesn’t do much.
Take a look at these (and at the quick simulation sketched after the links):
https://blog.kasson.com/the-last-word/how-read-and-quantizing-noise-interact/
https://blog.kasson.com/the-last-word/read-noise-and-quantizing-again/
https://blog.kasson.com/the-last-word/more-on-read-noise-and-quantizing/
https://blog.kasson.com/the-last-word/dither-and-image-detail-ahd/
https://blog.kasson.com/the-last-word/dither-and-image-detail-low-contrast/
https://blog.kasson.com/the-last-word/dither-and-image-detail-natural-scene/
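For a quick, admittedly crude illustration of the point (this is not the methodology of the posts above, just a simulation along the same lines): quantize a smooth ramp carrying a fixed amount of Gaussian noise at 14 and at 16 bits, average many frames, and compare the residual error. Once the noise reaches roughly 1 LSB at 14 bits, the extra two bits stop buying much.

```python
import numpy as np

# Crude simulation, not the tests in the linked posts: quantize a smooth ramp
# carrying Gaussian noise (expressed in 14-bit LSBs) at 14 and 16 bits, average
# many frames, and measure the RMS error of the average against the clean ramp.

rng = np.random.default_rng(0)
n_frames, n_samples = 256, 4096
ramp = np.linspace(0.1, 0.9, n_samples)                # the "scene", full scale = 1.0

def rms_error_lsb14(bits, noise_lsb14):
    lsb14 = 1.0 / 2**14
    noisy = ramp + rng.normal(0.0, noise_lsb14 * lsb14, (n_frames, n_samples))
    quantized = np.round(noisy * 2**bits) / 2**bits     # ideal uniform quantizer
    err = quantized.mean(axis=0) - ramp
    return np.sqrt(np.mean(err**2)) / lsb14             # report error in 14-bit LSBs

for noise in (0.25, 0.5, 1.0, 2.0):
    print(f"noise {noise:4.2f} LSB(14): "
          f"14-bit error {rms_error_lsb14(14, noise):.3f}, "
          f"16-bit error {rms_error_lsb14(16, noise):.3f}")
```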
Alan Dang says
Great resources!
I did a quick experiment with the 4-bit and 5-bit images from the low contrast scene.
I took the JPEGs into Lightroom. +100 Color NR (0 luminance NR). +100 Contrast
When I do that, the 4-bit quantization with 1 bit of noise looks noisier than the 5-bit quantization with 2 bits of noise. Is that simply an artifact of the randomization seed?
JimK says
I don’t think so. There appears to be some small improvement above 1 LSB dither. I have seen people say that 1.6 LSB is the end of that, but AFAIK, the tests are subjective, and I haven’t had the resources (or, frankly, the will) to run them.
Jean Pierre says
Technically it is possible to have a “correct” exposure. But in reality, every manufacturer wants to have its own metering system, and wants to keep the “old” version too for the people who came from negative film.
I do not know why the manufacturers do not want to implement correct metering software!
Do they not want to, or are they too lazy to do it?
For raw shooters it is not such a great problem; it can be fixed in post-processing. But for JPEG shooters it is not easy to get a correct exposure from the camera software!
Maybe in the future… or never…
JimK says
It occurs to me that there is another reason, in addition to film’s toe and shoulder characteristics, why AA’s Zone values don’t work in modern digital cameras: veiling flare (if you’re a photographer), aka veiling glare (if you’re a lens designer), which serves to compress the dynamic range of the image field. The lenses of AA’s time had a lot more veiling flare than today’s multicoated lenses, so Zone I and Zone X were closer together when the light hit the film than they are now. This would also serve to emphasize the toe part of the H&D curve.
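A quick worked example of how strong that compression can be, with made-up numbers (the 2% flare figure is just for illustration, not a measurement of any particular lens):

```python
import math

# Illustrative numbers only: a uniform veiling flare equal to a small fraction of
# the brightest luminance is added everywhere in the image field. It barely moves
# the highlights but lifts the deepest shadows, compressing the recorded range.

scene_range_stops = 10                 # roughly Zone I to Zone X at the subject
flare_fraction = 0.02                  # assumed flare, as a fraction of the brightest luminance

l_max = 1.0
l_min = l_max / 2**scene_range_stops
flare = flare_fraction * l_max

range_with_flare = (l_max + flare) / (l_min + flare)
print(f"scene range:      {scene_range_stops:.1f} stops")
print(f"range with flare: {math.log2(range_with_flare):.1f} stops")   # about 5.6 stops here
```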