In the middle of a long discussion about the differences between the color information in raw files from the Sony a7RIII and a7RIV, and whether a firmware change could account for them (I said no; the person with whom I was conversing said yes), there was this statement:
> i think when people like me refer to color science for raws, we are talking about what unedited raws look like in our developer of choice (with our settings) and how easy it is to massage that into something we like.
Conflating the camera and the raw development is a fundamental error here, albeit one that is rarely stated explicitly. I think many photographers unconsciously perform this pernicious elision. I believe that many of the debates that we see on the web such as:
- Whether Sony colors are better than Nikon colors
- Whether Canon colors are better than Sony colors
stem from misconceptions resulting from mentally combining two inherently disparate things. In past years, I’ve written in great detail about color reproduction in digital photography, and explained the path from the camera’s production of the raw file to what you see on the screen of your image editor, but there is so much information there that it puts off many people with only a casual interest in the subject. So, the confusion persists, and with it, endless arguments with no resolution. Hence, this more-focused post.
In a nutshell:
- Color is a psychological metric, not a physical one; it relates to how people perceive spectra
- The camera is responsible for the image data recorded in the raw file
- The data in the raw file is not color, but responses of the camera’s sensor to light
- The raw developer is responsible for turning that data into colors
- Both the camera and the raw developer influence the colors in the final image
- Of the two, the raw developer has the greater impact
- The part of the raw developer that most affects the colors in the final image is the color profile
If you accept the above bullet points, and are incurious about the whys and wherefores, you can stop reading right now. If some of the above seems unclear, or even wrongheaded, then I am providing some explanation and justification below. Color is a complex subject. I spent six years doing color science research for IBM, and there are still many things that the psychologists know that I don’t, and even many things the psychologists don’t know. It’s hard to boil this down, but I’ve tried to steer a course midway between covering all the details and showing you the math and simplifying to the point of error. If you find something in what follows confusing, just skip it and keep reading; I’ve tried to write summaries of the main points. If you want to ask a question, use the comment facility, and I’ll do my best to answer.
What’s in a raw file
The camera is responsible for creating the raw file, and the raw developer is tasked with taking that information and making an image from it. The raw file consists of three types of data:
- The raw image data
- A JPEG preview image
- Data about the camera, the conditions of the exposure, and the like. This is collectively called metadata.
Here’s a surprise: there are no colors in the raw image data, just the response of the camera to the light that fell on the sensor. A raw file is just a special kind of TIFF file, and you can look at the image data before it gets to the raw developer if you have the right tools. It will look sort of like a black-and-white version of what the camera saw, but with a checkerboard-like pattern superimposed.
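To make the checkerboard pattern concrete, here is a toy sketch. It assumes an RGGB Bayer layout and uses made-up per-channel sensitivities; real sensors and filter dyes behave differently, but the principle is the same: even a perfectly uniform gray scene produces a repeating 2x2 pattern in the undemosaiced raw data, because adjacent photosites sit behind different filters.

```python
import numpy as np

# Toy sketch: a uniform mid-gray patch seen through an RGGB Bayer
# color filter array. The per-channel responses are invented numbers,
# chosen only to show why undemosaiced raw data looks checkerboarded.
response = {"R": 0.45, "G": 0.80, "B": 0.30}  # hypothetical channel gains

def bayer_mosaic(rows, cols):
    """Return the raw values an RGGB sensor would record for a flat gray scene."""
    raw = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            if r % 2 == 0:
                ch = "R" if c % 2 == 0 else "G"   # even rows: R G R G ...
            else:
                ch = "G" if c % 2 == 0 else "B"   # odd rows:  G B G B ...
            raw[r, c] = response[ch]
    return raw

# Each 2x2 tile repeats R G / G B: a single gray becomes a pattern.
print(bayer_mosaic(4, 4))
```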
Under most conditions, the raw developer ignores the JPEG preview image. (Aside: ever wonder why there's an underscore character in front of some raw file names? If it's there, it indicates that the preview image is in the 1998 Adobe RGB color space; otherwise, it's in sRGB.) The raw developer operates on the raw data, using information in the metadata, to produce the image you see in the raw developer.
By far the greatest in-camera contributors to the final image color are the spectral characteristics of the three (it's usually, but not always, three) dyes in the color filter array (CFA), together with the spectral filtering of the infrared filter (aka hot mirror) and the spectral response of the silicon itself. Foveon sensors are an outlier; for them, the absorption characteristics of the silicon in the sensor replace the CFA dyes in determining the color information in the raw files.
The in-camera processing between the capture of the image on the sensor and the writing of the raw file has almost nothing to do with the color of the final image. Thankfully, most cameras no longer subtract out the black point before writing the raw file, and visible noise differences away from the darkest shadows are almost entirely due to photon noise. Calibration and white-balance prescaling don't affect color except to reduce sample-to-sample variation.
So, the color information encoded in the raw files comes down almost entirely to the hardware.
If all hot mirrors were perfect, and all CFA filter spectra combined with the silicon spectral response were a linear combination of the responses of the human eye, there would be no difference at all in the color obtainable from various brands and models of cameras. Sadly, that condition – known to color aficionados as the Luther-Ives criterion – is met by precisely zero consumer cameras.
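The Luther-Ives test is, at bottom, a linear-algebra question: can the camera's three spectral sensitivity curves be written as a 3x3 mix of the human color matching functions? A minimal sketch, using invented stand-in curves for both the camera and the observer (not real CIE or sensor data), checks this with a least-squares fit:

```python
import numpy as np

# Sketch of the Luther-Ives test: a camera satisfies it if its three
# spectral sensitivities are a linear combination of the human color
# matching functions (CMFs). Both sets of curves below are invented
# toy Gaussians, sampled at 10 nm steps, purely for illustration.
wavelengths = np.linspace(400, 700, 31)

def gauss(mu, sd):
    return np.exp(-((wavelengths - mu) / sd) ** 2)

cmf = np.stack([gauss(600, 50), gauss(550, 50), gauss(450, 40)])  # fake CMFs, shape (3, 31)

def luther_ives_residual(camera_sens, cmf):
    """Least-squares fit of camera curves to span(cmf); returns relative residual."""
    M, *_ = np.linalg.lstsq(cmf.T, camera_sens.T, rcond=None)  # solve cmf.T @ M ~ camera_sens.T
    fit = (cmf.T @ M).T
    return np.linalg.norm(camera_sens - fit) / np.linalg.norm(camera_sens)

# A camera whose curves ARE a linear mix of the CMFs passes (residual ~ 0)
ideal = np.array([[1.0, 0.2, 0.0], [0.1, 1.0, 0.1], [0.0, 0.2, 1.0]]) @ cmf
print(luther_ives_residual(ideal, cmf))   # ~0: Luther-Ives satisfied

# Perturb the curves: the residual grows, so not all colors can be matched
real = ideal + 0.05 * np.random.default_rng(0).standard_normal(ideal.shape)
print(luther_ives_residual(real, cmf))    # > 0: criterion violated
```

With real measured sensitivities, the residual is never zero, which is the "precisely zero consumer cameras" point above.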
The raw developer
The raw developer turns the data encoded in the individual planes of the raw file into colors. In this case, color is a term of art; it denotes a three-channel encoding that matches the visual responses of a normal human. Spectra aren't colors. Three-plane recordings of native sensor responses aren't colors. Color is a psychological effect; all color-normal people should see the different spectra encoded as any given color as matching, assuming the viewing conditions are the same. This encoding breaks down a bit, since people classed as color-normal can still have somewhat different responses. It breaks down a lot for those who see color abnormally (we used to call them "color blind"); about 8% of males and 1% of females fall into that category, and color management doesn't work for them. It is interesting to me that people commenting on color on the web hardly ever disclose whether their color vision is normal. Since most of those people are men, it is possible that almost a tenth of the commenters are color-deficient.
There is information in the raw file metadata that raw developers can use to convert the image data to color images, but good raw converters ignore that information. Instead, they recognize the camera model and apply precomputed algorithms and lookup tables to convert to color. The information that describes the differences between cameras and intents (more on intents later) is usually called the color profile. Raw converters usually offer the user a choice among several color profiles, and many give the photographer an opportunity to create and install their own. Although not all color profiles are designed this way, I like to think of a color profile as having two components: calibration and intent.
Color profile camera calibration
Because cameras aren’t Luther-Ives devices, it is not possible to map all colors in the photographed scene under all lighting conditions to the same colors in the converted image. The objective of the calibration step is to come as close as possible. The classical way to do that is to generate something called a compromise matrix, multiply it by the data in the raw file, and generate an image in some representation that corresponds to the way most humans see color. The word to describe such an image is colorimetric. There are many colorimetric representations; each one is called a color space. Once an image is encoded in one such colorimetric space, it can be converted to any other by standard mathematical operations, with one significant limitation. Colors outside the range that can be encoded in the destination color space (the jargon is out-of-gamut colors) will be improperly represented.
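The calibration step described above can be sketched in a few lines. The 3x3 matrix below is invented for illustration; a real compromise matrix is derived by profiling a specific camera under a specific illuminant. What matters is the shape of the operation: a matrix multiply takes camera-native responses to a colorimetric space such as CIE XYZ.

```python
import numpy as np

# Minimal sketch of camera calibration: a 3x3 "compromise matrix" maps
# white-balanced camera-native (R, G, B) responses into a colorimetric
# space such as CIE XYZ. These values are made up for illustration;
# real ones come from profiling the specific camera.
COMPROMISE = np.array([
    [0.70, 0.20, 0.10],
    [0.25, 0.65, 0.10],
    [0.05, 0.15, 0.80],
])

def camera_to_xyz(raw_rgb):
    """Apply the compromise matrix to a white-balanced camera RGB triplet."""
    return COMPROMISE @ np.asarray(raw_rgb, dtype=float)

# A neutral camera response maps to the profile's white point:
print(camera_to_xyz([1.0, 1.0, 1.0]))  # each row sums to 1, so -> [1. 1. 1.]
```

Because the camera isn't a Luther-Ives device, no choice of matrix makes this mapping exact for all spectra; the matrix is a compromise, hence the name.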
In the interest of not oversimplifying, I've included some details in the above explanation that aren't strictly necessary. The stripped-down version: camera calibration is an imperfect attempt to get the colors in the image to match the colors in the scene.
Color profile intent
Most photographers don't want their images to have accurate color in them. Images look flat that way, and skin tones look pallid. The second part of the color profile is used to get the "look" that pleases most people. Different distortions from accurate color seem to work best in some circumstances and not in others, and different photographers prefer different color mappings. For these reasons, most raw developer producers supply several profiles.

Let's take Adobe as an example. In Lightroom and Adobe Camera Raw (ACR), you will almost always find the following profiles: Adobe Standard, Adobe Color, Adobe Portrait, Adobe Landscape, and Adobe Neutral. Adobe Standard is almost always the most accurate. Adobe Color is the most versatile, slightly amping up the colors in Standard to about the point where they are in Capture One's default profile. Portrait and Landscape are the least accurate, and their purposes are self-explanatory. Neutral is a flat look that is a suitable starting point for extensive manipulation by the user. For many cameras, Adobe also supplies profiles whose names start with "Camera". I'm not sure how the negotiations are carried out, but these profiles represent the camera manufacturers' ideas of what people want. If you have a Fujifilm camera, you will probably see profiles that approximate the look of popular films of yesteryear.

If that's not enough for you, there are many third-party sources for color profiles. If that's still not enough, you can make your own, starting with a kit from X-Rite or someone else. You can also get software that will allow you to edit your own profiles. It's enough to make the mind reel.
The relative impact of color profile intent and calibration
As users of profiles, we don't get to separate the calibration and intent components; when we invoke a color profile, we get both. But we can tell which affects the result more. The way to do that is to compare the results from color profiles from the same source for the same camera that have different intents (e.g., a Sony a7RIV with Adobe Color, Adobe Standard, Adobe Portrait, Adobe Landscape, and Adobe Vivid). They produce dramatically different results. Then compare the results from profiles with one intent across different cameras (e.g., the Adobe Standard profiles for Sony a7x, a9x, and Nikon Zx cameras). You will find far greater variation within the former set than the latter, which is evidence that the profile intent is the more important component.
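If you want to put numbers on "dramatically different" versus "far less variation," a simple yardstick is the CIE 1976 color difference, Delta E, between two CIELAB colors. The Lab triplets below are invented examples, not measurements from actual profiles; they just illustrate the kind of comparison described above.

```python
import numpy as np

# CIE 1976 Delta E: Euclidean distance in CIELAB. A difference of
# roughly 2.3 is often quoted as a just-noticeable difference.
def delta_e_76(lab1, lab2):
    """Color difference between two (L*, a*, b*) triplets."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

# The same patch rendered with two hypothetical intents from one vendor...
print(delta_e_76([62.0, 18.0, 12.0], [65.0, 26.0, 16.0]))  # ~9.4: easily visible
# ...versus the same intent applied to two hypothetical cameras:
print(delta_e_76([62.0, 18.0, 12.0], [62.5, 19.0, 12.5]))  # ~1.2: near threshold
```

Running that comparison over a test chart, intent-to-intent differences dwarf camera-to-camera ones, which is the pattern described above.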
Another way to get an idea of the residual errors from calibration is to make profiles for several different cameras using one of the profile-making software packages, then test the accuracy of those profiles. You will see much less variation among the results than when comparing canned profiles with different intents from different sources.
Some of the color differences are in the camera
It’s not all in the raw developer. As an example, let’s imagine that a Sony a7RIV sees two spectra that resolve to different colors as the same. No profile will be able to tell which of those spectra produced which set of values in the raw file, and the two different colors will look like the same color in the final image. Now let’s imagine that a Nikon Z7 sees two other different-color spectra as the same, but the Sony sees them as different. The Sony and the Nikon cameras will not produce the same colors from a scene containing the spectra above.
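This failure mode, camera metamerism, can be demonstrated numerically. The sketch below uses invented Gaussian curves for both the camera and the observer (no real sensor or CIE data) and constructs a second spectrum that the camera records identically to the first, by perturbing it only within the null space of the camera's response matrix. No downstream profile can separate the two, yet the eye's responses differ.

```python
import numpy as np

# Toy illustration of camera metameric failure: two different spectra
# that a (hypothetical) camera cannot tell apart but a (hypothetical)
# observer can. Every curve here is invented for illustration.
wl = np.linspace(400, 700, 31)

def gauss(mu, sd):
    return np.exp(-((wl - mu) / sd) ** 2)

camera = np.stack([gauss(610, 45), gauss(540, 45), gauss(460, 35)])  # fake CFA responses
eye = np.stack([gauss(600, 55), gauss(555, 50), gauss(450, 40)])     # fake observer CMFs

s1 = gauss(550, 80)   # a smooth test spectrum

# Build a perturbation invisible to the camera: project a random signal
# onto the null space of the camera's 3x31 response matrix.
rng = np.random.default_rng(1)
d = rng.standard_normal(wl.size)
d -= camera.T @ np.linalg.solve(camera @ camera.T, camera @ d)
s2 = s1 + 0.5 * d / np.max(np.abs(d))   # a second, camera-metameric spectrum

print(np.linalg.norm(camera @ (s2 - s1)))  # ~0: the camera records both identically
print(np.linalg.norm(eye @ (s2 - s1)))     # clearly nonzero: the eye sees a difference
```

Because the raw values for s1 and s2 are identical, the information distinguishing them is lost before the raw developer ever runs; that is the sense in which some color differences are baked into the camera.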