In the middle of a long discussion of the differences between the color information in raw files from the Sony a7RIII and a7RIV, and whether a firmware change was a possibility (I said not; the person with whom I was conversing said yes), there was this statement:
i think when people like me refer to color science for raws, we are talking about what unedited raws look like in our developer of choice (with our settings) and how easy it is to massage that into something we like.
Conflating the camera and the raw development is a fundamental error here, albeit one that is rarely stated explicitly. I think many photographers unconsciously perform this pernicious elision. I believe that many of the debates that we see on the web such as:
- Whether Sony colors are better than Nikon colors
- Whether Canon colors are better than Sony colors
stem from misconceptions resulting from mentally combining two inherently disparate things. In past years, I’ve written in great detail about color reproduction in digital photography, and explained the path from the camera’s production of the raw file to what you see on the screen of your image editor, but there is so much information there that it puts off many people with only a casual interest in the subject. So, the confusion persists, and with it, endless arguments with no resolution. Hence, this more-focused post.
In a nutshell:
- Color is a psychological metric, not a physical one; it relates to how people perceive spectra
- The camera is responsible for the image data recorded in the raw file
- The data in the raw file is not color, but responses of the camera’s sensor to light
- The raw developer is responsible for turning that data into colors
- Both the camera and the raw developer influence the colors in the final image
- Of the two, the raw developer has the greater impact
- The part of the raw developer that most affects the colors in the final image is the color profile
If you accept the above bullet points, and are incurious about the whys and wherefores, you can stop reading right now. If some of the above seems unclear, or even wrongheaded, then I am providing some explanation and justification below. Color is a complex subject. I spent six years doing color science research for IBM, and there are still many things that the psychologists know that I don’t, and even many things the psychologists don’t know. It’s hard to boil this down, but I’ve tried to steer a course midway between covering all the details and showing you the math and simplifying to the point of error. If you find something in what follows confusing, just skip it and keep reading; I’ve tried to write summaries of the main points. If you want to ask a question, use the comment facility, and I’ll do my best to answer.
What’s in a raw file
The camera is responsible for creating the raw file, and the raw developer is tasked with taking that information and making an image from it. The raw file consists of three types of data:
- The raw image data
- A JPEG preview image
- Data about the camera, the conditions of the exposure, and the like. This is collectively called metadata.
Here’s a surprise: there are no colors in the raw image data, just the response of the camera to the light that fell on the sensor. A raw file is just a special kind of TIFF file, and you can look at the image data before it gets to the raw developer if you have the right tools. It will look sort of like a black-and-white version of what the camera saw, but with a checkerboard-like pattern superimposed.
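If you have Python handy, here is a minimal sketch of how you could peek at that mosaiced data yourself. It assumes the rawpy package (a wrapper around libraw) and imageio are installed, and the file name is just a placeholder for one of your own raw files.

```python
# A minimal sketch, assuming rawpy (libraw bindings) and imageio are installed.
# The file name is a placeholder; substitute one of your own raw files.
import rawpy
import numpy as np
import imageio.v3 as iio

with rawpy.imread("_DSC0001.ARW") as raw:
    # raw_image_visible is the mosaiced sensor data: one number per photosite,
    # before any demosaicking or color profiling. These are not colors.
    mosaic = raw.raw_image_visible.astype(np.float32)

# Scale to 8 bits for viewing. The result looks like a monochrome rendering of
# the scene with a fine checkerboard pattern from the color filter array.
view = (255 * mosaic / mosaic.max()).astype(np.uint8)
iio.imwrite("mosaic_preview.png", view)
```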
Under most conditions, the raw developer ignores the JPEG preview image. (Aside: ever wonder why there’s an underscore character in front of some raw file names? If it’s there, it indicates that the preview image is in the 1998 Adobe RGB color space; otherwise it’s in sRGB.) The raw developer operates on the raw data, using information in the metadata, to produce the image you see in the raw developer.
The camera
By far, the greatest in-the-camera contributors to the final image color are the spectral characteristics of the three (it’s usually, but not always, three) dyes in the color filter array (CFA), together with the spectral filtering of the infrared filter (aka hot mirror) and the spectral response of the silicon itself. Foveon sensors are an outlier, and for them, the absorption characteristics of the silicon in the sensor replace the CFA dyes in determining the color information in the raw files.
The in-camera processing between the capture of the image on the sensor and the writing of the raw file has almost nothing to do with the color of the final image. Thankfully, most cameras don’t subtract out the black point before writing the raw file anymore, and visible noise differences away from the darkest shadows are almost entirely due to photon noise. Calibration and white-balance prescaling don’t affect color except to reduce sample variation.
So, the color information encoded in the raw files comes down almost entirely to the hardware.
If all hot mirrors were perfect, and all CFA filter spectra combined with the silicon frequency response were a linear combination of the responses of the human eye, there would be no difference at all in the color obtainable from various brands and models of cameras. Sadly, that condition – known to color aficionados as the Luther-Ives criterion – is met by precisely zero consumer cameras.
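For the curious, here is a rough sketch of how you might quantify how far a camera is from the Luther-Ives condition, assuming you have measured spectral sensitivity curves for it; the arrays below are random placeholders, not real data. The idea is to find the best 3x3 linear transform from the camera curves to the CIE 1931 color matching functions and look at how big the residual is.

```python
# A rough sketch with placeholder data, not measurements of any real camera.
# cam_sens: camera spectral sensitivities (R, G, B) at N wavelengths, shape (N, 3).
# cmfs: CIE 1931 color matching functions at the same wavelengths, shape (N, 3).
import numpy as np

N = 81                                   # e.g., 380-780 nm in 5 nm steps
cam_sens = np.random.rand(N, 3)          # placeholder for measured camera curves
cmfs = np.random.rand(N, 3)              # placeholder for the CIE curves

# Best 3x3 matrix M (in the least-squares sense) such that cam_sens @ M ≈ cmfs.
M, *_ = np.linalg.lstsq(cam_sens, cmfs, rcond=None)

residual = cmfs - cam_sens @ M
relative_error = np.linalg.norm(residual) / np.linalg.norm(cmfs)
print(f"relative fit error: {relative_error:.3f}")
# A Luther-Ives camera would give a relative error of zero; real cameras don't,
# which is why no single matrix can make them colorimetrically exact.
```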
The raw developer
The raw developer turns the data encoded in the individual planes of the raw file into colors. In this case, color is a term of art, and represents a three-channel encoding that matches the visual responses of a normal human. Spectra aren’t colors. Three-plane recordings of the native sensor responses aren’t colors. Color is a psychological effect; all color-normal people should see the different spectra encoded as any given color as matching, assuming the viewing conditions are the same. This encoding breaks down a bit, because the responses of color-normal people vary somewhat. It breaks down a lot in the case of those who see color abnormally (we used to call them “color blind”). About 8% of males and about 1% of females fall into that category. Color management doesn’t work for those people. It is interesting to me that people who comment on color on the web hardly ever disclose their color vision normalcy. Since most of those people are men, it is possible that almost a tenth of the commenters are color-deficient.
There is information in the raw file metadata that raw developers can use to convert the image data to color images, but good raw converters ignore that information. Instead, they recognize the camera model, and apply precomputed algorithms and lookup tables to convert to color. The information that describes the differences between cameras and intents (more on intents later) is usually called the color profile. Usually, raw converters offer the user a choice among several raw profiles, and many give the photographer an opportunity to create and install their own profiles. Although not all color profiles are designed this way, I like to think of the color profile as having two components: calibration and intent.
Color profile camera calibration
Because cameras aren’t Luther-Ives devices, it is not possible to map all colors in the photographed scene under all lighting conditions to the same colors in the converted image. The objective of the calibration step is to come as close as possible. The classical way to do that is to generate something called a compromise matrix, multiply it by the data in the raw file, and generate an image in some representation that corresponds to the way most humans see color. The word to describe such an image is colorimetric. There are many colorimetric representations; each one is called a color space. Once an image is encoded in one such colorimetric space, it can be converted to any other by standard mathematical operations, with one significant limitation. Colors outside the range that can be encoded in the destination color space (the jargon is out-of-gamut colors) will be improperly represented.
In the interest of not oversimplifying, I’ve added some details in the above explanation that aren’t strictly necessary. The stripped-down version is this: camera calibration is an imperfect attempt to get the colors in the image to match the colors in the scene.
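To make the matrix step concrete, here is a toy sketch of the classical pipeline described above. The white-balance gains and the compromise matrix are made-up placeholders, not values for any real camera; the XYZ-to-sRGB matrix and transfer curve are the standard ones.

```python
# Toy sketch of the classical matrix pipeline. The white-balance gains and the
# compromise matrix are placeholders, not values for any real camera.
import numpy as np

wb_gains = np.array([2.0, 1.0, 1.5])          # hypothetical R, G, B multipliers
compromise_matrix = np.array([                # hypothetical camera RGB -> XYZ (D65)
    [0.6, 0.3, 0.1],
    [0.2, 0.7, 0.1],
    [0.0, 0.1, 0.9],
])
xyz_to_srgb = np.array([                      # standard XYZ (D65) -> linear sRGB
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def develop(camera_rgb):
    """camera_rgb: demosaicked, linear camera values in [0, 1], shape (..., 3)."""
    balanced = camera_rgb * wb_gains               # white balance
    xyz = balanced @ compromise_matrix.T           # the calibration step
    linear_srgb = xyz @ xyz_to_srgb.T              # into a standard color space
    # Out-of-gamut colors land below 0 or above 1. Clipping is the crudest
    # possible gamut mapping; those colors are improperly represented.
    clipped = np.clip(linear_srgb, 0.0, 1.0)
    return np.where(clipped <= 0.0031308,          # sRGB transfer curve
                    12.92 * clipped,
                    1.055 * clipped ** (1 / 2.4) - 0.055)

print(develop(np.array([0.2, 0.4, 0.3])))
```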
Color profile intent
Most photographers don’t want their images to have accurate color in them. They look flat that way, and skin tones look pallid. The second part of the color profile is used to get the “look” that pleases most people. Different distortions from accurate color seem to work best in some circumstances and not in others. Different photographers prefer different color mappings. For these reasons, different profiles are supplied by most raw developer producers. Let’s take Adobe as an example. In Lightroom and Adobe Camera Raw (ACR), there are almost always the following profiles: Adobe Standard, Adobe Color, Adobe Portrait, Adobe Landscape, and Adobe Neutral. Adobe Standard is almost always the most accurate. Adobe Color is the most versatile, slightly amping up the colors in Standard to about the point where they are in Capture One’s default profile. Portrait and Landscape are the least accurate, and their purpose is self-explanatory. Neutral is a flat look that is a suitable starting point for many extensive manipulations by the user. For many cameras, Adobe also supplies profiles that start with “Camera”. I’m not sure how the negotiations are carried out, but these profiles represent camera manufacturers’ ideas of what people want. If you have a Fujifilm camera, you will probably see profiles that approximate the look of popular films of yesteryear. If that’s not enough for you, there are many third-party sources for color profiles. If that’s still not enough, you can make your own starting with a kit from X-Rite or someone else. You can also get software that will allow you to edit your own profiles. It’s enough to make the mind reel.
The relative impact of color profile intent and calibration
As users of profiles, we don’t get to separate the calibration and intent components. When we invoke a color profile, we get both. But we can tell which affects the result the most. The way to do that is to compare the results from color profiles from the same sources for the same cameras that have different intents (e.g., a Sony a7RIV with Adobe Color, Adobe Standard, Adobe Portrait, Adobe Landscape, and Adobe Vivid). They produce dramatically different results. Then compare the results from profiles with one intent from different cameras (e.g., Adobe Standard profiles for Sony a7x, a9x, and Nikon Zx cameras). You will find far greater variation among the former set than the latter, which is evidence that the profile intent is the more important component.
Another way to get an idea of the residual errors from calibration is to make profiles for several different cameras using one of the profile-making software packages, then test the accuracy of those profiles. You will see much less variation among the results than when comparing canned profiles with different intents from different sources.
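If you want to put numbers on that comparison, one common metric is the average color difference over a test chart. Here is a minimal sketch, assuming you already have the rendered and reference CIELAB values for the 24 ColorChecker patches (the arrays below are placeholders), using the simple CIE76 ΔE:

```python
# Minimal sketch: mean CIE76 delta E between rendered and reference patch values.
# Both arrays are placeholders; in practice the first comes from a chart shot
# rendered through the profile under test, the second from the chart's reference data.
import numpy as np

rendered_lab = np.random.rand(24, 3) * [100, 255, 255] - [0, 128, 128]   # placeholder
reference_lab = np.random.rand(24, 3) * [100, 255, 255] - [0, 128, 128]  # placeholder

delta_e = np.linalg.norm(rendered_lab - reference_lab, axis=1)  # CIE76, per patch
print(f"mean dE76 = {delta_e.mean():.2f}, max dE76 = {delta_e.max():.2f}")
```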
Some of the color differences are in the camera
It’s not all in the raw developer. As an example, let’s imagine that a Sony a7RIV sees two spectra that resolve to different colors as the same. No profile will be able to tell which of those spectra produced which set of values in the raw file, and the two different colors will look like the same color in the final image. Now let’s imagine that a Nikon Z7 sees two other different-color spectra as the same, but the Sony sees them as different. The Sony and the Nikon cameras will not produce the same colors from a scene containing the spectra above.
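That failure is a form of camera metamerism, and you can simulate it with the same ingredients as the Luther-Ives sketch above: build a second spectrum that differs from the first only by a component the camera cannot see, then compare what the camera and the eye make of the pair. All the curves here are placeholders, and the constructed spectrum is not necessarily physically realizable; it is just a toy.

```python
# Toy sketch of camera metamerism; all curves are placeholders, not real data.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
N = 81
cam_sens = rng.random((N, 3))        # placeholder camera sensitivities
cmfs = rng.random((N, 3))            # placeholder CIE color matching functions

spectrum_a = rng.random(N)
# Add a component the camera cannot see (it lies in the null space of the
# camera's sensitivities) to get a physically different second spectrum.
metameric_black = null_space(cam_sens.T)[:, 0]
spectrum_b = spectrum_a + 0.1 * metameric_black

raw_a, raw_b = spectrum_a @ cam_sens, spectrum_b @ cam_sens   # what the camera records
xyz_a, xyz_b = spectrum_a @ cmfs, spectrum_b @ cmfs           # what the eye would see
print(np.allclose(raw_a, raw_b), np.allclose(xyz_a, xyz_b))   # True, False (almost surely)
# No profile can recover the distinction: the camera threw the information away
# before the raw developer ever saw the file.
```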
CarVac says
The one exception to this is when a camera outputs its own DNG with the color profile specified. Then you should expect much more consistent results independent of the raw processor, unless it chooses to ignore the embedded profile/matrix by default.
Ilya Zakharevich says
Thanks for a great exposition! However, there was one place which did ring very wrong WHEN I READ IT:
• The data in the raw file is not colors, but responses of the camera’s sensor to light
• The raw developer is responsible for turning that data into ⟨colors⟩
It is the braced word “⟨colors⟩” which sounds VERY wrong — when it is in such a precise context. And later you indeed explain that these “⟨colors⟩” are not COLORS, but were used above essentially as “a figure of speech”.
I would very much prefer if instead of “colors” you would write something like
“colors” (we would clarify this later).
JimK says
I added a prefatory bullet:
Color is a psychological metric, not a physical one; it relates to how people perceive spectra
Does that help?
Ilya Zakharevich says
“Since most of those people are men, it is possible that almost a tenth of the commenters are color-deficient.”
In my (very limited) experience: when I wrote a paper which used color-coding as a way to present information, I found out that more than 1/3 of my test male readers were color-blind. Apparently, (like lefties) color-blind people seem to be very over-represented in scientific circles…
This may be applicable to your audience too…
Zé De Boni says
Quite a colorful picture of the color reproduction / perception process! I see it as a simple concept with a complex amount of variables. It is really hard to list all of them in just one page. Surely you could write a thick book on this subject, this is just a summary.
Let me just add one important ingredient to this cooking pan: the presentation medium. I mean the viewing screen or printing paper. Both play important roles in the way we perceive and evaluate the results. And although there are tools to calibrate the former and profiles for the latter, I calibrate my monitor visually to mimic the printed result. Then I rely on print tests to achieve the result that, rather than fitting my taste (or technical wisdom), may impact the viewers as I wish.
That is absolutely psychological and subjective.
Jack Hogan says
Good one Jim.
Nate says
Thanks for the clear explanation Jim. If digital photography is ever going to emerge from its aesthetic infancy this is something that photographers desperately need to understand! Steve Yedlin (http://www.yedlin.net/OnColorScience/index.html) is doing his best to help the cinema world move past the belief that the camera is the most important part of the “look” of your footage and to open the door to fully harnessing the power to shape and direct your color; we desperately need similar voices in the still photography world!
Something I’ve been wondering is if you use an app like Raw Photo Processor to debayer and save a non-color managed, untagged TIF (using RPP’s Raw 16 Bit TIFF mode which doesn’t apply any raw profile), would the image technically be in the camera’s “native” color space, allowing you to directly compare the characteristics of one camera’s native, unprocessed, color response to another’s? Is this the image that raw profiling applications (like DcamProf/Lumariver) use to create RAW profiles? And, finally, do manufacturers use any processing or metadata to influence/manipulate the “color” of this raw image or is it really objectively raw luminance data as seen by the sensor?
Thanks!
JimK says
Yes. You can also use dcraw, libraw, or RawDigger to export the mosaiced data. In dcraw, it’s called “document” mode.
JimK says
It’s pretty close to raw. Hasselblad may be an exception, as they do a lot of calibration on their cameras. There is white-balance prescaling with Nikons, and some other prescaling with Sonys. Sony does some vignetting correction before it writes some raws.
Michael Klein says
“Sony does some vignetting correction before it writes some raws.” It rather seems to be correction for sensor shading and I wish they wouldn’t do it.
Christian says
You only need to turn it off; it is a source of banding issues anyway.
JimK says
There are some situations where you can’t turn all of it off.
Michael Klein says
Lens shading correction can be turned “off”. The other correction can’t for as long as the camera “recognizes” a lens (it is not “on” for vintage lenses, for example). If you cover the lens contacts, it is also “off” for native lenses.
FredD says
Is the CFA of consumer cameras fully stable (for all practical purposes), will it behave essentially identical after 5, 10, 15, or 20 years of camera use as when new? (Even if we keep the same camera, and do repeat photography, for most of us it isn’t under laboratory conditions, nor with calibration to standards, so we wouldn’t know if it deviates by a modest amount). If there is any instability, I’d expect it to manifest earlier in mirrorless cameras, which have their sensors exposed more than DSLRs. (And even more-so if that mirrorless-camera photographer is either a mad dog or an Englishman!)
Also, any thoughts on the benefits of CFAs more complex than the three colors of Bayer? It seems that as sensor resolution increases, there is room for a greater number of narrower bands in the CFA. Not as elaborate as multispectral remote sensing, but still more complex than what we have now with the three colors of Bayer. (HP used to make a flat-bed scanner with 6 colors, IIRC. No experience with it, so I don’t know if there was a worthwhile gain). More colors would be more computationally intensive, though.
JimK says
I don’t know, but I don’t think so. Dyes fade, even when they’re not exposed to light.
Yes. See this:
https://blog.kasson.com/the-last-word/how-cameras-and-people-see-color/
Dave says
I believe pigments dominate CFAs today and are very stable.
Fran says
Hello from Spain. I don’t know if you will read this, as this post is from 2019. First, I want to say that I am a wedding photographer and I have not studied photography, nor color science, but I love to read and learn from experts, and I have studied a lot on my own. However, some things written here are still difficult for me to understand.
I would like to ask you something. I have heard from many good wedding photographers that WB affects the raw data, as they see differences in editing and they can’t achieve the same color if the same photo is taken at, let’s say, 4500 K in one case and 7000 K in another. I have read in other places, as I also infer from what I read here, that RAW data is not altered by WB as WB is in metadata. Do you have an explanation for why these theories exist? And I would like to ask you something else. I use a Nikon d750m and shoot with the Camera Standard profile; do you recommend using Adobe Standard? Which color profile affects the raw data the least, so that I have more information to edit the photo?
Thanks for your time.
JimK says
“I have read in other places, as I also infer from what I read here, that RAW data is not altered by WB as WB is in metadata.”
That is correct.
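If you want to see that for yourself, here is a minimal sketch in Python using the rawpy package (a libraw wrapper); the file name is a placeholder. The white balance the camera chose is just metadata that the developer reads and applies (or ignores) at development time; the mosaic data it operates on is the same either way.

```python
# Sketch: white balance lives in the metadata and is applied at development time.
# The file name is a placeholder; rawpy (libraw bindings) is assumed to be installed.
import rawpy

with rawpy.imread("wedding_0001.NEF") as raw:
    print(raw.camera_whitebalance)       # the as-shot multipliers, stored as metadata
    # Two different developments of the very same raw data:
    as_shot = raw.postprocess(use_camera_wb=True)
    other = raw.postprocess(user_wb=[2.0, 1.0, 1.5, 1.0])  # arbitrary multipliers
# The two renderings differ, but the raw data in the file was never changed.
```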
Barry C says
In my efforts to better understand this, a thought experiment:-
Suppose we have two separate scenes which just happen to be identical (like I say, it is a thought experiment!). So the physical properties of the light reflecting off of any one point, are identical in both scenes. Same electromagnetic frequencies in the visible light spectrum, at the same intensities. So any one person would perceive the scenes to be exactly the same, because the physical light data presented to their eyes would be identical from both scenes.
And the notion of the scenes being under different lighting is a non-issue in this thought experiment, because we are saying the light reflecting off of it is identical in both cases.
And the issue of people having different colour perceptions is also not an issue, because this thought experiment is about the scene data that arrives at someone’s eyes, before it has even entered their vision system at all.
So now to take this thought experiment a bit further. Suppose one of the scenes is a real scene, like a bit of landscape, whatever. And suppose the other scene is in fact a replicated image of it, but in our perfect thought experiment here, is a 100% faithful rendition of it, each point in it reflecting the exact same light wavelengths and intensities as the original scene. So again for any one person, their colour perception of both scenes – the original and the replication – would still have to be identical surely, because the physical properties of the light being fed to their eyes is identical in both cases.
Now I appreciate that 100% matching of light frequencies and intensities between the original scene and a rendition of it is impossible, but how close can I get to it? Surely with modern technology I should be able to get pretty close?
So back to the real world. If we are only talking about one observer – me in this case – then it feels like I should be able to rely on my own colour perceptions to assess if the rendered image is close to what I perceive in the original. If the physical light properties from the real scene and the rendered image are close, then my colour perceptions would also be close, surely?
This comment is prompted by the fact that on a path I often walk, there are some beautiful grasses, with wonderfully subtle variations in yellows, greens and browns, amidst green bushes and with bright yellow flowers amongst it. It always makes me stop and look at the beauty and subtlety of it, and I never can reproduce that in a photo.
My camera is a Panasonic FZ1000 II. Only recently started shooting in RAW. So far only used Panasonic’s image processing software, SILKYPIX Developer Studio 8 SE, as not wanting to spend a lot of money on something I may then find doesn’t suit me.
JimK says
That would require a spectrophotometer at every pixel in the sensor, and an arbitrary spectrum generator at every pixel in the display device. Extremely expensive for decent resolution, and not remotely practical for consumer usage.
JimK says
If the reproduction is spectral, then the only thing that would make the colors appear different to you is your state of adaptation.
JimK says
You are describing what Hunt calls spectral reproduction.
https://blog.kasson.com/the-last-word/the-color-reproduction-problem/