This is the sixth in a series of posts on color reproduction. The series starts here.
In my last post, I suggested a “perfect world” approach to generating compromise matrices. This approach depended on the ability to generate targets with patches of arbitrary spectral reflectances. This is currently not practical. Bummer. Forget about it, right? Maybe not.
While it’s not practical to actually generate the targets we seek, it is not only practical, but not that difficult, to simulate them in a computer. It’s also easy to “light” the simulated targets with arbitrary spectra, meaning that we can test any illuminant for which we can obtain a spectrum, and make up new illuminants as we wish.
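To make that concrete, here is a minimal sketch of the "lighting" step in Python, assuming the target reflectances and the illuminant spectrum are sampled on a common 5 nm grid; the patch spectra below are made up purely for illustration and don't come from any real target.

```python
import numpy as np

# Wavelength grid: 380 to 720 nm in 5 nm steps (69 samples).
wl = np.arange(380, 725, 5)

def light_target(reflectances, illuminant):
    """Simulate "lighting" a target: each patch's spectral radiance is the
    product of its reflectance and the illuminant's power at each wavelength.

    reflectances: (n_patches, n_wl) spectral reflectances in [0, 1]
    illuminant:   (n_wl,) relative spectral power of the illuminant
    returns:      (n_patches, n_wl) relative spectral radiances
    """
    return reflectances * illuminant[np.newaxis, :]

# Example: a made-up three-patch target under a flat (equal-energy) illuminant.
patches = np.vstack([
    np.full_like(wl, 0.18, dtype=float),            # neutral gray patch
    np.clip((wl - 550) / 170.0, 0, 1),              # reddish ramp
    np.exp(-((wl - 450) ** 2) / (2 * 30.0 ** 2)),   # bluish Gaussian
])
radiances = light_target(patches, np.ones_like(wl, dtype=float))
```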
Here comes the fly in the ointment. We have to simulate the capture of the target by the camera. For that we need to know the transmission spectra of the following:
- The lens
- The sensor stack
- Each of the color filter array channels
We also need the sensor spectral response.
If we have all those things, simulating the camera’s response to the target becomes dead-nuts easy (an engineering expression of uncertain origin indicating less than a week’s intense effort with a high probability of success).
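Here is what that step might look like, continuing the sketch above. None of the spectra named below come from a real camera; in practice you would substitute measured or manufacturer-supplied curves sampled on the same wavelength grid.

```python
import numpy as np

def simulate_raw(radiances, lens_t, stack_t, cfa_t, sensor_qe):
    """Simulate linear raw values for each patch and CFA channel.

    radiances: (n_patches, n_wl) patch spectral radiances (e.g. from light_target)
    lens_t:    (n_wl,) lens transmission
    stack_t:   (n_wl,) sensor-stack (cover glass / IR-cut) transmission
    cfa_t:     (n_channels, n_wl) transmission of each CFA channel
    sensor_qe: (n_wl,) sensor spectral response
    returns:   (n_patches, n_channels) linear raw values on an arbitrary scale
    """
    # Combined response of everything in the optical path, one row per channel.
    combined = cfa_t * (lens_t * stack_t * sensor_qe)[np.newaxis, :]
    # Integrate (sum) radiance times response over wavelength per patch/channel.
    return radiances @ combined.T
```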
If you’re a camera manufacturer, you have all the information you need, and it should be pretty simple to compute compromise matrices. However, they will be computed with your choice of:
- Illuminant(s)
- Subjects
- Luminance, hue and chroma weighting
- Subject and illuminant importance weighting
- Nonlinearities in generating the error function (absolute value, square, etc.)
At least you won’t have to worry about what to do about outliers due to measurement errors, since there won’t be any of those.
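For the simplest of those choices (uniform subject weighting and squared error computed on linear XYZ values), finding the compromise matrix is an ordinary weighted least-squares fit. The sketch below assumes you already have the simulated raw triplets and the corresponding target XYZ values under the chosen illuminant; the perceptual weightings and nonlinear error functions listed above would call for an iterative optimizer instead.

```python
import numpy as np

def compromise_matrix(raw, xyz, weights=None):
    """Fit a 3x3 matrix M minimizing sum_i w_i * ||M @ raw_i - xyz_i||^2.

    raw:     (n_patches, 3) simulated camera raw triplets
    xyz:     (n_patches, 3) target CIE XYZ values under the chosen illuminant
    weights: (n_patches,) per-patch importance weights (uniform if None)
    returns: (3, 3) matrix M such that xyz_estimate = raw @ M.T
    """
    if weights is None:
        weights = np.ones(raw.shape[0])
    w = np.sqrt(weights)[:, np.newaxis]
    # Weighted linear least squares: scale both sides by sqrt(weight).
    m_t, *_ = np.linalg.lstsq(raw * w, xyz * w, rcond=None)
    return m_t.T
```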
What about the rest of us? Are we SOL?
Maybe not. If we had access to a calibrated monochromator, we could put a lens on the test camera, aim it at a constant (or known) reflectance target, illuminate the target with the light passed through the monochromator, take a picture every 5 nm from 380 to 720 nm, look at the raw values, and know the combined spectral response of the lens, sensor stack, CFA, and sensor, even though we wouldn’t know any of these spectra in isolation.
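Reducing that sweep to a combined response curve would be straightforward. Here is a sketch, assuming one black-subtracted, exposure-normalized raw triplet per wavelength and a known relative power for the monochromator output at each wavelength; all of the names here are hypothetical.

```python
import numpy as np

def combined_response(raw_means, mono_power, target_reflectance=None):
    """Estimate the combined lens + sensor stack + CFA + sensor response.

    raw_means:          (n_wl, 3) mean raw R/G/B from the target at each wavelength,
                        black-level subtracted and exposure-normalized
    mono_power:         (n_wl,) relative power of the monochromator output
    target_reflectance: (n_wl,) target reflectance (assumed constant if None)
    returns:            (n_wl, 3) relative combined spectral response, peak-normalized
    """
    if target_reflectance is None:
        target_reflectance = np.ones_like(mono_power)
    stimulus = (mono_power * target_reflectance)[:, np.newaxis]
    response = raw_means / stimulus
    return response / response.max()
```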
I don’t have the equipment to make the required measurements. Does anyone reading this?
CarVac says
Sounds like nobody does…
If I were still in school, my physics major friend would be able to do this.
Jack Hogan says
Hi Jim, one of your current posts pointed to this excellent series, which I had apparently missed at the time, probably because until recently color discussions made my eyes glaze over :-)
“I don’t have the equipment to make the required measurements. Does anyone reading this?”
I came across the Xrite Color Munki Photo (now marketed under the i1Studio moniker after a recent facelift – apparently the hardware is unchanged) about a year ago and I am tickled pink with it. I use it to derive compromise matrices with good success, as described on ‘Strolls’. When controlled by the open-source ArgyllCMS on a laptop or smartphone, it is a brilliant little spectrophotometer able to measure emission/transmittance/reflectance and projected spectra with decent accuracy and precision. That covers the illuminant and reflectance spectra of any photographic target. Argyll even has a high-rez mode for it with data produced every 3.33 nm from 400 to 730 nm, which I use.
Its main limitation compared to its bigger and more expensive brothers appears to be the presence of a UV-cut filter that limits the lower end to 400 nm, something that does not bother me ’cause my camera apparently has one of those too 🙂
Jack