Warning: this is going to be a geeky, inside-baseball post. Unless you are interested in what goes on behind the curtain when models are fitted to data, I suggest you pass this one by.
In the previous post, I talked about using optimum-seeking methods to adjust the three parameters of the modeled camera – full well capacity, pre-amp read noise, and post-amp read noise – so that simulated performance of the model camera came as close as possible to matching the measured performance of the real camera.
I did this by combining four things:
- The data set of measured means and standard deviations.
- A camera simulator.
- A way to compare the modeled and the measured results and derive a single real, nonnegative number that gets smaller as the differences between the modeled and the measured results decrease, reaching zero if the two sets of results are identical. Let’s call this number the error.
- A computer program, called an optimum-seeking program, which manipulates the parameters of the camera simulator in such a way as to minimize the error.
I described the essential characteristics of the simulated camera in this post, and described the data set in this one. Now I’ll tell you about the other two.
The optimum-seeking algorithm that I’m using is one that I’ve used with varying, but mostly good, success since 1970. In those days, I just called it the downhill simplex algorithm, but these days, allocating credit where credit is due, it’s usually called the Nelder–Mead method. It has several advantages, such as the ability to operate, albeit with some difficulty, on error functions whose derivatives are discontinuous, and not needing the solution space to be scaled.
Like all optimum-seeking programs of this class, it works best when there is only one local minimum. In many real-world problems, including this one, that is not the case. These are called polymodal problems. With these problems, the optimum-seeking program tends to get hung up on a local minimum, never finding the different local minimum that happens to be the global minimum. In the cameras that I’ve tested so far, it appears that simply picking a reasonable starting point is sufficient to allow the algorithm to converge to the global minimum.
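For the curious, here’s a minimal sketch of what driving a Nelder–Mead implementation looks like, using SciPy’s optimizer as a stand-in for my own code. The quadratic error function here is just a placeholder for the real one, which I describe next; the parameter names and starting values are illustrative, not the ones I actually use.

```python
import numpy as np
from scipy.optimize import minimize

def placeholder_error(params):
    # Stand-in for the real error function described below.
    # params = [full well capacity, pre-amp read noise, post-amp read noise]
    fwc, preamp_rn, postamp_rn = params
    return (fwc - 60000.0) ** 2 + (preamp_rn - 1.5) ** 2 + (postamp_rn - 3.0) ** 2

# A reasonable starting point matters, since the real problem is polymodal.
x0 = np.array([50000.0, 1.0, 2.0])

result = minimize(placeholder_error, x0, method="Nelder-Mead")
print(result.x, result.fun)
```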
The error function that I’m using is the sum of the squared errors between the measured and modeled standard deviations at each data point. Specifically, for every mean value in the measured data set, we compute the modeled standard deviation at the ISO associated with that mean, subtract the modeled standard deviation from the measured one, square the difference, and add it to a running sum.
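In code, that objective might look something like the sketch below. I’m assuming the data set is a list of (mean, ISO, measured standard deviation) triples, and model_stddev is a hypothetical name for the simulator’s predicted standard deviation; the real simulator’s interface is described in the earlier post.

```python
def sse_error(params, data, model_stddev):
    # data: iterable of (mean, iso, measured_sd) triples
    # model_stddev: the camera simulator's predicted standard deviation
    #               for a given mean and ISO under the trial parameters
    total = 0.0
    for mean, iso, measured_sd in data:
        modeled_sd = model_stddev(mean, iso, params)
        residual = measured_sd - modeled_sd
        total += residual * residual
    return total
```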
There are often hard constraints in design problems. These introduce places where the multidimensional derivative of the function to be minimized is discontinuous. While the Nelder–Mead method deals fairly well with these discontinuities, I’ve chosen to avoid one whole set of them in the following manner (now things get really geeky).
One would think that you shouldn’t allow either pre-amp read noise or post-amp read noise to have values below zero. So did I, at first. But because of the way that the two combine to yield total read noise, negative values for one or both work just fine. Here’s the basic formula for combining the two kinds of read noise.
RN = sqrt((preampRN * gain) ^ 2 + postampRN ^ 2)
Since the pre-amp and the post-amp terms both get squared, it doesn’t matter if they go negative. At the end of the calculation, if negative values come out as optimum ones, I simply change their sign.
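Here’s a sketch of that combination and the sign cleanup at the end; the function and variable names are mine for illustration, not the simulator’s.

```python
import math

def total_read_noise(preamp_rn, postamp_rn, gain):
    # Both terms get squared, so negative inputs give the same result.
    return math.sqrt((preamp_rn * gain) ** 2 + postamp_rn ** 2)

# The sign doesn't affect the combined read noise:
assert total_read_noise(1.5, 3.0, 4.0) == total_read_noise(-1.5, -3.0, 4.0)

# If the optimizer returns negative optima, just flip the signs at the end.
best_preamp, best_postamp = -1.5, 3.0          # example optimizer output
best_preamp, best_postamp = abs(best_preamp), abs(best_postamp)
```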