Comparing two a7II models

Yesterday, I reported on modeling the a7II at each ISO. Earlier, I modeled the camera over the same ISO range. The difference is that the first method yields a series of read noise numbers, one for each raw channel at each ISO, while the second method produces two read noise numbers for each channel, which can then be combined to give the read noise at each ISO.

I thought it would be instructive to see how the results from the two approaches compared.

Here’s a graph that converts everything to 12-bit raw values and averages all four channels. If you think the a7II is a 14-bit camera, multiply the vertical axis numbers by 4:


You can see that the curves are in substantial agreement except at the lowest ISOs, where the per-ISO method produces lower read noise than the all-ISO method.

This becomes even more obvious if we look at the percentage difference between the two methods’ read noise:


My conclusion is that there’s some trick that Sony is playing to get better read noise performance at low ISOs than the standard model would allow.

What that trick is remains a mystery to me. However, I’m not going to lose any sleep over the matter, since none of the differences are enough to affect normal photography.

Modeling the a7II one ISO at a time

In the post before last, I modeled the Sony a7II over a range of in-camera ISO settings, and reported on the results. Today, I’ll show you what happens when you model the camera one ISO at a time. This is a much simpler task, aside from the fact that you have to do it over and over. It also tends to produce closer matches between the modeled and measured values.

But what it’s really good at is finding places where the camera does something not predicted in the standard model when you twist that ISO knob.

Let’s take a look at some results, first for read noise:


The read noise is reported in electrons before the amplifier, assuming that all the read noise is there (the post-amp read noise is zero).
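To make “input-referred” concrete, here’s a Python sketch (not my Matlab code); the gain figure is made up for illustration, not a measured a7II value:

```python
def input_referred_rn_e(measured_rn_dn, gain_dn_per_e):
    """Refer read noise measured at the output (in DN) back to the sensor,
    assuming all of it originates before the amplifier."""
    return measured_rn_dn / gain_dn_per_e

# Hypothetical numbers: 1.5 DN of read noise at a gain of 0.3 DN per electron
print(input_referred_rn_e(1.5, 0.3))
```

At higher ISOs the gain is larger, so the same output noise refers back to fewer electrons; that’s why the curve falls as you turn the knob.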

One of the things this curve tells us is that there is really very little reason to turn the ISO knob on the a7II much past 1000. The read noise doesn’t drop appreciably after that, so you’re better off pushing in post. I’ll be posting the results of another test that deals with this subject later.

Now let’s see how the modeled full-well capacity (FWC) changes with ISO setting:


The short answer is that it doesn’t. However, there seems to be a very slight systematic upward slope until about ISO 1600. I don’t know what that’s about, and the effect is too small for me to care.

There’s another way to look at read noise, and that’s as the noise floor that limits dynamic range. This is called “Engineering Dynamic Range.” I happen to look at that name as a mild insult to engineers, but the term is ubiquitous, so I’ll use it. I prefer something I call “Photographic Dynamic Range”, which puts the bottom usable signal at an SNR of about 10.
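For concreteness, here’s how the two figures differ, sketched in Python with made-up sensor numbers. The SNR-10 floor applies my Photographic Dynamic Range criterion to a simple photon-plus-read-noise model:

```python
import math

def engineering_dr_stops(fwc_e, read_noise_e):
    """EDR: full well over the read-noise floor, in stops."""
    return math.log2(fwc_e / read_noise_e)

def photographic_dr_stops(fwc_e, read_noise_e, min_snr=10.0):
    """PDR-style figure: the floor is the signal where SNR reaches min_snr,
    counting photon noise plus read noise.
    Solves S / sqrt(S + RN^2) = min_snr for S (quadratic in S)."""
    a, b, c = 1.0, -min_snr ** 2, -min_snr ** 2 * read_noise_e ** 2
    s_min = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
    return math.log2(fwc_e / s_min)

# Hypothetical sensor: 50,000 e- full well, 3 e- read noise
print(round(engineering_dr_stops(50_000, 3), 2))
print(round(photographic_dr_stops(50_000, 3), 2))
```

The PDR-style number comes out several stops smaller than the EDR for the same sensor, which is the point of the distinction.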

But enough of that. Here’s the engineering dynamic range of the a7II as a function of ISO setting:


There are no jumps or lumps in the curve that would indicate some unusual chip design or raw processing wizardry.  There is a slight rounding of the curve at low ISOs. This is not unusual.

How well does the modeled data fit the measured data? Pretty darned well:




Comparing the a7II photon-transfer model to other cameras

Yesterday, I presented the result of fitting the a7II data to the standard photon-transfer model. Today I’ll compare those results to two other cameras, the Nikon D4 and D810.

Here are the numbers:


In the table above, we separate the read noise into two components, as described earlier. The first is the read noise on the sensor side of the amplifier whose gain is controlled by the ISO knob; that’s the pre-amp read noise, and its units are electrons.  The second is the read noise on the ADC side of the amplifier. I call that the post-amp read noise, and its units are ADC LSBs.  Let me explain that last unit a bit. Yes, the post-amp read noise is an analog quantity, and we could measure it in volts — actually microvolts — but that wouldn’t mean much to us as photographers. We care about how it makes the digitized result uncertain, and thus it is natural to measure it in the units that we see when we look at the raw files.

Correcting for the fact that the two Nikons are 14-bit cameras, and the Sony is, for the purposes of this discussion, a 12-bit one:


All three cameras have full frame sensors, but the pixel pitch and therefore the resolution differ. To make the comparison apples and apples, I multiplied the FWC of the Sony by 24/16 = 1.5, and the FWC of the D810 by 36/16 = 2.25. I multiplied the two RN values of the Sony by sqrt(16/24), and the two RN values of the D810 by sqrt(16/36), to simulate the noise reduction you’d see with perfect down-sampling to the same pixel count.
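Those scalings can be sketched as follows. Only the 16/24/36 MP ratios come from the text; the FWC and RN inputs below are placeholders:

```python
import math

def normalize_to_reference(fwc_e, rn_e, mp, ref_mp=16):
    """Scale FWC and read noise to a common pixel count (the D4's 16 MP),
    simulating ideal down-sampling: charge capacity adds across the
    combined pixels, while uncorrelated noise averages down."""
    return fwc_e * (mp / ref_mp), rn_e * math.sqrt(ref_mp / mp)

# Placeholder values for a 24 MP camera, normalized to 16 MP:
fwc, rn = normalize_to_reference(50_000, 3.0, 24)
```

The same call with `mp=36` reproduces the D810 factors of 2.25 and sqrt(16/36).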


The D810 stands out as a pretty remarkable camera. I didn’t consider the fact that the D810 gives away two-thirds of a stop of light-gathering power, since its base ISO is 64 and the other two cameras have a base ISO of 100. I don’t know how to factor that in. I’d love to get my hands on a D4s for a set of test images, but not enough to shell out the dough.

Modeling the a7II FWC and read noise for many ISOs

Now that I’ve taught DCRAW and my Matlab analysis program to be on the same page with the a7II files, I set about to find the read noise (RN) and the full well capacity (FWC) of the camera across a range of ISOs. I told the modeling program to consider all ISOs from 100 to 6400 in one-third stop steps as a group, and to throw away all samples where the green channel signal-to-noise ratio (SNR) was less than 2. I did the calcs separately for each raw channel. Here’s the answer:


There is much greater consistency across the channels than with the two Nikons I’ve tested so far, the D4 and D810. That makes me even more suspicious of the Nikon digital white balance scaling as the culprit in the case of those cameras.

In the table above, we separate the read noise into two components, as described earlier. The first is the read noise on the sensor side of the amplifier whose gain is controlled by the ISO knob; that’s the pre-amp read noise, and its units are electrons.  The second is the read noise on the ADC side of the amplifier. I call that the post-amp read noise, and its units are ADC LSBs.  Let me explain that last unit a bit. Yes, the post-amp read noise is an analog quantity, and we could measure it in volts — actually microvolts — but that wouldn’t mean much to us as photographers. We care about how it makes the digitized result uncertain, and thus it is natural to measure it in the units that we see when we look at the raw files.

You will note that the post-amp RN for the a7II is about one-fourth as high as for the Nikon D4. That’s because the D4 is a 14-bit camera, and one LSB is a quarter of the Sony’s LSB, measured as a ratio to full scale. Measured in percent of full scale, the a7II and D4 post-amp RN are about the same.

How well do the above model parameters fit the measured data? Pretty well. Here’s the measured (the blue dots) and modeled (the orange lines) standard deviation (sigma) versus the mean value (mu). Both are plotted as stops below full scale.
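The modeled sigma comes from the standard photon-transfer model. Here’s a Python sketch with placeholder parameters, not the fitted a7II values:

```python
import math

def model_sigma_dn(mu_dn, gain_dn_per_e, pre_rn_e, post_rn_dn):
    """Standard photon-transfer model: photon (shot) noise on the signal
    plus the two read-noise terms, everything expressed in DN."""
    signal_e = mu_dn / gain_dn_per_e                # mean signal in electrons
    photon_var_dn = signal_e * gain_dn_per_e ** 2   # Poisson variance, in DN^2
    read_var_dn = (pre_rn_e * gain_dn_per_e) ** 2 + post_rn_dn ** 2
    return math.sqrt(photon_var_dn + read_var_dn)

# Placeholder parameters, not fitted values:
full_scale = 4095 - 128          # 12-bit range above the DCRAW black point
mu = full_scale / 2 ** 4         # a mean four stops below clipping
sigma = model_sigma_dn(mu, 0.5, 2.0, 1.0)
```

Converting mu and sigma to stops below full scale is then just a matter of taking log2 of their ratios to `full_scale`.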


If you would prefer to see the same data in terms of SNR, I will oblige:


Although the model was fitted to all the ISOs in the range, I’ve only plotted the whole-stop ones to keep the graph understandable.

All but the lowest ISO modeled data in the darkest tones fit the measured data quite well.

If we look at the red channel sigma vs mu, here’s what we see:


And the red channel SNR vs mu:


Again, all but the lowest ISO modeled data in the darker tones fit the measured data well. The fact that the classical model fits the actual data well means that it is unlikely that the camera will need any special treatment to get the best out of it.



Sony a7II PRNU

[Note: this post has been completely rewritten as of 12/19/14. The previous conclusions were in error because of discrepancies between the way that DCRAW unpacks the .ARW files from this, and presumably other, a7-series cameras, and the way that the camera reports itself to RawDigger and other programs. To DCRAW, the a7II is a 12-bit camera with a black point of 128, not a 14-bit camera with a black point of 512. I’ll explain more in a future post.]

I’ve got a Sony alpha 7II to test. Rather than start off with the OOBE and my usual rant about the terrible menu system, I’m going to put the camera through some of the analyses that Jack Hogan and I have been developing.

Today, it’s photo response nonuniformity (PRNU). There are some surprises.

Here’s how the standard deviation of a 256-image flat-field exposure near clipping converges as the images are added in, with a high pass filter in place to mitigate lighting nonuniformity:


The overall PRNU is about 0.35%, and there’s no important difference between the channels.
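The shape of that convergence follows from the model: the fixed-pattern PRNU term survives averaging, while the temporal noise drops as 1/sqrt(N). A sketch, with a made-up per-frame temporal noise figure:

```python
import math

def expected_sigma_pct(n_frames, prnu_pct, temporal_pct):
    """Expected std. dev. of an n-frame average of flat fields, in percent
    of the mean: the fixed PRNU component stays put, while the temporal
    component averages down as 1/sqrt(N)."""
    return math.sqrt(prnu_pct ** 2 + temporal_pct ** 2 / n_frames)

# Hypothetical 1% temporal noise per frame, 0.35% PRNU: the curve
# flattens near 0.35% well before 256 frames.
for n in (1, 16, 256):
    print(n, round(expected_sigma_pct(n, 0.35, 1.0), 3))
```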

Here’s the frequency response of the first green channel of the unfiltered version of the averaged image:


The red channel:


And the blue channel:


A note about these spectra. Half the sampling frequency is on the right, and DC, or zero frequency, is on the left. The frequency scale is linear, so that one octave is spread across the right half of the chart, and the next lower octave runs from 1/4 to 1/2 of the way across the graph. The vertical axis is logarithmic. The blue horizontal-frequency curve is excited by vertically oriented features in the image, and the red vertical-frequency curve is excited by horizontal features in the image.
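For readers who want to reproduce this kind of plot, here’s a minimal Python/NumPy sketch of averaged row and column amplitude spectra. It’s not my Matlab code, just the general idea:

```python
import numpy as np

def directional_spectra(plane):
    """Average 1-D amplitude spectra of a raw plane. Row transforms give
    the horizontal-frequency spectrum (excited by vertical features);
    column transforms give the vertical-frequency one.
    Bin 0 is DC; the last bin is half the sampling frequency."""
    centered = plane - plane.mean()
    horiz = np.abs(np.fft.rfft(centered, axis=1)).mean(axis=0)
    vert = np.abs(np.fft.rfft(centered, axis=0)).mean(axis=1)
    return horiz, vert

# Synthetic stand-in for a raw plane:
rng = np.random.default_rng(0)
h, v = directional_spectra(rng.normal(size=(64, 64)))
```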

There are substantially more spiky places in the red and blue channels than in the green. Because the images were exposed under D55 illumination, the red and blue mean values are about a stop down from the green ones, and thus correction to full scale will multiply them by a larger number, but in the standard model for PRNU, that should all average out.

Here’s a histogram-equalized look at one of the green channels of the averaged, high-pass-filtered (99×99 kernel) image:


There’s quite a bit of dust for a brand new camera — I just took it out of the box, mounted a lens, and made the exposures — but it certainly won’t interfere with photography, or even be visible without the super-aggressive equalization.

Here’s the blue channel:


In spite of the differences in the spectra, the visual differences aren’t striking.



Cleaning the data set based on SNR

In this post I talked about how some low-mean-value points in the data set can be truncated on the left of the histogram by in-camera firmware, and discussed a criterion based on signal-to-noise ratio (SNR) for leaving points out of the data set so that they don’t confuse the modeling of the camera-under-test’s read noise. Today I’d like to go into more detail about how I arrived at SNR = 2 as the criterion.

First off, what does SNR = 2 mean in terms of the histogram? It’s hard to say if it’s clipped (actually, if properly motivated, I could run a simulation) but if it’s not and the probability density function is Gaussian, having an SNR of 2 means that about 2 1/2 % of the distribution would be lopped off if the black point of the raw image were zero.
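That 2 1/2 % figure is just the Gaussian tail below zero when the mean sits two sigmas up. In Python:

```python
import math

def clipped_fraction(snr):
    """Fraction of a Gaussian with mean = snr * sigma that falls below
    zero, i.e. what a zero black point would lop off the histogram."""
    return 0.5 * math.erfc(snr / math.sqrt(2))

print(round(100 * clipped_fraction(2), 2))  # about 2.28 percent
```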

One of the things that is in the camera analysis software package is a camera simulator. It is a good idea to write a camera simulator whenever you write software to analyze camera output files, because it is a great way to test the program. As a side benefit, you can vary simulated camera parameters to test the consequences of analysis decisions, which is what I’ll report on here.

I set the program to simulate a camera with a full-well capacity of 100,000 electrons, a pre-amp read noise of 2 electrons, and a post-amp read noise of zero. I set it to have a raw file black point from 1 to 512 in powers of two. Then I ran the analysis program on the output, setting the data cleaning portion of the program to throw out data with SNR below a given value, varied that value systematically, and plotted the resulting modeled read noise.
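Here’s a stripped-down Python sketch of the kind of simulation involved. It’s not the actual simulator, and every numeric value below is illustrative:

```python
import numpy as np

def simulate_patch(signal_e, rn_e, gain, black_point, n=100_000, rng=None):
    """One flat patch from a simulated camera: Poisson shot noise plus
    Gaussian read noise, scaled by the gain, quantized, offset by the
    black point, clipped at zero (the effect under study), then
    black-point-subtracted the way the analysis code would."""
    rng = rng or np.random.default_rng(0)
    e = rng.poisson(signal_e, n) + rng.normal(0, rn_e, n)
    dn = np.clip(np.round(e * gain) + black_point, 0, None)
    return dn - black_point

# A dark patch where left-side clipping matters:
patch = simulate_patch(signal_e=4.0, rn_e=2.0, gain=0.5, black_point=1)
snr = patch.mean() / patch.std()
```

Sweeping the SNR threshold then amounts to generating patches over a range of signal levels, discarding those whose measured SNR falls below the threshold, and fitting the survivors.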

For ISO 100:


The SNR used for the data cleaning is the horizontal axis, and the modeled RN in electrons is the vertical one. Since we know the right answer for the RN is 2 electrons, we want the threshold SNR that gives the modeled RN as close as possible to that number, which, in this case, is an SNR of 2.

At ISO 800, which gives us more noise at the analog to digital convertor (ADC):


Now any threshold SNR below about 3 is OK.

At ISO 3200:


SNR’s of 2 and 3 seem to do well.

At ISO 6400:


Again, SNR’s of 2 or 3 look good.

Based on that, and other similar tests, I picked a minimum SNR of 2 for inclusion in the modeled data set.

A test for “ISO-less-ness”

Now that we have the data set described in this post, we can mine it in unconventional ways (as opposed to the more-or-less standard look at the data presented yesterday). One thing that I’m usually interested in when I get a new camera is where I should stop turning up the ISO knob and just push in post.

I devised a method to take a look at that. You specify the highest ISO that you wish to consider, and what the mean signal level should be at that ISO as a ratio to full scale. For example, if you’re interested in an 18% mean, you’d specify it as 0.18. Then you tell the method what’s the lowest ISO you want to consider. The program picks a raw channel, finds the sample closest to the specified mean at the highest ISO and records the standard deviation and the SNR. Then it looks at the samples at the next ISO down from that in the data set, and finds the mean that corresponds to the same amount of light hitting the sensor, and records the standard deviation and the SNR. It keeps going until it reaches the lowest ISO of interest. It goes on to the next raw channel, and does the same thing, until it’s performed the calculations for all the raw channels.
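In outline, the method looks like this. This is a Python sketch with a made-up data structure standing in for the real data set; the real program also iterates over all the raw channels and honors the specified lowest ISO:

```python
def equivalent_exposure_series(samples, iso_hi, mean_hi):
    """Walk down the ISO ladder at constant light on the sensor.
    `samples[iso]` is a list of (mean, sigma) pairs, with means as a
    ratio to full scale. At each lower ISO the target mean scales with
    the ISO, since the same photon count gets less amplification;
    we take the sample nearest that target."""
    series = []
    for iso in sorted(samples, reverse=True):
        if iso > iso_hi:
            continue
        target = mean_hi * iso / iso_hi
        mean, sigma = min(samples[iso], key=lambda ms: abs(ms[0] - target))
        series.append((iso, mean, mean / sigma))   # (ISO, found mean, SNR)
    return series
```

Plotting the SNR column of the result against ISO gives curves like the ones below.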

The result is a graph like this:


Since the data is only exposed at 1/3 stop intervals, the found mean can be 1/6 of a stop away from the desired mean. If we assume that the noise is mostly photon noise, we can correct for that error:


It works pretty well at a mean of 18%, but, as you’ll see below, not so well for much darker tones.
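The correction itself is simple: if photon noise dominates, the variance is proportional to the mean, so a sample that landed up to 1/6 stop from the target mean can be rescaled:

```python
import math

def photon_corrected_sigma(sigma, mean_found, mean_target):
    """Rescale a measured sigma from the mean actually found in the data
    set to the desired target mean, assuming photon-noise-dominated
    samples (variance proportional to mean)."""
    return sigma * math.sqrt(mean_target / mean_found)
```

When read noise is a large fraction of the total, as in the 1% case below, the proportionality assumption fails and the correction misbehaves.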

If we pick the mean at ISO 5000 to be 1%, here’s what we see:


The “corrected” curve looks like this:


Not so good correction, huh? You’ll note that the SNR in all cases for a mean of 1% at ISO 5000 is below 3 stops, which is my personal limit for decent photographic quality.

What can we learn from those curves? It looks to me that turning the ISO knob on the D4 much past 400 doesn’t help much.

Since the D4 fits our model so well, we could get prettier curves by running the test on the modeled camera rather than the measured camera, but we can’t count on that.

Adrift in a sea of acronyms

A reader sent me a message this morning saying that he is confused by all the acronyms associated with the current set of posts, and asked me to post a glossary that he could keep open as he read the others.

Here goes:

ADC – analog to digital converter

CFA – color filter array

CSV – comma-separated values

dB – decibel; one-tenth of a bel.

DC – direct current; the zero frequency component in the frequency domain

DN – you won’t see that acronym here; I find it awkward and tautological. I use count or LSB instead, depending on the context

F – spatial frequency

Fs – spatial sampling frequency

FWC – full-well capacity

ISO – I don’t think this actually stands for anything anymore; possibly it used to, in some language-dependent way, stand for the International Organization for Standardization. Maybe someone can straighten me out. When used to identify film speed, it meant the same thing as ASA, which did indeed stand for the American Standards Association.

LSB – least significant bit

Mu – mean, or average value

PRNU – photo response nonuniformity

Post-amp RN – read noise that occurs after the camera gain stage

Pre-amp RN – read noise that occurs before the camera gain stage

RN – read noise

ROI – region of interest

Sigma – standard deviation; the square root of the variance

SNR – Signal to noise ratio

TIFF – tagged image file format

Camera modeling details

Warning: this is going to be a geeky, inside-baseball post. Unless you are interested in what goes on behind the curtain when models are fitted to data, I suggest you pass this one by.

In the previous post, I talked about using optimum-seeking methods to adjust the three parameters of the modeled camera – full well capacity, pre-amp read noise, and post-amp read noise – so that simulated performance of the model camera came as close as possible to matching the measured performance of the real camera.

I did this by combining four things:

  • The data set of measured means and standard deviations.
  • A camera simulator.
  • A way to compare the modeled and the measured results, and derive a single real, positive number which gets smaller as the differences between the modeled and the measured results decreases, reaching zero if the two sets of results are identical. Let’s call this number the error.
  • A computer program, called an optimum-seeking program, which manipulates the parameters of the camera simulator in such a way as to minimize the error.

I described the essential characteristics of the simulated camera in this post, and described the data set in this one. Now I’ll tell you about the other two.

The optimum seeking algorithm that I’m using is one that I’ve used with varying, but mostly good, success since 1970. In those days, I just called it the downhill simplex algorithm, but these days, allocating credit where credit is due, it’s usually called the Nelder–Mead method. It has several advantages, such as the ability to operate, albeit with some difficulty, with error functions whose derivatives are discontinuous, and not needing the solution space to be scaled.

Like all optimum seeking programs of this class, it works best when there is only one local minimum. In many real-world problems, including this one, that is not the case. These are called multimodal problems. With these problems, the optimum seeking program tends to get hung up on a local minimum, not finding another local minimum that happens to be the global minimum. In the cameras that I’ve tested so far, it appears that simply picking a reasonable starting point is sufficient to allow the algorithm to converge to the global minimum.

The error function that I’m using is the sum of the squared error between measured and modeled standard deviation at each data point. Specifically, for every mean value in the measured data set, we compute the modeled standard deviation at the ISO associated with the mean, we subtract the model standard deviation from the measured standard deviation, square that value, and add it to the running sum.
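In Python, the error function looks something like this (a sketch, not my Matlab implementation; the model-sigma callable is a stand-in):

```python
def model_error(params, data, model_sigma):
    """Sum of squared differences between measured and modeled sigma over
    the whole data set. `data` holds (iso, mu, measured_sigma) triples,
    and `model_sigma(params, iso, mu)` returns the modeled standard
    deviation for those conditions."""
    return sum((s - model_sigma(params, iso, mu)) ** 2 for iso, mu, s in data)

# Toy check with a one-parameter model that ignores ISO and mu:
flat = lambda p, iso, mu: p[0]
err = model_error([2.0], [(100, 10, 2.5), (100, 20, 1.5)], flat)
```

This scalar is what the Nelder–Mead routine drives downhill by adjusting the three camera parameters.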

There are often hard constraints in design problems. These introduce places where the multidimensional derivative of the function to be minimized is discontinuous. While the Nelder–Mead method deals fairly well with these discontinuities, I’ve chosen to avoid one whole set of them in the following manner (now things get really geeky).

One would think that you shouldn’t allow either pre-amp read noise or post-amp read noise to have values below zero. So did I, at first. But because of the way that the two combine to yield total read noise, negative values for one or both work just fine. Here’s the basic formula for combining the two kinds of read noise.

RN = sqrt((preampRN * gain) ^ 2 + postampRN ^ 2)

Since the pre-amp and the post-amp terms both get squared, it doesn’t matter if they go negative. At the end of the calculation, if negative values come out as optimum ones, I simply change their sign.
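A sketch of that combination and the final sign fix, with placeholder values:

```python
import math

def combined_read_noise(pre_rn_e, post_rn_lsb, gain_lsb_per_e):
    """The combination formula from the text; both terms are squared,
    so the optimizer is free to wander into negative values without
    changing the total."""
    return math.sqrt((pre_rn_e * gain_lsb_per_e) ** 2 + post_rn_lsb ** 2)

def canonicalize(pre_rn_e, post_rn_lsb):
    """After the optimizer finishes, flip any negative component back
    to positive before reporting."""
    return abs(pre_rn_e), abs(post_rn_lsb)

# A negative pre-amp value yields the same total read noise:
assert combined_read_noise(-2.0, 1.0, 0.5) == combined_read_noise(2.0, 1.0, 0.5)
```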

Modeling ISO-induced variations in read noise

In yesterday’s post, I showed results of the photon transfer analysis programs modeling the Nikon D4’s full well capacity and read noise for each ISO setting of interest. Today, I’d like to show you what happens when you attempt something more challenging: modeling the behavior of the ISO adjustment knob as well.

To do this, we separate the read noise into two components, as described earlier. The first is the read noise on the sensor side of the amplifier whose gain is controlled by the ISO knob. I call that the pre-amp read noise, and its units are electrons.  The second is the read noise on the ADC side of the amplifier. I call that the post-amp read noise, and its units are ADC LSBs.  Let me explain that last unit a bit. Yes, the post-amp read noise is an analog quantity, and we could measure it in volts — actually microvolts — but that wouldn’t mean much to us as photographers. We care about how it makes the digitized result uncertain, and thus it is natural to measure it in the units that we see when we look at the raw files.

Once we’ve performed this mental and programmatic separation, we then tell the program to minimize the sum of the squared error between the measured and modeled cameras by adjusting the full well capacity, the pre-amp read noise, and the post-amp read noise.

If we turn the program loose on the D4 data, we get this:


The raw plane, or raw channel, order for the D4 is RGGB, so you can see that the program says that the red and blue channels have lower FWC than the two green channels, just as in yesterday’s results. I still am unclear about the reason for this, but I suspect that it has something to do with white balance prescaling.

Let’s look at the first green channel and see how well the modeled data matches the measured data. First we’ll look at standard deviation:


Then at SNR:


The match is quite good, indicating that the D4 doesn’t have much in the way of ISO-dependent  tricks up its sleeve.

Now the standard deviations in the red channel:


and the SNRs in that channel:


Note that the red channel is missing the upper stop or so of data. That’s because I used a D55 light source, and the red and blue channels are substantially down from the green ones with that input spectrum.

Photography meets digital computer technology. Photography wins — most of the time.
