There is an odd discrepancy between the SNR data for the D800E and the a7R. Here’s the low-midtone SNR vs ISO setting for the a7R, from a few posts back:
Here’s the same graph for the D800E:
Notice that the D800 SNRs are a little over half a stop worse than the a7R ones? What’s up with that? The chips are supposed to be very similar (some even say they’re the same), so they should yield close-to-identical results.
This discrepancy was driven home for me when I tried to compute the unity gain ISO for the a7R, using this methodology, and got this graph:
A unity gain ISO of 1000 to 1200 for this sensor is not reasonable; the D800 has a unity gain ISO of about 320. It would mean a full-well capacity of more than 180,000 electrons, half again as much as the D4, which has much larger sensels. That can’t be right.
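To make the arithmetic concrete, here’s a rough sketch in Python (not my actual workflow; the 14-bit full scale, the ISO 100 base, and the black level are assumptions) of how the photon-transfer gain, the unity gain ISO, and the implied full-well capacity hang together:

```python
import numpy as np

FULL_SCALE_DN = 2**14 - 1   # assuming 14-bit raw data
BASE_ISO = 100              # assuming a base ISO of 100

def gain_e_per_dn(patch_dn, black_level=512):
    """Photon-transfer gain estimate (electrons per DN) from a uniform patch.

    For a shot-noise-limited patch, var(DN) = mean(DN) / gain,
    so gain is roughly mean / variance.  Read noise is ignored,
    which is fair a few stops down from clipping.
    """
    signal = np.asarray(patch_dn, dtype=float) - black_level
    return signal.mean() / signal.var()

def implied_full_well(unity_gain_iso, base_iso=BASE_ISO, full_scale=FULL_SCALE_DN):
    """Full-well capacity implied by a given unity gain ISO.

    Gain in e-/DN falls in proportion to ISO, so the gain at base ISO
    is about unity_gain_iso / base_iso, and full well is that gain
    times the full-scale raw value.
    """
    return (unity_gain_iso / base_iso) * full_scale

# A unity gain ISO of 1000 to 1200 puts the implied full well in the
# 160,000 to 200,000 electron range, the implausible figure above.
for ug_iso in (1000, 1200):
    print(ug_iso, round(implied_full_well(ug_iso)))
```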
I think that there’s some signal processing taking place in the a7R before the raw files are written that narrows the standard deviation of the test images and makes the camera look like it has a better SNR than it really does.
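Here’s a toy simulation that shows why I’m suspicious. The 2×2 box filter is pure conjecture on my part, just a stand-in for whatever processing might (or might not) be happening:

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)

# Synthetic flat patch: 10,000 electrons of signal with Poisson shot
# noise, plus 5 e- RMS of Gaussian read noise.
mean_e, read_noise_e = 10_000, 5.0
patch = rng.poisson(mean_e, (200, 200)) + rng.normal(0.0, read_noise_e, (200, 200))

def snr_db(x):
    return 20 * np.log10(x.mean() / x.std())

# A mild 2x2 box filter, standing in for hypothetical pre-raw processing.
filtered = uniform_filter(patch, size=2)

print(f"unfiltered SNR: {snr_db(patch):.2f} dB")
print(f"filtered SNR:   {snr_db(filtered):.2f} dB")  # about a stop (6 dB) better
```

Even that gentle a filter narrows the measured standard deviation enough to flatter the SNR numbers by roughly a stop, so a still milder operation could account for the half-stop-plus gap in the graphs above.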
If we look at the histograms of the green channels of a 200×200 pixel section of a raw file exposed about 3 stops down from full scale, we see curves that look Gaussian, although they are missing three-quarters of the buckets because of Sony’s tone compression algorithm.
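If you want to poke at the files yourself, here’s roughly how the patch-and-histogram step can be done with the rawpy library (the file name and patch location are placeholders, and this isn’t the toolchain I actually used):

```python
import numpy as np
import rawpy
import matplotlib.pyplot as plt

# Placeholder file name and patch location.
raw = rawpy.imread("a7r_flat_minus3stops.ARW")
cfa = raw.raw_image_visible.astype(np.int32)
cfa_colors = raw.raw_colors_visible      # per-photosite CFA color indices

r0, c0 = 2000, 3000                      # top-left corner of the 200x200 patch
patch = cfa[r0:r0 + 200, c0:c0 + 200]
patch_colors = cfa_colors[r0:r0 + 200, c0:c0 + 200]

# Indices 1 and 3 are the two green channels in an RGGB pattern
# (check raw.color_desc to be sure for a given file).
green = patch[(patch_colors == 1) | (patch_colors == 3)]

# One histogram bucket per raw DN, so the buckets emptied by the
# tone compression show up as gaps.
plt.hist(green, bins=np.arange(green.min(), green.max() + 2) - 0.5)
plt.xlabel("raw value (DN)")
plt.ylabel("count")
plt.title("green channels, 200 x 200 patch")
plt.show()
```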
The red channel of the same exposure has the right shape, but is missing only half the buckets because the level is lower:
At this point, I have no explanation for what’s going on here.