I saw this pronouncement on DPR today:
The smallest signal we can distinguish from the noise floor is equal to the noise. Signal to noise ratio (SNR) = 1.
You see people saying that, or things very much like that, all the time. In some circles, it’s conventional wisdom.
There’s only one problem: it isn’t even close to being true.
I’ve been meaning to post a demonstration for a while, and I spent the better part of half an hour coding up a program to make the images that I’m going to show you.
If you want to see the star better in the bottom image, try moving the image up and down with the scroll wheel on your mouse.
The noise is Gaussian, clipped at plus and minus three sigma. The mixing is done in linear space, and the images are encoded for the web at gamma 2.2.
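The setup above can be sketched in a few lines of NumPy. This is not the author's actual program; it is a minimal reconstruction under the stated parameters, with the star target, its amplitude, and the image size as assumptions.

```python
import numpy as np

def siemens_star(size=256, spokes=8, amplitude=0.1):
    """Hypothetical star target: sinusoidal spokes around the center.
    Amplitude and spoke count are assumptions, not the post's values."""
    y, x = np.mgrid[0:size, 0:size] - size / 2.0
    theta = np.arctan2(y, x)
    return amplitude * 0.5 * (1.0 + np.sin(spokes * theta))

def add_clipped_noise(signal, sigma, rng):
    """Gaussian noise clipped at +/- 3 sigma, mixed in linear space."""
    noise = rng.normal(0.0, sigma, signal.shape)
    noise = np.clip(noise, -3.0 * sigma, 3.0 * sigma)
    return np.clip(signal + noise, 0.0, 1.0)

def encode_for_web(linear, gamma=2.2):
    """Encode linear values to 8-bit at gamma 2.2 for display."""
    return (linear ** (1.0 / gamma) * 255.0).astype(np.uint8)

rng = np.random.default_rng(0)
star = siemens_star()
img = encode_for_web(add_clipped_noise(star, sigma=0.2, rng=rng))
```

The key point is the ordering: noise is added to the signal in linear space first, and the gamma encoding happens only at the end, for display.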
You’ll notice that the parts of the star farther from the center are easier to distinguish when the noise gets high. That’s because there is more area for your eye to average over.
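The averaging effect is easy to quantify: pooling independent noise over a k-by-k block of pixels cuts its standard deviation by roughly a factor of k. A quick check, with block sizes chosen as an illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, (256, 256))  # unit-sigma Gaussian noise

# Average over k x k blocks; the residual noise sigma should fall
# roughly as 1/k, since each block mean pools k*k independent samples.
for k in (1, 2, 4, 8):
    blocks = noise.reshape(256 // k, k, 256 // k, k).mean(axis=(1, 3))
    print(f"block size {k}x{k}: sigma ~ {blocks.std():.3f}")
```

This is the same mechanism that makes the wide outer portions of the spokes survive noise that swamps the narrow center: the eye has more pixels to pool per feature.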
If we look at images with more spokes to the star, this is easier to see:
A reader asked what happens if we average 24 frames with the above parameters. Here is the result:
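Frame averaging works the same way as spatial averaging: with N independent frames, the noise sigma drops by a factor of sqrt(N). A sketch, using an assumed constant patch rather than the star image:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 0.5      # per-frame noise sigma (assumed for illustration)
signal = 0.25    # hypothetical constant patch value, linear space

# Simulate 24 noisy exposures of the same patch and average them.
# The averaged frame's noise should be near sigma / sqrt(24) ~ 0.102.
frames = signal + rng.normal(0.0, sigma, (24, 256, 256))
single = frames[0]
averaged = frames.mean(axis=0)
print(f"single frame sigma:   {single.std():.3f}")
print(f"24-frame avg sigma:   {averaged.std():.3f}")
print(f"predicted avg sigma:  {sigma / np.sqrt(24):.3f}")
```

So a 24-frame average buys almost a factor-of-five reduction in noise, which is why a signal far below the single-frame noise floor can still emerge.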
If we take the above image and tighten up on the black and white points, we get this:
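Tightening the black and white points is a linear levels stretch: pick new black and white values inside the image's range, remap them to full scale, and clip everything outside. A minimal sketch, with the percentile-based choice of endpoints as an assumption:

```python
import numpy as np

def stretch_levels(img, black, white):
    """Remap [black, white] to [0, 1], clipping values outside.
    This is a plain linear levels adjustment, done on linear data."""
    out = (img.astype(float) - black) / (white - black)
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(3)
# Hypothetical low-contrast noisy patch, mid-gray plus noise.
img = np.clip(0.5 + rng.normal(0.0, 0.1, (128, 128)), 0.0, 1.0)

# Choosing the 1st and 99th percentiles as the new black and white
# points is an illustrative choice, not the post's exact method.
black, white = np.percentile(img, (1, 99))
stretched = stretch_levels(img, black, white)
```

Stretching amplifies the remaining noise along with the signal, which is why the averaged image is contrast-stretched last, after the noise has already been beaten down.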