
the last word

Photography meets digital computer technology. Photography wins -- most of the time.


Read noise and quantizing, again

September 28, 2015 JimK 9 Comments

About a year ago I wrote a post on how read and quantizing noise interact in a digital camera. I concluded that, when the standard deviation of Gaussian read noise exceeded one-half the least-significant bit (LSB), the read noise provided sufficient dither that further increases in ADC precision would offer no increase in average digitizing accuracy.
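The original simulation was a Matlab script; a minimal Python/NumPy sketch of the same idea — Gaussian read noise acting as dither for an ideal quantizer — might look like this (the test level and sample count are arbitrary choices for illustration):

```python
# Sketch: with Gaussian read noise of sigma >= 0.5 LSB, the *average* of
# many quantized samples tracks the true signal level instead of snapping
# to the nearest code. Not the original Matlab script; illustration only.
import numpy as np

rng = np.random.default_rng(0)

def mean_quantized(level_lsb, sigma_lsb, n=200_000):
    """Average of n quantized samples of a constant level (units: LSBs)."""
    samples = level_lsb + rng.normal(0.0, sigma_lsb, n)
    return np.round(samples).mean()   # ideal mid-tread quantizer

true_level = 2.3                       # deliberately between codes
no_dither = mean_quantized(true_level, 0.0)   # snaps to the nearest code, 2.0
dithered = mean_quantized(true_level, 0.5)    # tracks ~2.3 on average
print(no_dither, dithered)
```

With no noise every sample lands on the same code; at half an LSB of noise the residual bias of the averaged output is far below a hundredth of an LSB.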

In the ensuing time, some have proposed that greater amounts of read noise are necessary in order to avoid deleterious visual effects. One common statement is that 1.3 LSBs of noise are necessary. Some have offered reasons why my simple simulation gave over-optimistic results.

  • The human eye-brain system averages noise over only a few image values; put another way, the circle of confusion is smaller than the essentially infinite one that I assumed.
  • Demosaicing generates chroma noise that remains significant even with more than half an LSB of dithering.
  • Converting to a gamma of 2.2 or so emphasizes behavior near zero, exposing visual errors that would otherwise go unnoticed.

I set out to test all those assertions. I wrote a Matlab script to simulate an image in which the average level increases from top to bottom and the noise increases from left to right. Here is the result for a demosaiced image with an output gamma of 2.2, 2 bits of precision, and noise progressing linearly from 0 LSBs at the left to 1 LSB at the right.

[Image: noise gradient, 0 to 1 LSB]

You can see the effects of the dither in reducing posterization. 0.5 LSB is in the center of the image, and it looks like the posterization is just about completely gone by then.
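For readers who want to reproduce something like this test image, here is a rough Python/NumPy equivalent of the kind of script described above (dimensions, seed, and the omission of the demosaicing step are my simplifications, not the original code):

```python
# Sketch of the test image: average level rises top to bottom, read noise
# grows 0 -> 1 LSB left to right; the sum is quantized to 2 bits with
# clipping, then gamma-2.2 encoded for display. Grayscale only; the
# original post's images were demosaiced color.
import numpy as np

rng = np.random.default_rng(0)
H, W, BITS = 400, 600, 2
full_scale = 2**BITS - 1                              # 3 LSBs at 2 bits

level = np.linspace(0, full_scale, H)[:, None]        # rows: mean level
sigma = np.linspace(0.0, 1.0, W)[None, :]             # columns: noise sigma in LSBs
noisy = level + rng.normal(size=(H, W)) * sigma
quantized = np.clip(np.round(noisy), 0, full_scale)   # ADC with clipping
display = (quantized / full_scale) ** (1 / 2.2)       # gamma-2.2 encoding, 0..1
```

The leftmost column, with zero noise, contains only the four 2-bit codes — pure posterization — while the right side is dithered.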

What if we look at read noise levels from zero to 2 LSBs:

[Image: noise gradient, 0 to 2 LSBs]

We can see that the posterization is gone about a quarter of the way into the image, but that the average levels continue to shift as more noise is added. That’s because adding more noise increases the number of clipped pixels, since values below zero are represented as zero, and values above full scale are represented as full scale.

In order to minimize this effect, I changed the precision to 3 bits, lifted the lower level of the average gray to 3 LSBs, and dropped the upper level to 5 LSBs. With a maximum added noise of 1.5 LSBs, clipping then occurs about a tenth of the time at the maximum-noise (right-hand) side of the image.
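The "about a tenth" figure can be checked directly from the Gaussian tail probability. A quick Python sketch (standard library only):

```python
# Check of the clipping estimate: at 3 bits, full scale is 7 LSBs. With a
# bright tone at mean 5 LSBs and sigma 1.5 LSBs, what fraction of samples
# exceeds full scale? And at mean 3 LSBs, what fraction falls below zero?
import math

def gauss_tail(mean, sigma, threshold):
    """P(X > threshold) for X ~ N(mean, sigma^2)."""
    z = (threshold - mean) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

full_scale = 2**3 - 1                        # 7 LSBs
p_high = gauss_tail(5.0, 1.5, full_scale)    # bright edge: P(X > 7) ~ 9%
p_low = gauss_tail(-3.0, 1.5, 0.0)           # dark edge: P(X < 0) ~ 2%, by symmetry
print(p_high, p_low)
```

The high-side clip probability comes out near 9 percent and the low side near 2 percent, consistent with clipping roughly a tenth of the time at the noisiest edge.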

Here’s the result, with the read noise varying from zero to 1.5 LSBs:

[Image: 3-bit precision, gamma 2.2]

It’s a little hard to see what’s going on. Dropping the gamma to one helps:

[Image: 3-bit precision, gamma 1]

Now we can see that 0.5 LSB, about one-third of the way from left to right, seems adequate to reduce posterization. If you look hard at the lightest tones, you could convince yourself that it takes almost three-quarters of an LSB of noise to completely smooth them out.

As Jack Hogan pointed out when he saw these images, in a real camera there would be photon noise in addition to the read noise. That would reduce the amount of read noise you need for adequate dithering in the brighter tones.
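As a back-of-the-envelope illustration of Jack's point (Python, standard library; the conversion gain is a hypothetical value, not measured from any camera):

```python
# Sketch: photon (shot) noise adds in quadrature with read noise, so in
# brighter tones the total noise can exceed the ~0.5 LSB dither threshold
# even when the read noise alone does not. Gain is a made-up example.
import math

read_noise_lsb = 0.3        # by itself, below the 0.5 LSB dither threshold
gain_e_per_lsb = 4.0        # hypothetical conversion gain, electrons per LSB

def total_noise_lsb(signal_lsb):
    shot_e = math.sqrt(signal_lsb * gain_e_per_lsb)   # Poisson: sigma = sqrt(N electrons)
    shot_lsb = shot_e / gain_e_per_lsb
    return math.hypot(read_noise_lsb, shot_lsb)       # independent noises add in quadrature

totals = {s: total_noise_lsb(s) for s in (1, 4, 16)}
print(totals)   # total noise grows with signal, crossing 0.5 LSB quickly
```

Even at a signal of one LSB, the combined noise is already above half an LSB in this example, which is why the dithering requirement on the read noise alone relaxes in the brighter tones.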


Comments

  1. CarVac says

    September 28, 2015 at 12:23 pm

    The real tough part of posterization is when a gradient is colored and you get hue shifts as the color channels step across bins individually.

    Can you redo this test with a color gradient, like maybe 0.6 0.8 1.0 relative values to the channels?

    You may need the vertical axis to cross more quantization boundaries so you can easily see both the intended average color and the local hue, though.

    • CarVac says

      September 28, 2015 at 2:11 pm

      I did the above experiment myself with the colored gradient in Octave and came up with the same result as the monochromatic test in the article: when the standard deviation of the noise is equal to half the quantization step, that’s good enough to eliminate banding.

      • Jim says

        September 28, 2015 at 2:26 pm

        I did it, too, and got the same answer. The images are interesting, though. I’ll post them.

        Jim

  2. Toh says

    September 28, 2015 at 7:25 pm

    Hi Jim – This is the best demonstration of noise dither I’ve seen with modern camera relevance, thank you!

    I’m trying to get intuition on the tradeoffs here between spatial resolution and dynamic range and whether it depends on the situation at hand. Am I correct in believing that:

    – If one has a sufficiently high-resolution sensor with no optical limits, it could have just 1 stop of DR (i.e. on/off) and, with enough noise dither, one could reconstruct a regular-DR image from it with some digital low-pass filtering (though not very efficiently, as it’d take a thousand pixels to produce 10 stops). In this example, Gaussian noise with 1 stdev = 1/2 max value would reasonably provide coverage. Would the ‘ideal’ noise function and amount be one where a signal value of X translates into a probability X of being quantized as ‘1’?

    – However, there must be times where we value spatial resolution more than DR. For example, if we know that there’s a clear 0.25->0.75 border, noise dithering creates uncertainty around where the border is?

    • spider-mario says

      April 20, 2020 at 1:18 am

      Regarding your first point: from what I understand, in the audio world, DSD essentially works like that (it’s a 2.82 MHz signal at one bit per sample, as opposed to the 44.1 kHz @ 16 bits used by CDs).

      https://en.wikipedia.org/wiki/Direct_Stream_Digital

      • JimK says

        April 20, 2020 at 6:40 am

        These DSD-like schemes are a form of delta modulation, which has been around in one form or another since the 1940s. Their closest counterpart in cameras, to my knowledge, is photon-counting schemes.


Trackbacks

  1. Sub LSB Quantization | Strolls with my Dog says:
    December 13, 2015 at 9:55 am

    […] Interestingly, 0.5DN read noise also appears to be the threshold above which posterization of a smooth gradient  is no longer visible, as Jim Kasson shows in this excellent demonstration. […]

  2. Dither, precision, and image detail | The Last Word says:
    April 14, 2016 at 3:35 pm

    […] Read noise and quantizing, again […]

  3. Smooth Gradients and the Weber-Fechner Fraction | Strolls with my Dog says:
    April 8, 2017 at 6:19 am

    […] Jim Kasson has a couple of posts with images that nicely show the perceptual effect of dithering on otherwise posterized […]





Unless otherwise noted, all images copyright Jim Kasson.