

More slit scan experiments — ameliorating shadow noise

November 25, 2016 JimK

This post is part of a series about some experiments I’m doing combining space and time in slit scan photographs. The series starts here.

Because a raw workflow is too cumbersome with tens of thousands of images per setup, I’ve been using Adobe RGB JPEG images. There’s an issue with those if you push the shadows a lot: with a gamma of 2.2 and 8-bit precision, the precision in the dark areas is marginal.
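To put a number on the problem, look at how far apart adjacent 8-bit code values sit once they’re decoded back to linear light. A quick sketch in Python, treating the tone curve as a pure 2.2 gamma (illustrative only, not part of my workflow):

gamma = 2.2
lin = lambda code: (code / 255) ** gamma   # decode an 8-bit code value to linear light

print(lin(2) / lin(1))      # adjacent deep-shadow codes differ by about 4.6x in linear light
print(lin(201) / lin(200))  # near the top of the scale the step is only about 1.1%

Pushing the shadows up amplifies those big, coarse steps right along with the noise.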

Thanks to the prompting of a reader, I’ve come up with a way to improve the shadow noise.

The trick is to make the simulated slit wider, while advancing it at the same rate. It’ll probably be clearer with some numbers. Let’s say the slit is 2 pixels wide. The way I was doing things before, I was taking a 2-pixel-wide slice from the first image, skipping an image, taking a 2-pixel-wide slice from the next image two pixels over from the first one, and so on.

Now, I’m taking, say, a 22-pixel-wide slice from the first image and adding it to the output image, rather than replacing the pixels. I skip the next image like before. Then I take a 22-pixel-wide slice from the next image two pixels over from the first one, and add that to the output image. When I’m done, I’ll have contributions from 11 images in each column, and, in theory, will have reduced the shadow noise by the square root of 11, or about a factor of 3.3.
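Here’s a minimal numpy sketch of that accumulation with the numbers above. The function and parameter names are mine for illustration, not from the code I actually ran, and the source frames are assumed to have already been converted to linear floating point, as described next.

import numpy as np

ADVANCE = 2        # the slit advances 2 pixels per used image
SLICE_WIDTH = 22   # each used image contributes a 22-pixel-wide slice
SKIP = 2           # use every other source image

def accumulate(images):
    # images: a list of float64 arrays of shape (height, width, 3), already linear
    height, width, _ = images[0].shape
    out = np.zeros((height, width, 3), dtype=np.float64)
    column = 0
    for img in images[::SKIP]:                   # skip every other image
        if column + SLICE_WIDTH > width:
            break
        out[:, column:column + SLICE_WIDTH, :] += img[:, column:column + SLICE_WIDTH, :]
        column += ADVANCE                        # advance the slit by two pixels
    return out

With these numbers, each interior column of the output collects SLICE_WIDTH / ADVANCE = 11 contributions, which is where the square-root-of-11 figure comes from.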

If you’re going to try this at home, there are a few details you need to think about. First, you’ve got to work in a linear RGB color space, so that the addition adds light. I’m using the Adobe RGB primaries and white point, and a gamma of unity. Next, you need to work at a precision that is significantly greater than your source or (maybe) your destination precision. My source precision was 8 bits per color plane; these were in-camera JPEGs. The destination precision was 16 bits per color plane. I used 64-bit floating point for the intermediate calculations. You also need to make sure the intermediate images don’t clip. I set the nominal image maximum at one, but the floating point representation that I used doesn’t clip until many orders of magnitude above that.
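A sketch of the linearization step, again in numpy. Treating the Adobe RGB tone curve as a pure 2.2 power function is a simplification, and the names here are just for illustration:

import numpy as np

GAMMA = 2.2  # nominal Adobe RGB tone curve, treated here as a pure power function

def to_linear(jpeg_pixels):
    # jpeg_pixels: a uint8 array decoded from an in-camera Adobe RGB JPEG.
    # Returns float64 linear light with nominal full scale at 1.0. Sums of many
    # slices can exceed 1.0 without clipping, since float64 has enormous headroom.
    return (jpeg_pixels.astype(np.float64) / 255.0) ** GAMMA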

After the summed output image is done, you need to normalize it back to unity full scale before converting it to integer representation. You could divide by the number of input images that contribute to each column in the final image, which was 11 in the example above. What I do is simpler. I don’t keep track of how many images contribute to each column; I just find the pixel with the highest R, G, or B value, and I divide the whole image by that value. Then I reapply the gamma and convert the image to 16-bit integer representation.
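In code, the finishing step might look something like this, with the same illustrative assumptions as above:

import numpy as np

def normalize_and_encode(out, gamma=2.2):
    out = out / out.max()                              # divide by the largest R, G, or B value anywhere in the image
    out = out ** (1.0 / gamma)                         # reapply the encoding gamma
    return np.round(out * 65535.0).astype(np.uint16)   # quantize to 16-bit integers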

If you look at the output image partway through its construction, it looks like this:

[Image: slit-2-overlap-20]

One of the neat things about constructing the output image this way is that objects lit identically in all of the images that contribute to a given column lose no sharpness to the wide, fuzzy synthetic slit this simulates. You do lose sharpness in lighting transitions, but, so far, that has proven to be a good thing aesthetically.

By the way, you may see a bright vertical line in the above image. That’s a psychological effect sometimes referred to as Mach Banding.

 


