This post is part of a series about some experiments I’m doing combining space and time in slit scan photographs. The series starts here.
Because using a raw workflow is too cumbersome with tens of thousands of images per setup, I’ve been using Adobe RGB JPEG images. There’s an issue with those if you push the shadows a lot, since, with a gamma of 2.2 and 8-bit precision, the quantization in the dark areas is marginal.
Thanks to the prompting of a reader, I’ve come up with a way to improve the shadow noise.
The trick is to make the simulated slit wider, while advancing it at the same rate. It’ll probably be clearer with some numbers. Let’s say the slit is 2 pixels wide. The way I was doing things before, I was taking a 2-pixel-wide slice from the first image, skipping an image, taking a 2-pixel-wide slice from the next image two pixels over from the first one, and so on.
Now, I’m taking, say, a 22-pixel-wide slice from the first image and adding it to the output image, rather than replacing the pixels. I skip the next image like before. Then I take a 22-pixel-wide slice from the next image two pixels over from the first one, and add that to the output image. When I’m done, I’ll have contributions from 11 images in each column, and, in theory, will have reduced the shadow noise by the square root of 11, or about a factor of 3.3.
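If it helps, here’s a rough sketch of that accumulation in Python with NumPy. This isn’t my actual code; the file pattern, the PIL-based loading, and the constant names are just placeholders, and the gamma handling (more on that below) is inlined.

```python
# Rough sketch of the wide-slit accumulation -- an illustration,
# not production code. File locations and constants are placeholders.
import glob

import numpy as np
from PIL import Image

SLIT_WIDTH = 22   # width of the simulated slit, in pixels
STEP = 2          # how far the slit advances per used image
paths = sorted(glob.glob("frames/*.jpg"))[::2]   # use every other image

h, w = np.asarray(Image.open(paths[0])).shape[:2]
accum = np.zeros((h, w, 3), dtype=np.float64)    # 64-bit float intermediate

for i, path in enumerate(paths):
    x0 = i * STEP
    if x0 >= w:
        break
    x1 = min(x0 + SLIT_WIDTH, w)
    img = np.asarray(Image.open(path), dtype=np.float64) / 255.0
    lin = img ** 2.2     # decode the 2.2 gamma to linear light
    # Add the slice into the output instead of replacing pixels, so
    # each column accumulates light from up to 11 images.
    accum[:, x0:x1] += lin[:, x0:x1]
```

With a 22-pixel slit advancing 2 pixels per used image, each interior column picks up the 11 contributions mentioned above.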
If you’re going to try this at home, there are a few details you need to think about. First, you’ve got to work in a linear RGB color space, so that the addition adds light. I’m using the Adobe RGB primaries and white point, and a gamma of unity. Next, you need to work at a precision that is significantly greater than your source or (maybe) your destination precision. My source precision was 8 bits per color plane; these were in-camera JPEGs. The destination precision was 16 bits per color plane. I used 64-bit floating point for the intermediate calculations. You also need to make sure the intermediate images don’t clip. I set the nominal image maximum at one, but the floating point representation that I used doesn’t clip until many orders of magnitude above that.
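A tiny worked example, with made-up numbers, shows why the linear working space matters:

```python
# Two exposures of the same mid-gray patch, 8-bit encoded value 128.
encoded = 128 / 255.0              # about 0.502
linear = encoded ** 2.2            # about 0.22 in linear light

# Right: add in linear space, then re-encode. Doubling the light
# gives an encoded value of about 0.69.
doubled = (2.0 * linear) ** (1.0 / 2.2)
print(doubled)                     # ~0.688

# Wrong: adding the encoded values would give 2 * 0.502, i.e. past
# full scale, wildly overstating the brightness of the sum.
```

The 64-bit floats are also where the clipping headroom comes from; a double doesn’t overflow until around 1.8 × 10³⁰⁸, so sums well above the nominal maximum of one are harmless.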
After the summed output image is done, you need to normalize it back to unity full scale before converting it to integer representation. You could divide by the number of input images that contribute to each column in the final image, which was 11 in the example above. What I do is simpler. I don’t keep track of how many images contribute to each column; I just find the pixel with the highest R, G, or B value, and I divide the whole image by that value. Then I reapply the gamma and convert the image to 16-bit integer representation.
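Continuing the earlier sketch, the normalization might look something like this (again, an illustration rather than my actual code):

```python
import numpy as np

def normalize_and_encode(accum: np.ndarray) -> np.ndarray:
    """Scale a linear float accumulator to unity full scale, reapply
    the 2.2 gamma, and quantize to 16 bits per color plane."""
    peak = accum.max()                       # highest R, G, or B anywhere
    encoded = (accum / peak) ** (1.0 / 2.2)  # normalize, then re-encode
    return np.round(encoded * 65535.0).astype(np.uint16)
```

Feeding it the accum array from the first sketch yields the 16-bit output image.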
If you look at the output image partway through its construction, it looks like this:
One of the neat things about constructing the output image this way is that the sharpness of objects that are identically lit in all of the images contributing to a given column is not diminished by the wide, fuzzy synthetic slit. You do lose sharpness in lighting transitions, but, so far, that has proven to be a good thing aesthetically.
By the way, you may see a bright vertical line in the above image. That’s a perceptual effect sometimes referred to as Mach banding.