Noise reduction and downsampling with a gamma of 2.2

In yesterday’s post I reported on what downsampling does to photon noise in monochromatic images in a linear space. However, in Photoshop (Ps) at least, we hardly ever downsample in a linear space. sRGB and Adobe RGB have gammas of 2.2, and ProPhoto RGB has a gamma of 1.8.

Changing the gamma to 2.2 has almost no effect on the rms noise. That’s nice.

However, changing the gamma to 2.2 does affect the histogram of the noise. I set out to measure that.

In statistics, there are two standardized central moments immediately beyond the variance (whose square root is the standard deviation). They are called skewness and kurtosis. Skewness measures asymmetry in the histogram, and kurtosis measures its peakiness.
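For concreteness, here’s a minimal Matlab sketch of how those moments can be measured on a downsampled, gamma-encoded noise image. It is not the code behind the plots; the magnification and the clip-then-encode step are assumptions, and the moments are computed directly so no Statistics Toolbox is needed.

```matlab
% Minimal sketch of the moment measurement (not the code used for the plots;
% the magnification and gamma handling here are assumptions).
img = 0.5 + 0.1 * randn(4000, 4000);        % linear noise image: mean 0.5, sigma 0.1
img = max(min(img, 1), 0) .^ (1 / 2.2);     % encode with a 2.2 gamma
small = imresize(img, 1/3, 'bilinear');     % example magnification of 1/3
x = small(:) - mean(small(:));
sigma = sqrt(mean(x.^2));                   % rms noise (standard deviation)
skew = mean(x.^3) / sigma^3;                % 0 for a perfectly symmetric histogram
kurt = mean(x.^4) / sigma^4;                % 3 for a Gaussian; higher means peakier
```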

Here’s the skewness we get when downsampling images using a pre-scaling Gaussian AA filter:

skewGp2gamma2p2

It’s clear that all the resampling algorithms, including nearest neighbor, tilt the histogram. It looks like the effect begins to go away by the time the magnification is below 1/8. It’s also clear that a magnification of 1/2 is special.

When we look at kurtosis, we see this:

kurtGp2gamma2p2

Three is the kurtosis of a normal distribution. All the resampling algorithms have, to a greater or lesser extent, the ability to make the output histogram peakier than that of a Gaussian. A magnification of 1/2 is, again, special.

Do these things affect the way that noise is perceived? I frankly don’t know, at this point.

Noise reduction and downsampling

It is axiomatic in photography that photon noise decreases upon downsampling in direct proportion to the magnification ratio — the width (or height) of the output image over that of the input image.

Downsampling by a factor of two should cut the noise in half. That’s what would happen if you averaged four input image pixels for every pixel in the output.

This reasoning is one of the bases for photographic equivalence. It’s at the root of the way that DxOMark compares cameras of unequal pixel count. It’s accepted without much thought. I qualify my statement because there are some who don’t buy the notion.

It’s clear to me that binning pixels — averaging the values of adjacent captured pixels before or after conversion to digital form — will reduce photon noise as stated above, although the two operations have different effects on read noise. What’s not clear is how good the standard ways that photographers use to reduce image size are at reducing noise.
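As a quick check of the averaging argument, here’s a small Matlab snippet (a sketch of mine, not part of the test program described below) showing that binning 2×2 blocks of Gaussian noise cuts the standard deviation in half:

```matlab
% Sanity check of the averaging argument: binning 2x2 blocks of Gaussian
% noise should cut the standard deviation in half.
img = 0.5 + 0.1 * randn(4000, 4000);
binned = ( img(1:2:end, 1:2:end) + img(2:2:end, 1:2:end) + ...
           img(1:2:end, 2:2:end) + img(2:2:end, 2:2:end) ) / 4;
std(img(:))      % about 0.1
std(binned(:))   % about 0.05, half the input noise
```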

I set out to do some testing. I wrote a Matlab program, which I’ll be posting later, that created a 4000×4000 monochromatic floating point image and filled it with Gaussian noise with a mean of 0.5 and a standard deviation of 0.1. I then downsampled it using various common algorithms (bilinear interpolation, bicubic interpolation, Lanczos 2, and Lanczos 3) at 200 different output image sizes, ranging from the same size as the input image to less than 1/8 that size in each dimension.
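Until I post the actual program, here’s a minimal sketch of the kind of test I ran, including the rms measurement described next. The exact list of magnifications and imresize’s 'Antialiasing' setting are assumptions on my part.

```matlab
% Minimal sketch of the downsampling test (the actual program will be
% posted later; magnification list and 'Antialiasing' setting are assumptions).
img = 0.5 + 0.1 * randn(4000, 4000);            % Gaussian noise, mean 0.5, sigma 0.1
methods = {'bilinear', 'bicubic', 'lanczos2', 'lanczos3'};
mags = linspace(1, 1/8, 200);                   % 200 output sizes, down to 1/8 scale
rmsNoise = zeros(numel(methods), numel(mags));
for i = 1:numel(methods)
    for j = 1:numel(mags)
        small = imresize(img, mags(j), methods{i}, 'Antialiasing', false);
        rmsNoise(i, j) = std(small(:));         % rms noise after downsampling
    end
end
```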

Then I measured the standard deviation of the result. That’s the same as the rms value of the noise. If downsampling operates on noise the way that photographic equivalence says it does, then we should get curves that look like this:

ideal rms noise

But that’s not what happens. Instead, our curves look like this:

rms no aa

Whoa! Except for some “magic magnifications” — 1/2, 1/4, and 1/8 — the noise reduction is roughly the same for each algorithm no matter the output size. Bilinear interpolation is the best; it hits the ideal number at a magnification of 1/2, and the sharper downsampling algorithms are all worse. None of the algorithms do to the noise what we’d like them to do.

Bart van der Wolf is a smart guy who has looked extensively at resizing algorithms. Here are a few of his thoughts about downsampling. One of the things he recommends is subjecting the image to be downsampled to a Gaussian anti-aliasing (AA) filter whose sigma in pixels is 0.2 or 0.3 divided by the magnification.
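Here’s my reading of that recommendation as a Matlab sketch; imgaussfilt and imresize are from the Image Processing Toolbox, and the choice of magnification and interpolator below is just for illustration.

```matlab
% Sketch of the pre-scaling Gaussian AA filter (my reading of the recommendation).
img = 0.5 + 0.1 * randn(4000, 4000);            % same kind of noise image as above
mag = 1/4;                                      % example magnification
sigma = 0.3 / mag;                              % or 0.2 / mag
filtered = imgaussfilt(img, sigma);             % Gaussian AA filter, sigma in pixels
small = imresize(filtered, mag, 'bilinear', 'Antialiasing', false);
std(small(:))                                   % compare with the ideal 0.1 * mag
```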

Here’s what you get with an AA sigma of 0.2/magnification:

rmsp2gaussian

And here are the curves with sigma equal to 0.3 / magnification:

rmsp3gaussian

Well, that’s better. With 0.3 / magnification, by the time we get to 1/4 size, we’re pretty much on the ideal curve.

But what does the AA filter do to image detail? Are the downsampled noise images still Gaussian? What would happen if we downsampled in a gamma-corrected color space? What about the demosaicing process? What do these algorithms have to do with the ones built into Photoshop and Lightroom?

Stay tuned.

Showing work

Kim Weston is a talented photographer who specializes in nudes. He offers workshops both in the field and at Wildcat Hill, his history-rich house and studio, where his grandfather, Edward Weston, lived, worked, and made many of his famous images. Kim had a workshop over the weekend, and asked me to come by to show some of my work to his students after Saturday night dinner.

I jumped at the chance.

Not to sell pictures; I’ve shown work at Kim’s and other workshops and never sold a thing as a result. I think workshops can be a good source of print customers for the photographers who teach them. I’ve been such a print buyer often. However, most people buy prints one at a time, and if you’re going to buy a print from a teacher, you’re probably not going to buy one from anybody else at the same time.

Betterlight_0330

Buttermilk Sunset

For me, there are two strong reasons to show work to a knowledgeable audience.

First, there’s self-expression. For me, the satisfactions of making art are twofold: creating something that feels right, and showing it to someone who appreciates it. The audience has little to do with the creating part. In fact, if you think about the audience much during the art-making, you’ll compromise your art. But the job of being a photographic artist isn’t done until someone sees your work.

Sunset with Contrail

Sunset with Contrail

The Internet is a marvelous medium for getting your work out into the world, but there’s little satisfaction for me unless I know that the work is appreciated. Web traffic logs — at best — count eyeballs, but that’s not enough. I love it when I get emails from people who’ve seen my web galleries and appreciate what I’ve done, but that happens with disappointing frequency.

So the chance to be in the same room with the people seeing my work for the first time, to see their faces, observe their body language, and hear their questions and comments, is precious to me.

Second, there’s a reason to do more work. Some people treat an invitation to show work as nothing more involving than a trip to the boxes or flat files to pick out the prints to bring. If you have that mindset, preparing to show work is a curatorial exercise. For me it’s a chance to revisit a body of work and see if I can improve it.

The most exciting series for just about any photographer is the one they’re currently working on. I’m no different. I’ve been working on writing Matlab code to improve the Timescapes images, and last week I did my best to improve already-captured exposures in that series. I find that the prospect of an audience energizes me and pushes me to produce. That’s always a good thing.

Betterlight_00189-1

Wind Shear

There’s a downside to my approach to opportunities to show work: the audience just gets to see one silo of your range. Since I’ve already made my peace with zero sales, that’s not costing anything, but it does mean that the audience has no context in which to see the work you’re showing. We all know that no body of work arises in splendid isolation from everything you’ve ever done before, and it would be nice to give viewers the chance to make connections with series other than the one you’re showing.

Maybe next time I’ll do a little curation of some of my other work and show both.


Lightroom memory use vs file size

On a forum that I frequent, a poster made the assertion that Lightroom memory usage was not so much a matter of the number of pixels in the image as of the number of bytes in the file. That didn’t make any sense to me. I figured that Lr, like Photoshop, would decompress any file it was working on, so that the memory load would be a strong function of the number of pixels in the file. I also figured that Lr, unlike Ps, would convert all 8-bit-per-color-plane images into 16-bit or 32-bit ones, since it uses a linear version of ProPhoto RGB as its working space, and that would posterize with 8 bits per color plane.
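Some rough arithmetic shows why I expected pixels, not file bytes, to dominate. The 16000×16000 size below is a made-up example for illustration, not the dimensions of my test image (those are shown below).

```matlab
% Back-of-the-envelope estimate of what an uncompressed in-memory image costs,
% for a hypothetical 16000x16000 3-channel image.
w = 16000; h = 16000; channels = 3;             % assumed size, for illustration only
bytesPerSample = [1 2 4];                       % 8-bit, 16-bit, 32-bit float
megabytes = w * h * channels * bytesPerSample / 2^20
```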

I set up a test. I created a big image:

bigtestimagedim

I filled the image with a highly-compressible simultaneous contrast optical illusion:

SimContrastIllf8sm

I wrote it out as uncompressed TIFF, LZW-compressed TIFF, and JPEG, both in 16-bit and 8-bit versions:

filesizes

You can see that Ps writes 8-bit JPEGs even when it’s writing them from 16-bit images. We have a range of file sizes of about 300:1. Will Lr’s memory usage reflect anywhere near that range?

I imported all the images into Lr, picked the 16-bit uncompressed TIFF file in the gallery, but left Lr in light table mode. I shut down Lr, waited for the OS to take back all the memory Lr was using, and started it again. Here’s how much memory it used after it settled down (when it first comes up, it grabs a lot of memory, 6 or 8 GB on my machine, then rapidly gives it back to the OS):

Lropen

Well under half a GB.

Then I picked the same 16-bit uncompressed TIFF file, and opened up the Develop module. Then I shut down Lr, waited for its memory footprint to go to zero (you have to do this to get consistent results), and started the program again. Here’s what I saw:

16btiffdev

It looks like Lr stores a 16-bit file as a 16-bit image, at least before it does something to it.

I went back to the gallery, picked the LZW compressed 16-bit TIFF, shut down Lr, waited, and opened it again:

16btiffcompresseddev

The same memory footprint as the uncompressed file, even though the uncompressed file is 64 times as large. That’s what I expected.

I turned my attention to the 8-bit uncompressed TIFF. It uses this much memory:

8btiffdev

Wait a minute! The 8-bit file takes up more memory than the 16-bit one? Oh wait, Lr is probably keeping a copy of the 8-bit file around in addition to the 16-bit version that it created to work with.

What about an 8-bit compressed TIFF? Here you go:

8btiffcompdev

It looks like Lr is uncompressing the 8-bit file and keeping a copy of it around after it converts it to 16-bit.

Here’s the situation with the JPEG file:

8bitJPEGdev

Hmm. Lr makes a 16-bit version, but doesn’t keep the 8-bit uncompressed file like it does with TIFF.

OK, what about a 32-bit floating-point TIFF? I went back to Ps and converted the base file to 32 bits per color plane, then wrote the image out as a 32-bit compressed file. It compressed quite nicely:

filesize32

I imported the file into Lr, went to the Develop module, shut down Lr, waited, and opened it again. Here’s what I saw:

32bitfpTIFF

That’s too much memory for a 32-bit version of the image. So Lr probably stores a 32-bit floating point version of the image and another version. There’s almost, but not quite, enough room for both a 32-bit and a 16-bit version. This needs more research.

So a 2 GB file (uncompressed 16-bit TIFF) can take up less Lr memory than a 6 MB file (compressed 8-bit TIFF), and a 100 MB file (compressed 32-bit TIFF) can take up more than twice as much memory as a 2 GB file (uncompressed 16-bit TIFF again).

Another slit-scan sunset

While I was looking for the slit-scan sunset from yesterday’s and the day before’s posts, I found one that I’d never printed:

Betterlight_00071

Here’s what it took to make it work:

bl00071layers

No Matlab work required. Also, note that I didn’t create that place in the center of the image where the sun brightened up.

Tweaking the slit-scan sunset image

I managed to restore the fog to the lower left corner of the sunset image in the last post. When I edited the first version, I used a Photoshop plug-in called Contrast Master to give the clouds some sock and pull up the details in the dark areas. That plug-in is no longer installed on my main workstation, and I had no other way to make the fog look the way I liked it.

So I stole the fog from the previous version of the image, low-pass filtered it to remove some posterization — it is really down in the muck — sharpened it to get the edges back, brushed down the edges that were then too light, and plunked it down in the new image.

Now the layers look like this:

sssunsetwfog

And the result is:

Betterlight_00041-Matlab6cr-3


Matlab meets a new slit-scan image

I’m working on material for a presentation this weekend — more on that in a future post — and I decided to print this slit-scan sunset image:

Sunset with Fog Coming (n 0041

But when I looked at it closely, I saw some artifacts near the horizon:

Sunset with Fog Coming (n 0041

I figured the Matlab one-dimensional averaging code that I created for the succulents pictures could fix that with little change. I was right:

Betterlight_00041-Matlab6cr-2

But here’s the surprise: when I reworked the image, I came upon a whole new conception:

Betterlight_00041-Matlab6cr

Much more subtle and moody. Less in your face. I like it.

The reworking involved a lot of steps:

sunset layers


Matlab 10 is an image filtered with a 1×1024 (2^10) kernel, and Matlab 6 is filtered with a 1×64 (2^6) kernel. I only needed a little bit of Matlab 10. The Shadows and Highlights layer whose name doesn’t fit is derived from Matlab 6, with Topaz Adjust and Topaz Detail enhancements.
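For what it’s worth, here’s a sketch of how such layers could be produced in Matlab. The averaging-kernel shape, the filter direction, and the input file name are my guesses for illustration, not the actual code.

```matlab
% Sketch of the 1-D averaging layers described above (kernel shape, filter
% direction, and file name are assumptions).
img = im2double(imread('Betterlight_00041.tif'));   % hypothetical input file
k6  = ones(1, 2^6)  / 2^6;                          % 1x64 averaging kernel
k10 = ones(1, 2^10) / 2^10;                         % 1x1024 averaging kernel
matlab6  = imfilter(img, k6,  'replicate');         % smooths along one image dimension
matlab10 = imfilter(img, k10, 'replicate');
```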

Slit scan image processing

My silence over the last few days has not been because I’m on vacation. On the contrary, I’ve been really busy figuring out how to process the succulent slit-scan images. Doing it all in Matlab offers the most flexibility, but there’s not much interactivity. I create a set of parameters, process a bunch of images with them, wait a few minutes for the computer to do its work, look at the results, and think up a new set of parameters to try.

That works OK, though not really well, if there’s no clipping of the sharpened images to deal with. If there is, I’m at sea. I haven’t found an automagic way to deal with clipping the way Topaz Detail 3 does, and messing around with some algorithms has given me great respect for the people who invented the Topaz Detail ones. It’s clear to me that I could spend weeks or months fiddling with code and still not come up with anything as good as Topaz has.

Therefore, I’ve redefined success. I’m just doing the higher-frequency (smaller kernel, say up to a 15-pixel sigma) sharpening one-dimensionally. I use Topaz Detail for the lower-frequency work. One reason I can do that is that the noise in the image is so low; another is that I’m using the first pass of Topaz Detail on 56000×6000 pixel images, and I’ll be squishing them in the time (long) dimension later, so round kernels become elliptical after squeezing. Doing the 1D sharpening with small kernels makes visible clipping less likely.

Another important reason for my progress is that I’ve found a way to make the adjustments of the 1D image sharpening interactive. Rather than have the Matlab program construct the entire 1D filtered image, I’m having it write out monochromatic sharpened images at each kernel size, with aggressive (amazingly — at least to me — high) weights:

1dlumcode
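If you can’t make out the screenshot, here’s a rough sketch of that kind of 1D luminosity-sharpening pass. The kernel sizes, the weight, and the file names are placeholders rather than the values I actually used.

```matlab
% Rough sketch of a 1-D luminosity sharpening pass (kernel sizes, weight,
% and file names are placeholders, not the actual values).
img = im2double(imread('slitscan.tif'));                % hypothetical input
lum = rgb2gray(img);                                    % monochromatic version
for sigma = [2 4 8 15]                                  % small 1-D kernels only
    k = fspecial('gaussian', [1, 6*sigma + 1], sigma);  % 1xN Gaussian kernel
    blur = imfilter(lum, k, 'replicate');               % blur along one dimension only
    weight = 5;                                         % aggressively high weight
    sharp = min(max(lum + weight * (lum - blur), 0), 1);
    imwrite(uint16(round(65535 * sharp)), sprintf('sharp_sigma%02d.tif', sigma));
end
```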


Then I bring the original image plus layers for all the sharpened ones into Photoshop, and set the layer blend modes for the sharpened images to “Luminosity”:

1dlumlayers

Then I adjust each layer’s opacity to taste. 

Finally, when I see objectionable clipping, I brush black into the layer mask for the layer(s) that are making it happen. 

Not mathematically elegant. Not really what I was looking for at all when I started this project. But it gets the job done, and well. 
I may run into a problem with this method down the road, but it’s working for me on the one image I’ve tried it on.

One thing I tried that sort of worked for dealing with highlight clipping was scaling the floating point image file so that the brightest value in any color plane was unity, saving that as a 32-bit floating point TIFF, importing that into Lightroom, and using that program’s tone mapping functions. Lr treats the data in 32-bit FP files as scene-referred, so the tools are appropriate for dealing with clipping. For example, Lr’s Exposure tool produces non-linear saturation.
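Here’s a sketch of that normalize-and-export step as I’d reconstruct it; it assumes the floating-point image is already in memory as img, and it isn’t the code from my production pipeline.

```matlab
% Sketch of the normalize-and-export step (a reconstruction, not production
% code): scale so the brightest sample in any plane is 1.0, then write a
% 32-bit floating-point TIFF with Matlab's Tiff class.
img = single(img / max(img(:)));            % img: floating-point RGB image in memory
t = Tiff('scene_referred_32f.tif', 'w');
tags.ImageLength         = size(img, 1);
tags.ImageWidth          = size(img, 2);
tags.SamplesPerPixel     = 3;
tags.BitsPerSample       = 32;
tags.SampleFormat        = Tiff.SampleFormat.IEEEFP;
tags.Photometric         = Tiff.Photometric.RGB;
tags.PlanarConfiguration = Tiff.PlanarConfiguration.Chunky;
tags.Compression         = Tiff.Compression.LZW;
t.setTag(tags);
t.write(img);
t.close();
```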

The technique worked moderately well, but I had some problems in the shadow areas. I decided to abandon it, and so I never got to the more difficult problem of what to do about black clipping. I did notice that Lr truncates negative values in 32-bit FP TIFFs.