One-dimensional sharpening

In the last couple of posts, I talked about how to smooth slit-scan photographs in the time direction. For the time being, I consider that a solved problem, at least for the succulents images.

These images require a lot of sharpening, because

  • the subject has a lot of low-contrast areas
  • there’s a lot of diffraction, because I’m using an aperture of f/45 on my 120mm f/5.6 Micro-Nikkor ED
  • even with that narrow f-stop, there are still parts of the image that are out of focus

I’ve been using Topaz Detail 3 for sharpening. It’s a very good program, allowing simultaneous sharpening at three different levels of detail, and having some “secret sauce” that all but eliminates halos and blown highlights. Like all sharpening programs that I’d used before this week, it sharpens in two dimensions.

However, I don’t want to sharpen in the time dimension, just the space one. Sharpening in the time dimension would provide no visual benefit — I’ve already smoothed the heck out of the image in that dimension — and could possibly add noise and undo some of my smoothing.

I decided to write a Matlab program to perform a variant of unsharp masking in just the space direction.

To review what one of the succulent images looks like after stitching and time-direction smoothing, cast your eyes upon this small version of a 56000×6000 pixel image:

Overall381

The horizontal direction is time; the image will be rotated 90 degrees late in the editing process. The vertical direction is space, and is actually a horizontal line when the exposure is made.

Here’s the program I’m using to do sharpening in just the vertical direction, using a modification of the technique described in this patent.

First, I set up the file names, specify the coefficients to get luminance from Adobe RGB, and specify the standard deviations (aka sigmas) and weights of as many unsharp masking kernels as I’d like applied to the input images. There are four sets of sigmas and weights in this snippet:

1dsharpCode1
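
Since the code above appears as a screenshot, here’s a rough text sketch of that setup; the variable names, file names, and the Adobe RGB luminance coefficients below are placeholders of mine, not the actual values in the program.

    % Rough text version of the setup -- names and values are illustrative
    inputFileName  = 'succulentSmoothed.tif';      % placeholder file name
    outputFileName = 'succulentSharpened.tif';     % placeholder file name

    % Approximate Adobe RGB (1998) luminance coefficients
    lumCoeffs = [0.2974 0.6273 0.0753];

    % Sigmas (in pixels) and weights for the unsharp masking kernels
    sigmas  = [3 5 15 35];
    weights = [5 5 5 2];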

Then I read in a file and rotate the image if necessary so that the space direction is up and down:

1dsharpCode2
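
In rough form, with the orientation test being my guess at the logic:

    % Read the image; rotate if necessary so that space runs vertically
    img = imread(inputFileName);
    if size(img, 1) > size(img, 2)      % more rows than columns: time is vertical
        img = imrotate(img, 90);
    end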

I convert the image from 16-bit unsigned integer representation to 64-bit floating point and remove the gamma correction, then compute a luminance image from that:

1dsharpCode3
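
Approximately like this, treating the Adobe RGB tone curve as a simple 2.2 gamma:

    % 16-bit unsigned integers to doubles in [0,1], then remove the gamma
    linImg = (double(img) / 65535) .^ 2.2;

    % Luminance plane from the linear RGB
    lum = lumCoeffs(1) * linImg(:,:,1) + ...
          lumCoeffs(2) * linImg(:,:,2) + ...
          lumCoeffs(3) * linImg(:,:,3);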

I create a variable, accumulatedHp, to store the results of all the high-pass filter operations (in this case, four). Then, for each set of sigma and weight, I create a two-dimensional Gaussian convolution kernel using a built-in Matlab function called fspecial, take a one-dimensional vertical slice out of it, normalize that to one, perform the high-pass filtering on the luminance image and store the result in a variable called hpLum, apply the specified weight, and accumulate the results of all the high-pass operations in accumulatedHp:

1dsharpCode4
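
The heart of the program looks something like the sketch below. The kernel-size rule, the use of imfilter, and implementing the high pass as the luminance minus its vertically blurred version are my assumptions about the details.

    % Accumulate the weighted one-dimensional high-pass results
    accumulatedHp = zeros(size(lum));
    for i = 1:length(sigmas)
        % 2-D Gaussian from fspecial; keep only the center column
        kSize   = 2 * ceil(3 * sigmas(i)) + 1;      % assumed size rule
        kernel2 = fspecial('gaussian', kSize, sigmas(i));
        kernel1 = kernel2(:, ceil(kSize / 2));
        kernel1 = kernel1 / sum(kernel1);           % normalize to one

        % Vertical high pass: luminance minus vertically low-passed luminance
        hpLum = lum - imfilter(lum, kernel1, 'replicate');
        accumulatedHp = accumulatedHp + weights(i) * hpLum;
    end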

Then I add one to all elements of the high-pass image to get a USM-sharpened luminance plane, and multiply that, pixel by pixel, by each plane of the input image to get a sharpened version:

1dsharpCode5
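
Something like this, assuming the multiplication is done on the linear data:

    % One plus the accumulated high pass acts as a per-pixel gain;
    % apply it to each plane of the linear input image
    gain = 1 + accumulatedHp;
    sharpened = zeros(size(linImg));
    for plane = 1:3
        sharpened(:,:,plane) = linImg(:,:,plane) .* gain;
    end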

Finally, I convert the sharpened image into gamma-corrected 16-bit unsigned integer representation and write it out to disk:

1dsharpCode6
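
Roughly:

    % Clip, reapply the gamma, convert to 16 bits, and write to disk
    sharpened = min(max(sharpened, 0), 1);
    outImg = uint16(round((sharpened .^ (1 / 2.2)) * 65535));
    imwrite(outImg, outputFileName, 'tif');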

How well does it work?

Pretty well. Here’s a section of the original image at 100%:

Image8orig

And here it is one-dimensionally sharpened with sigmas of 3, 5, 15, and 35 pixels, and weights of 5, 5, 5, and 2:

Image83-5-15-35 5-5-5-2

If we up the weight of the 15-pixel high-pass operation to 9, we get this:

Image83-5-15-35 5-5-9-2

For comparison, here’s what results from a normal two-dimensional unsharp masking operation in Photoshop, with a weight of 300% and a radius of 15 pixels:

Image8USM300-15

Finally, here’s what Topaz Detail 3 does, with small strength, small boost, medium strength, and medium boost all set to 0.55, large strength set to 0.3, and large boost to 0:

Image8Topaz55-55-55-55-30-0

One thing that Topaz Detail does really well is keep the highlights from blowing out and the blacks from clipping. I’m going to have to look at that next unless I decide to bail and just do light one-dimensional sharpening in Matlab and the rest in Topaz Detail.

Eliminating median filtering in the time direction

Median filtering is computationally intensive at large extents, and Matlab is poor at parallelizing this operation. Here’s a graph of some timings for one-dimensional filtering of a 6000×56000 pixel image using both median filtering and averaging with a block filter of the same size as the median filter’s extent:

medVaAvgGraph2
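
If you want to run that kind of comparison yourself, the timing code amounts to something like this; the image here is a small random stand-in, not the real 6000×56000 file.

    % Time 1-D median filtering vs. 1-D box-filter averaging of the same extent
    img    = rand(1000, 8000);          % stand-in for the real image
    extent = 64;

    tic;
    medOut = medfilt2(img, [1 extent]); % median in the time (horizontal) direction
    tMedian = toc;

    tic;
    kernel = ones(1, extent) / extent;  % box filter of the same extent
    avgOut = imfilter(img, kernel, 'replicate');
    tAverage = toc;

    fprintf('median: %.2f s, average: %.2f s\n', tMedian, tAverage);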

I did a series of analyses to see which was better. I thought going in that median filtering was more appropriate, because it tends to preserve edges, and also because it is good at rejecting outliers entirely. I was right about rejecting outliers, but it turns out that preserving edges is not what I want. A source of edges in the time dimension is the several-second recycling time of the Betterlight back, and I want to reject those edges.

Another source of edges is the artifacts that develop around sudden luminance transitions. I’m not sure of the source of these, but I suspect chromatic aberrations in the lens.

Here’s one with median filtering plus averaging:

medonly

And with averaging only:

avgonly


Averaging is better. It’s not often that the computationally cheapest solution is also the best, but it is here.

Note that averaging in the time direction (left to right in these pictures) does nothing for the blue artifact that runs in that direction. I’ll have to clean that up by hand later.

Downsampling and averaging

In yesterday’s post, I downsampled images successively, by a factor of two at each step, in an attempt to get averaging at the same time. I was working with the images today, and it didn’t look like I was getting the desired effect.

Then it hit me.

I was doing exactly the wrong thing. Downsampling by a factor of two each time meant that there would always be a pixel at the source resolution right where I needed a pixel at the target resolution. Since I was using bilinear interpolation, I’d just get that pixel. I might as well have been using nearest neighbor!

Rather than figure out some tricky way to downsample in stages, I just applied an averaging filter in the time dimension, then downsampled in one step.

avgthenresample

Much better. Faster, too. Matlab is pretty darned swift at convolution.
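
In code, the whole replacement amounts to just a few lines; the file name, filter length, and reduction factor below are illustrative.

    % Average in the time (horizontal) direction, then downsample in one step
    img        = imread('stitchedScan.tif');       % placeholder file name
    downFactor = 16;
    kernel     = ones(1, downFactor) / downFactor; % box filter in the time direction
    smoothed   = imfilter(img, kernel, 'replicate');
    small      = imresize(smoothed, ...
                     [size(img, 1), round(size(img, 2) / downFactor)], 'bilinear');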

Mitigating subject motion in slit scan images

As many of you know, I’ve been doing slit scans of plants, and I’ve been struggling with image artifacts due to subject motion. In the past few days I’ve been working on a Matlab program to deal with the artifacts and at the same time assemble several images into a complete, visually seamless composite.

In this post, I’ll walk through what the program does, and how it does it. The reason for posting this level of detail is not mainly to help out the photographers who are doing the same kind of photography — I figure that they’re thin on the ground — but to give those of you with some programming skills an idea of the kinds of things you could do to help your own photography, by letting you do things that you can’t do in Photoshop and automating things that you have to do manually in Ps.

The files that form the basis for the final image are 14 in number, each one a 58176×6000 pixel TIFF produced by a Betterlight Super 6K scanning back. Each file represents about 45 minutes of total exposure. The program optionally performs median filtering on the images with a kernel that’s 128 pixels in the time direction and 1 pixel in the other one, reduces the number of pixels in the time direction by a factor of 16 using an algorithm calculated to average out noise, assembles all the images into one, and then performs more median filtering. It writes out all the individual files and a series of composite ones with varying amounts of median filtering.

The first part of the program is just housekeeping, setting up file names and a few constants:

matlabhousekeeping
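
In text form, the housekeeping is on the order of this; all of the names and values are placeholders, not the ones in the screenshot.

    % Housekeeping -- names and values are illustrative
    inputFolder   = 'scans/';        % placeholder
    outputFolder  = 'processed/';    % placeholder
    numImages     = 14;              % number of source exposures
    timeReduction = 16;              % factor by which to squeeze the time direction
    doMedianFirst = false;           % optional pre-reduction median filtering
    medianExtent  = 128;             % extent of that optional filtering, in pixels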

Median filtering is a computationally intensive operation for large extents, and Matlab’s implementation does no parallelization, at least if you don’t have the Parallel Computing Toolbox. Using 128×1 extents on the original files took a long time and, although it removed most of the artifacts, it didn’t get them all. So I was pleased to find that using an 8×1 extent on the files that had been compressed by a factor of 16 in the time dimension produced substantially the same results. I left the median filtering in the program, but made it optional, and turned that option off.

The way I compress the files in the time dimension is through repeated applications of bilinear interpolation, reducing the size of the file by a factor of two at each step. The program next sets up an array of image sizes:

imagesizes
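
Something like this, assuming 6000×58176 source files:

    % Target sizes for the successive factor-of-two reductions in time
    numSteps   = log2(timeReduction);       % a 16x reduction takes 4 halvings
    sourceSize = [6000 58176];              % rows (space) by columns (time)
    imageSizes = zeros(numSteps, 2);
    for step = 1:numSteps
        imageSizes(step, :) = [sourceSize(1), sourceSize(2) / 2^step];
    end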

Then I read in the files, rotate them if necessary (Betterlight and Matlab don’t always see eye to eye on orientation), and (the first time around) set up the image container that will receive the composite image:

forloop1
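
The body of that loop is roughly as follows; the file-name pattern and the assumption that the squeezed exposures get concatenated in the time direction are mine.

    % Inside the per-file loop: fileIndex runs from 1 to numImages
    fileName = sprintf('%simage%02d.tif', inputFolder, fileIndex);  % placeholder pattern
    img = imread(fileName);

    % Betterlight and Matlab can disagree on orientation; make time horizontal
    if size(img, 1) > size(img, 2)
        img = imrotate(img, 90);
    end

    % First time through, allocate the composite container
    if fileIndex == 1
        squeezedCols = size(img, 2) / timeReduction;
        composite = zeros(size(img, 1), squeezedCols * numImages, 3, 'uint16');
    end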


Then I do the median filtering if it’s desired:

medfilt1
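
Which comes down to something like this (medfilt2 works on one plane at a time):

    % Optional median filtering in the time direction, per color plane
    if doMedianFirst
        for plane = 1:3
            img(:,:,plane) = medfilt2(img(:,:,plane), [1 medianExtent]);
        end
    end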

Then I do the successive resamplings and write out the files:

resample
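
Roughly like this; for brevity I’m only writing out the final squeezed size here, and the output file name is a placeholder.

    % Successive factor-of-two bilinear reductions in the time direction
    for step = 1:numSteps
        img = imresize(img, imageSizes(step, :), 'bilinear');
    end
    imwrite(img, sprintf('%ssqueezed%02d.tif', outputFolder, fileIndex), 'tif');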

Finally (for this for loop, anyway), I put the squeezed files into the composite container:

assemble
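
Assuming simple concatenation in the time direction, that step is just an indexed assignment:

    % Drop the squeezed file into its slot in the composite container
    firstCol = (fileIndex - 1) * squeezedCols + 1;
    composite(:, firstCol:firstCol + squeezedCols - 1, :) = img;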

All that’s left is to do various amounts of median filtering and write out the filtered composite files:

finalmed1
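
Something on the order of this, with the extents and output names as illustrative stand-ins:

    % Median filter the composite in the time direction at several extents
    % and write out each version
    for extent = [32 64 128 256 512]
        filtered = composite;
        for plane = 1:3
            filtered(:,:,plane) = medfilt2(composite(:,:,plane), [1 extent]);
        end
        imwrite(filtered, ...
            sprintf('%scomposite_med%03d.tif', outputFolder, extent), 'tif');
    end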

There are short-duration disturbances due to wind, and longer ones that are caused by insects crawling around on the leaves of the plant. I thought at first that I’d have to pick between different extents of median filtering depending on what was going on minute to minute. However, I was pleasantly surprised to find that there was no perceptible degradation to the image sharpness even when using extents of 512 (!) pixels.

Here’s an insect-driven (slow) artifact, median filtered with an extent of 32×1:

med32

It’s softened, but still ugly.

With an extent of 64×1, it looks like this:

med64

Almost all gone.

Doubling the extent again to 128, we get this:

med128

A hair better.

It looks like this operation is not going to require any hand tweaking, at least for the succulent pictures.

Success here has emboldened me to go back to subjects whose motion had stymied me in the past, like trees with blowing branches. More to come.


Traveling with Sony alpha 7’s

I returned on Sunday from a 9-day trip to the Canadian Rockies, staying outside of the town of Banff. It wasn’t specifically a photographic trip. It was mainly a family holiday. But, of course, I planned to make photographs. What kind of photographs? I thought a little landscape work even though the times of best light are usually when the family is asleep or eating. Most of the pictures would be people, though.

What gear to take?

I considered the delightful Nikon D810, which, by virtue of its improved autofocus, is a more versatile instrument than its predecessor. However, by the time I added a backup body and a few lenses, it turned out to be more weight than I wanted to carry, either in my travel bag or while hiking. Nikon lenses, at least the ones I have, are pretty heavy.

I considered taking an M240, the Sony alpha 7S as backup, and a few M lenses that could be used on either body, plus the Zony 55mm f/1.8 for when I wanted autofocus. That would have reduced the travel weight considerably. The Leica lenses tend to be small and light, even though they are dense. However, the M240 body is fairly weighty.

I finally settled on the Sony a7 and the a7S, the Zony 55mm, the Sony 70-200 f/4 OSS FE zoom, and the Leica 24mm f/3.8 Elmar. The Zony 55 is wonderfully sharp and light, and it has AF. The zoom has unexceptional, but adequate, clarity, and is light compared to a 70-200 f/2.8, although no lighter than the Nikon equivalent. The Elmar is a crisp lens with great drawing, and it works well on the a7S (but not on the a7).

I figured I could make panos with the 55 if I wanted a lot of pixels, and it would be a good indoor lens. The Leica 24 would work well on the trail, and I could use zone focusing if things moved fast. The zoom would mostly be useful outside for people pictures, and could be pressed into service if I found cooperative animals.

I put a 64GB card in the a7S and a 128GB one in the a7, giving me a little over 5000 shots per camera. I brought along spare cards in a Pelican carrier, but did not intend to use them. I took 2 extra batteries and the Sony charger.

I took an Adorama Slinger bag and an LL Bean fanny pack. I could fit everything in the Slinger and everything but the zoom in the fanny pack. I use Domke wraps to keep things from banging together, and chalk bags for smaller lenses.

What did I learn?

In fluid situations, the alpha 7’s make you go to the menu system more than I’d like. This is especially difficult in bright light, and if you’re wearing a hat. What’s the deal with the hat? Bright light makes both the LCD screen and the EVF hard to see. When you’re wearing a hat and you lean closely over the LCD screen so that it’s shaded, the camera thinks that you’ve put it up to your eye and turns the LCD off.

My standard for camera user interfaces is the Nikon D4, and I understand that the pro-level Canons are much the same. On the D4 – and the D8x0, for that matter – there is a series of dedicated buttons for the most commonly used functions, and a passive monochromatic LCD panel on top of the camera which, if anything, is more visible in bright light. Press the button, twiddle one or both of the control wheels, and you’re done.

The Sony a7 series makes moves in the direction of direct access to some functions, with a dedicated exposure compensation control, the two top-level wheels (I’ve turned the rotating dial on the back of the camera off because it’s too easy to spin by accident), and the user-assignable buttons. But without the feedback provided by the Nikon’s top-of-camera LCD screen, when the light gets bright, you’re in the dark about what the camera is doing.

The alpha 7 series cameras are highly customizable. That’s a good thing. If you’re using two or more of them in one photo session, you’d be well advised to configure all the cameras as close to the same way as is possible. Mine were set up similarly, but not identically, and several times I said that, when I got back to the hotel, I’d take half an hour or so and set them up the same. I never did, though. Best to do it before the trip.

Although the autofocus of the a7 is not bad by mirrorless standards, and the a7S AF is pretty good when it’s really dark, neither camera can AF remotely as well as a D810. It doesn’t make much difference when the subject is stationary, but when things are moving around on you, your frame rate will drop and you’ll have more than a few misfocused images.

The low weight is a real boon on the trail. Having both cameras, the 55, and the 24 in a fanny pack makes for a light load.

There have been complaints about alpha 7 battery life. You will hear none from me. You can make almost a thousand images on one of the little batteries if you don’t chimp much. I carried two extras in my pack and never needed them.

In a week or so, I’ll have edited the images and will report more.

Nikon D810 high-ISO noise reduction

Yesterday’s experiments with the D810 dark-field noise, which showed that Nikon employs some low-pass filtering when long exposure noise reduction is turned on, made me wonder if there was any similar filtering at high ISO settings, a la the Sony a7S.

I looked at one of the green channels of exposures made at ISO 64 and ISO 50K at 1/8000 second, with all in-camera noise reduction turned off.

ISO 50K

ISO 64

No low-pass filtering is evident.

What if you turn high ISO noise reduction on? Here is a dark-field exposure with it set to “High”:

D810Hi ISO NR Hi

No low-pass filtering is evident. I checked “Low” and “Normal” with the same results.

High ISO NR does do something, though. Here’s the histogram with it set to “Off”:

HiISONROff

And set to “High”:

HiISONRHi


Nikon D810 noise reduction raw processing

Yesterday I reported on some of the processing the D810 does to raw files, apparently in an attempt to reduce noise. A few weeks ago, I published these curves, which look at dark-field noise vs shutter speed with the in-camera long exposure noise reduction on and off.

Long exposure noise reduction on

Long exposure noise reduction off

You will note that there is some processing taking place between 1/4 second and 1 second (inclusive) that looks to be similar whether long exposure noise reduction is turned on or off.

You will also note that the long exposure noise reduction starts at 1.3 seconds, and that its effect is to increase the standard deviation of the dark-field noise, not decrease it as you’d expect.

Let’s look at the histograms of the 1/5 second image and the 1/4 second one. It doesn’t matter if long exposure noise reduction is turned on or off; they look like this:


1/5 second

1/4 second

What are the differences? First, there’s that double-filled histogram bucket near the black point. That’s obvious, but probably unimportant. The big news is not in the graphical histograms, but in the numbers to their right, particularly the maximum values. The transition from 1/5 to 1/4 second invokes some kind of processing that has the effect of reducing or eliminating outlier pixels.

You’d think that something like that could affect the ability of the image to hold detail, but it doesn’t seem to hurt materially. Here are the spectra of the two images:


1/5 second

1/4 second

Now let’s look at what’s going on as the shutter speed changes from 1 second to 1.3 second with long exposure noise reduction turned on. Here are the relevant histograms:


1 second

1.3 seconds

You can see the bulk of the histogram getting broader as the shutter speed gets longer, which is what the graph at the top of this post indicated. But note the maxima: the 1.3 second image has a significantly lower maximum than the 1 second image.

Plotting the maxima rather than the standard deviation across the entire range of shutter speeds tells the story, albeit noisily:

D810MaxRNvsshutterNRoff D810MaxRNvsshutterNRon


Except for two green channel maxima, long exposure noise reduction substantially reduces the values of the worst-case (hot) pixels.

The maximum is not the best measure for hot pixels; the 99th percentile, or even the 99.9th percentile, would be better. However, RawDigger doesn’t — yet — give you that information, and I don’t think this is important enough to write a Matlab script to do the job. (If someone wants it done, sing out and I’ll give it a shot.)

Now let’s look at the spectra of those two exposures:

1 second

1.3 seconds

The low-pass spatial filtering component of the D810’s long exposure noise reduction appears to be independent of shutter speed, at least in the range between 1.3 seconds and 15 seconds.

Nikon D810 long exposure noise reduction

There has been some discussion on the web about long-exposure noise reduction in the Nikon D810 that occurs even when the menu setting for such processing is set to off. I have seen some indications of this kind of processing in my dark-field versus shutter speed tests.

Now that I have a tool for analyzing the spatial spectra of images, I can take a look at some of the Nikon dark-field images made when I took the data for these graphs.

First, the dark-field spectrum for the green channel, 1/8000 second at ISO 1000:

D810RN8000thG

Except at very low spatial frequencies, the spectra are flat, which indicates white noise and therefore no spatial filtering.

Here’s the histogram of that image:

histo 203

There’s a little clipping at a bit over 570, and there are dropouts in the red and blue channels because of Nikon’s 14-bit digital white balance prescaling, but the histograms look normal otherwise.

Now at 15 seconds with the camera set for no long exposure noise reduction:

D810RN15s254G

There is essentially no low-pass spatial filtering taking place.

The histogram of the 15 second, no NR image:

histo 254

A little different, particularly the double-high bucket at around 600 in the red and blue channels. You’d expect this with digital prescaling, and I don’t know why it didn’t occur with the 1/8000 second image.

Now with long exposure noise reduction invoked:

D810RN15sNRonG

Quite a bit of spatial filtering.

The histogram:

histo 306

There’s only one empty bucket. Thus, the spatial filtering is not median filtering, which cannot fill holes in the histogram; it must be some kind of averaging filter.
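
If you want to convince yourself of that reasoning, a tiny Matlab experiment will do it; this sketch uses synthetic even-valued data rather than a raw file.

    % Median filtering returns values already present in the data, so it
    % cannot fill empty histogram buckets; an averaging filter can.
    data = 2 * randi([0 1000], 500, 500);           % only even values

    medOut = medfilt2(data, [3 3]);
    avgOut = round(conv2(data, ones(3) / 9, 'same'));

    any(mod(medOut(:), 2))   % 0: the odd buckets are still empty after median filtering
    any(mod(avgOut(:), 2))   % almost surely 1: averaging populates the odd buckets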

I’ll do some more testing and report.


CCD vs CMOS: an end to the war?

“What war?” you say, “CMOS won that war a long time ago.” If the criterion is chips shipped, I agree. But CCDs have their fans in the photographic community, and they tend to be vocal. I’m from Missouri about the supposed color rendering advantages of CCDs, and I really don’t like the poor dynamic range of current CCD implementations.

There is now technology available to freely mix and match CMOS and CCD structures on the same chip.

This should allow on-chip ADCs with CCD arrays. As the paper states, it could also offer more flexibility in global shutter implementations. Although no consumer cameras employ this technology yet, they could soon, given sufficient demand.


Easy ETTR for Canon users

From the mailbag:

Thanks for the blog series on ETTR and UniWB. I’m not sure, but I may have been among the earliest to describe the trivial way for UniWb with Canon: lens cap black or totally blown over-exposure. I was unable to easily find the date on your blog article.

You may or may not be aware that Magic Lantern (ML) implemented RAW histogram, leading to RAW blinkies, RAW preview, RAW review, auto-ETTR, and eventually to RAW video. With that feature, UniWB is a non-issue … totally unnecessary.

Really not meaning to “pat myself on the back”, but that came about due to my “Feature Request” for RawDigger-like capabilities in ML.

http://www.magiclantern.fm/forum/index.php?topic=5149.msg31959#msg31959