Showing work

Kim Weston is a talented photographer who specializes in nudes. He offers workshops both in the field and at Wildcat Hill, his history-rich house and studio, where his grandfather, Edward Weston, lived, worked, and made many of his famous images. Kim had a workshop over the weekend, and asked me to come by to show some of my work to his students after Saturday night dinner.

I jumped at the chance.

Not to sell pictures; I’ve shown work at Kim’s and other workshops and never sold a thing as a result. I think workshops can be a good source of print customers for the photographers who teach them. I’ve often been such a print buyer. However, most people buy prints one at a time, and if you’re going to buy a print from a teacher, you’re probably not going to buy one from anybody else at the same time.

[Image: Buttermilk Sunset (Betterlight_0330)]

For me, there are two strong reasons to show work to a knowledgeable audience.

First, there’s self-expression. For me, the satisfactions of making art are twofold: creating something that feels right, and showing it to someone who appreciates it. The audience has little to do with the creating part. In fact, if you think about the audience much during the art-making, you’ll compromise your art. But the job of being a photographic artist isn’t done until someone sees your work.

[Image: Sunset with Contrail]

The Internet is a marvelous medium for getting your work out into the world, but there’s little satisfaction for me unless I know that the work is appreciated. Web traffic logs — at best — count eyeballs, but that’s not enough. I love it when I get emails from people who’ve seen my web galleries and appreciate what I’ve done, but that happens less often than I’d like.

So the chance to be in the same room with the people seeing my work for the first time, to see their faces, observe their body language, and hear their questions and comments, is precious to me.

Second, there’s a reason to do more work. Some people treat an invitation to show work as nothing more than a trip to the boxes or flat files to pick out the prints to bring. If you have that mindset, preparing to show work is a curatorial exercise. For me it’s a chance to revisit a body of work and see if I can improve it.

The most exciting series for just about any photographer is the one they’re currently working on. I’m no different. I’ve been working on writing Matlab code to improve the Timescapes images, and last week I did my best to improve already-captured exposures in that series. I find that the prospect of an audience energizes me and pushes me to produce. That’s always a good thing.

[Image: Wind Shear (Betterlight_00189-1)]

There’s a downside to my approach to opportunities to show work: the audience gets to see only a single silo of your range. Since I’ve already made my peace with zero sales, that’s not costing anything, but it does mean that the audience has no context in which to see the work you’re showing. We all know that no body of work arises in splendid isolation from everything you’ve ever done before, and it would be nice to give viewers the chance to make connections with series other than the one you’re showing.

Maybe next time I’ll do a little curation of some of my other work and show both.


Lightroom memory use vs file size

On a forum that I frequent, a poster made the assertion that Lightroom memory usage was not so much a matter of the number of pixels in the image, but of the number of bytes in the file. That didn’t make any sense to me. I figured that Lr, like Photoshop, would decompress any file it was working on, so that the memory load would be a strong function of the number of pixels in the file. I also figured that Lr, unlike Ps, would convert all 8-bit-per-color-plane images into 16-bit or 32-bit ones, since it uses a linear version of ProPhoto RGB as its working space, and that would posterize with 8 bits per color plane.
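If that’s right, the in-memory footprint depends only on pixel count and working bit depth, not on the size of the file on disk. A back-of-the-envelope calculation shows the idea (the dimensions here are made up for illustration; they are not those of my test image):

    % Illustrative arithmetic only; the dimensions are hypothetical.
    width  = 18000;                % pixels
    height = 18500;                % pixels
    channels = 3;                  % RGB
    bytesPerChannel = 2;           % 16 bits per color plane
    footprintGB = width * height * channels * bytesPerChannel / 2^30
    % about 1.9 GB in memory, however small the file on disk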

I set up a test. I created a big image:

[Screenshot: bigtestimagedim]

I filled the image with a highly-compressible simultaneous contrast optical illusion:

[Image: SimContrastIllf8sm (simultaneous contrast test pattern)]
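If you wanted to construct that kind of test pattern programmatically, something along these lines would do it. This is only a guess at the general form; the actual illusion in the screenshot isn’t specified here:

    % A smooth gradient with a constant mid-gray bar: the bar reads lighter
    % against the dark end and darker against the light end, and the whole
    % thing compresses very well.
    w = 4096; h = 2048;
    img = repmat(linspace(0.1, 0.9, w), h, 1);   % horizontal gradient
    img(h/2-127 : h/2+128, :) = 0.5;             % constant mid-gray bar
    imwrite(im2uint16(repmat(img, [1 1 3])), 'simcontrast16.tif');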

I wrote it out as uncompressed TIFF, LZW-compressed TIFF, and JPEG, both in 16-bit and 8-bit versions:

[Screenshot: filesizes]

You can see that Ps writes 8-bit JPEGs even when it’s writing them from 16-bit images. We have a range of file sizes of about 300:1. Will Lr’s memory usage reflect anywhere near that range?
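For the record, I did the exports from Photoshop, but the same six variants could be written from Matlab along these lines (the file names are made up):

    img16 = imread('simcontrast16.tif');                 % 16-bit source
    img8  = uint8(img16 / 257);                          % 8-bit version
    imwrite(img16, 'test16_uncompressed.tif', 'Compression', 'none');
    imwrite(img16, 'test16_lzw.tif',          'Compression', 'lzw');
    imwrite(img8,  'test8_uncompressed.tif',  'Compression', 'none');
    imwrite(img8,  'test8_lzw.tif',           'Compression', 'lzw');
    imwrite(img8,  'test8.jpg', 'Quality', 90);          % JPEG is 8-bit only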

I imported all the images into Lr, picked the 16-bit uncompressed TIFF file in the gallery, but left it in light table mode. I shut down Lr, waited for the OS to take back all the memory Lr was using, and started it again. Here’s how much memory it used after it settled down (when it first comes up, it grabs a lot of memory, 6 or 8 GB on my machine, then rapidly gives it back to the OS):

[Screenshot: Lropen]

Well under half a GB.

Then I picked the same 16-bit uncompressed TIFF file, and opened up the Develop module. Then I shut down Lr, waited for its memory footprint to go to zero (you have to do this to get consistent results), and started the program again. Here’s what I saw:

[Screenshot: 16btiffdev]

It looks like Lr stores a 16-bit file as a 16-bit image, at least before it does something to it.

I went back to the gallery, picked the LZW-compressed 16-bit TIFF, shut down Lr, waited, and opened it again:

[Screenshot: 16btiffcompresseddev]

The same memory footprint as the uncompressed file, even though the uncompressed file is 64 times as large. That’s what I expected.

I turned my attention to the 8-bit uncompressed TIFF. It uses this much memory:

[Screenshot: 8btiffdev]

Wait a minute! The 8-bit file takes up more memory than the 16-bit one? Oh wait, Lr is probably keeping a copy of the 8-bit file around in addition to the 16-bit version that it created to work with.

What about an 8-bit compressed TIFF? Here you go:

[Screenshot: 8btiffcompdev]

It looks like Lr is uncompressing the 8-bit file and keeping a copy of it around after it converts it to 16-bit.

Here’s the situation with the JPEG file:

[Screenshot: 8bitJPEGdev]

Hmm. Lr makes a 16-bit version, but doesn’t keep the 8-bit uncompressed file like it does with TIFF.

OK, what about a 32-bit floating-point TIFF? I went back to Ps and converted the base file to 32 bits per color plane, then wrote the image out as a 32-bit compressed file. It compressed quite nicely:

[Screenshot: filesize32]

I imported the file into Lr, went to the Develop module, shut down Lr, waited, and opened it again. Here’s what I saw:

[Screenshot: 32bitfpTIFF]

That’s too much memory for a 32-bit version of the image. So Lr probably stores a 32-bit floating-point version of the image and another version. There’s almost, but not quite, enough room for both a 32-bit and a 16-bit version. This needs more research.
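Continuing the earlier arithmetic, with the same made-up dimensions, makes the reasoning concrete:

    % Same hypothetical dimensions as before
    pixels = 18000 * 18500;
    gb16   = pixels * 3 * 2 / 2^30    % 16-bit version: ~1.9 GB
    gb32   = pixels * 3 * 4 / 2^30    % 32-bit float version: ~3.7 GB
    gbBoth = gb16 + gb32              % both resident: ~5.6 GB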

So a 2 GB file (uncompressed 16-bit TIFF) can take up less Lr memory than a 6 MB file (compressed 8-bit TIFF), and a 100 MB file (compressed 32-bit TIFF) can take up more than twice as much memory as a 2 GB file (uncompressed 16-bit TIFF again).

Another slit-scan sunset

While I was looking for the slit-scan sunset from yesterday’s and the day before’s posts, I found one that I’d never printed:

[Image: Betterlight_00071]

Here’s what it took to make it work:

[Screenshot: bl00071layers]

No Matlab work required. Also, note that the place in the center of the image where the sun brightened up isn’t something I created.

Tweaking the slit-scan sunset image

I managed to restore the fog to the lower left corner of the sunset image in the last post. When I edited the first version, I used a Photoshop plug-in called Contrast Master to give the clouds some sock and pull up the details in the dark areas. That plug-in is no longer installed on my main workstation, and I had no other way to make the fog look the way I liked it.

So I stole the fog from the previous version of the image, low-pass filtered it to remove some posterization — it is really down in the muck — sharpened it to get the edges back, brushed down the edges that were then too light, and plunked it down in the new image.

Now the layers look like this:

[Screenshot: sssunsetwfog]

And the result is:

[Image: Betterlight_00041-Matlab6cr-3]


Matlab meets a new slit-scan image

I’m working on material for a presentation this weekend — more on that in a future post — and I decided to print this slit-scan sunset image:

[Image: Sunset with Fog Coming (0041)]

But when I looked at it closely, I saw some artifacts near the horizon:

[Image: Sunset with Fog Coming (0041), detail near the horizon]

I figured the Matlab one-dimensional averaging code that I created for the succulents pictures could fix that with little change. I was right:

[Image: Betterlight_00041-Matlab6cr-2]

But here’s the surprise: when I reworked the image, I came upon a whole new conception:

[Image: Betterlight_00041-Matlab6cr]

Much more subtle and moody. Less in your face. I like it.

The reworking involved a lot of steps:

[Screenshot: sunset layers]


Matlab 10 is an image filtered with a 1×1024 (2^10) kernel, and Matlab 6 is filtered with a 1×64 (2^6) kernel. I only needed a little bit of Matlab 10. The Shadows and Highlights layer whose name doesn’t fit is derived from Matlab 6, with Topaz Adjust and Topaz Detail enhancements.
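A layer like Matlab 6 amounts to a one-dimensional box filter in the time direction. A minimal sketch of what produces it, assuming time runs along the rows as in the succulents images (the file name and the use of imfilter are my assumptions, not the actual code):

    img = im2double(imread('Betterlight_00041.tif'));  % hypothetical file name
    k6  = ones(1, 64) / 64;                   % 1x64 (2^6) averaging kernel
    matlab6 = imfilter(img, k6, 'replicate'); % smooth in the time direction
    imwrite(im2uint16(matlab6), 'Betterlight_00041-Matlab6.tif');
    % swap in ones(1, 1024)/1024 for the Matlab 10 version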

Slit scan image processing

My silence over the last few days has not been because I’m on vacation. On the contrary, I’ve been really busy figuring out how to process the succulent slit-scan images. Doing it all in Matlab offers the most flexibility, but there’s not much interactivity. I create a set of parameters, process a bunch of images with them, wait a few minutes for the computer to do its work, look at the results, and think up a new set of parameters to try.

That works OK, though not really well, as long as there’s no clipping of the sharpened images to deal with. If there is, I’m at sea. I haven’t found an automagic way to deal with clipping like Topaz Detail 3 does, and messing around with some algorithms has given me great respect for the people who invented the Topaz Detail ones. It’s clear to me that I could spend weeks or months fiddling with code and still not come up with anything as good as Topaz has.

Therefore, I’ve redefined success. I’m doing just the higher-frequency (smaller kernel — say, up to a 15-pixel sigma) sharpening one-dimensionally; I use Topaz Detail for the lower-frequency work. One reason I can do that is that the noise in the image is so low. Another is that I’m using the first pass of Topaz Detail on 56000×6000 pixel images that I’ll be squishing in the time (long) dimension later, so round kernels become elliptical after the squeeze. Doing the 1D sharpening with small kernels makes visible clipping less likely.

Another important reason for my progress is that I’ve found a way to make the adjustments of the 1D image sharpening interactive. Rather than have the Matlab program construct the entire 1D-filtered image, I’m having it write out monochromatic sharpened images at each kernel size, with aggressive (amazingly — at least to me — high) weights:

[Code screenshot: 1dlumcode]
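The code in the screenshot isn’t reproduced here; the following sketch captures the idea as described, with my own names and an approximate 2.2 gamma standing in for whatever the real code does:

    img = double(imread('input.tif')) / 65535;    % hypothetical input
    img = img .^ 2.2;                             % approximate gamma removal
    lumCoef = [0.2974 0.6274 0.0753];             % luminance from Adobe RGB
    lum = lumCoef(1)*img(:,:,1) + lumCoef(2)*img(:,:,2) + lumCoef(3)*img(:,:,3);
    sigmas  = [3 5 15];                           % one layer per kernel size
    weights = [20 20 20];                         % aggressively high weights
    for i = 1:numel(sigmas)
        g2 = fspecial('gaussian', 6*sigmas(i) + 1, sigmas(i));  % 2-D Gaussian
        g1 = g2(:, 3*sigmas(i) + 1);              % vertical 1-D slice
        g1 = g1 / sum(g1);                        % normalize to unity
        hp = weights(i) * (lum - imfilter(lum, g1, 'replicate')); % high-pass
        mono = min(max(lum .* (1 + hp), 0), 1);   % sharpened luminance
        mono = mono .^ (1/2.2);                   % re-apply gamma
        imwrite(im2uint16(repmat(mono, [1 1 3])), ...
            sprintf('sharp_sigma%d.tif', sigmas(i)));
    end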


Then I bring the original image plus layers for all the sharpened ones into Photoshop, and set the layer blend modes for the sharpened images to “Luminosity”:

[Screenshot: 1dlumlayers]

Then I adjust each layer’s opacity to taste. 

Finally, when I see objectionable clipping, I brush black into the layer mask for the layer(s) that are making it happen. 

Not mathematically elegant. Not really what I was looking for at all when I started this project. But it gets the job done, and well. I may run into a problem with this method down the road, but it’s working for me on the one image I’ve tried it on.

One thing I tried that sort of worked for dealing with highlight clipping was scaling the floating point image file so that the brightest value in any color plane was unity, saving that as a 32-bit floating point TIFF, importing that into Lightroom, and using that program’s tone mapping functions. Lr treats the data in 32-bit FP files as scene-referred, so the tools are appropriate for dealing with clipping. For example, Lr’s Exposure tool produces non-linear saturation.
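Matlab’s imwrite won’t produce a 32-bit floating-point TIFF, so writing one takes the lower-level Tiff class. A sketch of the scaling and export, assuming a double-precision RGB image in img (the output name is made up):

    img = img / max(img(:));         % brightest value in any plane becomes unity
    t = Tiff('scene_referred.tif', 'w');
    t.setTag('ImageLength', size(img, 1));
    t.setTag('ImageWidth',  size(img, 2));
    t.setTag('Photometric', Tiff.Photometric.RGB);
    t.setTag('BitsPerSample', 32);
    t.setTag('SamplesPerPixel', 3);
    t.setTag('SampleFormat', Tiff.SampleFormat.IEEEFP);
    t.setTag('PlanarConfiguration', Tiff.PlanarConfiguration.Chunky);
    t.write(single(img));
    t.close();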

The technique worked moderately well, but I had some problems in the shadow areas. I decided to abandon it, and so I never got to the more difficult problem of what to do about black clipping. I did notice that Lr truncates negative values in 32-bit FP TIFFs.


One-dimensional sharpening

In the last couple of posts, I talked about how to smooth slit-scan photographs in the time direction. For the time being, I consider that a solved problem, at least for the succulents images.

These images require a lot of sharpening, because

  • the subject has a lot of low-contrast areas
  • there’s a lot of diffraction, because I’m using an aperture of f/45 on my 120mm f/5.6 Micro-Nikkor ED
  • even with that narrow f-stop, there are still parts of the image that are out of focus

I’ve been using Topaz Detail 3 for sharpening. It’s a very good program, allowing simultaneous sharpening at three different levels of detail, and having some “secret sauce” that all but eliminates halos and blown highlights. Like all sharpening programs that I’d used before this week, it sharpens in two dimensions.

However, I don’t want to sharpen in the time dimension, just the space one. Sharpening in the time dimension will provide no visual benefits — I’ve already smoothed the heck out of the image in that dimension — and could possibly add noise and undo some of my smoothing.

I decided to write a Matlab program to perform a variant of unsharp masking in just the space direction.

To review what one of the succulent images looks like after stitching and time-direction smoothing, cast your eyes upon this small version of a 56000×6000 pixel image:

[Image: Overall381]

The horizontal direction is time; the image will be rotated 90 degrees late in the editing process. The vertical direction is space, and is actually a horizontal line when the exposure is made.

Here’s the program I’m using to do sharpening in just the vertical direction, using a modification of the technique described in this patent.

First, I set up the file names, specify the coefficients to get luminance from Adobe RGB, and specify the standard deviations (aka sigmas) and weights of as many unsharp masking kernels as I’d like applied to the input images. There are four sets of sigmas and weights in this snippet:

[Code screenshot: 1dsharpCode1]

Then I read in a file and rotate the image if necessary so that the space direction is up and down:

[Code screenshot: 1dsharpCode2]

I convert the image from 16-bit unsigned integer representation to 64-bit floating point and remove the gamma correction, then compute a luminance image from that:

[Code screenshot: 1dsharpCode3]

I create a variable, accumulatedHp, to store the results of all the (in this case, four) high-pass filter operations. For each kernel, I create a two-dimensional Gaussian convolution kernel using a built-in Matlab function called fspecial, take a one-dimensional vertical slice out of it, normalize that to one, and apply the specified weight. Then I perform the high-pass filtering on the luminance image, store the result in a variable called hpLum, and accumulate the results of all the high-pass operations in accumulatedHp:

[Code screenshot: 1dsharpCode4]

Then I add one to all elements of the high-pass image to get a usm-sharpened luminance plane, and multiply that, pixel by pixel, by each plane of the input image to get a sharpened version:

[Code screenshot: 1dsharpCode5]

Finally, I convert the sharpened image into gamma-corrected 16-bit unsigned integer representation and write it out to the disk:

[Code screenshot: 1dsharpCode6]
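Since the screenshots don’t reproduce here, this is the whole program as one compact sketch. It follows the steps just described; the file names, the 2.2 gamma approximation, and the rotation test are my reconstructions, not the original code:

    inFile  = 'Overall381.tif';                   % hypothetical names
    outFile = 'Overall381-1dsharp.tif';
    lumCoef = [0.2974 0.6274 0.0753];             % luminance from Adobe RGB
    sigmas  = [3 5 15 35];                        % kernel sigmas, pixels
    weights = [5 5 5 2];                          % unsharp-mask weights

    img = double(imread(inFile)) / 65535;         % 16-bit uint -> 64-bit float
    if size(img, 2) < size(img, 1)                % rotate so space runs vertically
        img = rot90(img);                         % (rotation test is a guess)
    end
    img = img .^ 2.2;                             % remove gamma, approximately
    lum = lumCoef(1)*img(:,:,1) + lumCoef(2)*img(:,:,2) + lumCoef(3)*img(:,:,3);

    accumulatedHp = zeros(size(lum));
    for i = 1:numel(sigmas)
        g2 = fspecial('gaussian', 6*sigmas(i) + 1, sigmas(i));  % 2-D Gaussian
        g1 = g2(:, 3*sigmas(i) + 1);              % vertical 1-D slice
        g1 = g1 / sum(g1);                        % normalize to unity
        hpLum = weights(i) * (lum - imfilter(lum, g1, 'replicate'));
        accumulatedHp = accumulatedHp + hpLum;    % accumulate all passes
    end

    usm   = 1 + accumulatedHp;                    % sharpening multiplier
    sharp = min(max(img .* repmat(usm, [1 1 3]), 0), 1);  % apply and clip
    imwrite(uint16(round(sharp .^ (1/2.2) * 65535)), outFile);  % back to 16-bit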

How does it work?

Pretty well. Here’s a section of the original image at 100%:

[Image: Image8orig]

And here it is one-dimensionally sharpened with sigmas of 3, 5, 15, and 35 pixels, and weights of 5, 5, 5, and 2:

[Image: Image83-5-15-35 5-5-5-2]

If we up the weight of the 15-pixel high-pass operation to 9, we get this:

[Image: Image83-5-15-35 5-5-9-2]

For comparison, here’s what results from a normal two-dimensional unsharp masking operation in Photoshop, with a weight of 300% and a radius of 15 pixels:

[Image: Image8USM300-15]

Finally, here’s what Topaz Detail 3 does, with small strength, small boost, medium strength, and medium boost all set to 0.55, large strength set to 0.3, and large boost to 0:

[Image: Image8Topaz55-55-55-55-30-0]

One thing that Topaz Detail does really well is keep the highlights from blowing out and the blacks from clipping. I’m going to have to look at that next unless I decide to bail and just do light one-dimensional sharpening in Matlab and the rest in Topaz Detail.

Eliminating median filtering in the time direction

Median filtering is computationally intensive at large extents, and Matlab is poor at parallelizing this operation. Here’s a graph of some timings for one-dimensional filtering of a 6000×56000 pixel image using both median filtering and averaging with a block filter of the same size as the median filter’s extent:

[Graph: medVaAvgGraph2]
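The graph doesn’t reproduce here, but a comparison of this kind can be rerun with something like the following (sizes scaled down for convenience, and medfilt2/conv2 are my choices of functions, not necessarily the originals):

    img = rand(1000, 10000);             % small stand-in for a 6000x56000 plane
    extent = 257;                        % filter extent in the time direction

    tic;
    medOut = medfilt2(img, [1 extent], 'symmetric');   % 1-D median filtering
    tMedian = toc

    tic;
    k = ones(1, extent) / extent;        % block filter of the same extent
    avgOut = conv2(img, k, 'same');      % 1-D averaging
    tAverage = toc                       % typically far faster than the median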

I did a series of analyses to see which was better. I thought going in that median filtering was more appropriate, because it tends to preserve edges, and also because it is good at rejecting outliers entirely. I was right about rejecting outliers, but it turns out that preserving edges is not what I want. One source of edges in the time dimension is the several-second recycling time of the Betterlight back, and I want to reject those edges.

Another source of edges is the artifacts that develop around sudden luminance transitions. I’m not sure of the source of these, but I suspect chromatic aberrations in the lens.

Here’s one with median filtering plus averaging:

[Image: medonly]

And with averaging only:

[Image: avgonly]


Averaging is better. It’s not usual that the computationally cheapest solution is the best, but it is here.

Note that averaging in the time direction (left to right in these pictures) does nothing for the blue artifact that runs in that direction. I’ll have to clean that up by hand later.

Downsampling and averaging

In yesterday’s post, I downsampled images successively, a factor of two at each step, in an attempt to get averaging at the same time. I was working with the images today, and it didn’t look like I was getting the desired effect.

Then it hit me.

I was doing exactly the wrong thing. Downsampling by a factor of two each time meant that there would always be a pixel at the source resolution right where I needed a pixel at the target resolution. Since I was using bilinear interpolation, I’d just get that pixel. I might as well have been using nearest neighbor!

Rather than figure out some tricky way to downsample in stages, I just applied an averaging filter in the time dimension, then downsampled in one step.

[Image: avgthenresample]

Much better. Faster, too. Matlab is pretty darned swift at convolution.
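In code, the fix amounts to one convolution followed by a single resampling step. A sketch, assuming time runs along the rows (the factor and file names are made up):

    img = im2double(imread('stitched.tif'));      % hypothetical input
    n = 16;                                       % downsampling factor, time axis
    k = ones(1, n) / n;                           % 1-D averaging filter
    smoothed = imfilter(img, k, 'replicate');     % average in the time direction
    small = imresize(smoothed, [size(img,1), round(size(img,2)/n)], ...
        'bilinear', 'Antialiasing', false);       % plain bilinear, one step
    imwrite(im2uint16(small), 'downsampled.tif');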