How to expose the moon?

Last night’s lunar eclipse occasioned a flurry of web traffic about how to set your camera to expose it correctly. I got to thinking – not always a good thing – about the problem, and the more I thought about it the harder it seemed.

Let’s assume that you’re making an image and you know the moon is the brightest thing in the field. Let’s make the further assumption that the moon is not large in the framed image; it’s only a component of the overall scene.

If you like to use your camera’s exposure meter – I don’t – you could set it to spot mode, meter the moon, and place it on Zone VII or (if you’re feeling lucky) VIII by opening up two or three stops from your meter reading. There’s a problem with this approach. Do you know that your camera’s spot meter is taking its reading entirely from the moon, and not averaging in parts of the sky? If it’s averaging in some of the dark sky, it will think the moon is dimmer than it is, and you’re likely to end up with blown highlights in the final image. It’s actually worse than that; the light from the edges of the moon is dimmer than the light from the center, since the sunlight hits the edges at an angle, so maybe you should only open up a stop or two.

If you’re a fundamentalist photographer, you’ll note that the moon is a gray rock lit by the sun, and therefore, to place it on Zone V, or turn it into a middle gray, you’ll use the “Sunny 16” rule and set the f-stop to f/16 and the shutter speed to one over the ISO setting. Shooting digital, you don’t want a gray moon, you want an ETTR moon, so open up two or, if you’re still feeling lucky, three stops. This is a pretty conservative way to go, since the moon’s reflectivity, at 12%, is lower than that of an 18% gray card. This is the calculation that Ansel Adams famously muffed in exposing the negative of Moonrise, Hernandez, New Mexico. He did make a nice save, though.
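
If you want that arithmetic spelled out, here it is as a Matlab fragment; the numbers are illustrative, not a recipe.

    % Sunny 16: f/16 at a shutter speed of one over the ISO puts a sunlit
    % subject near middle gray; open up from there for an ETTR moon.
    ISO       = 100;
    fstop     = 16;
    tSunny16  = 1 / ISO;                 % middle-gray exposure, in seconds
    ettrStops = 2;                       % two stops, or three if you're feeling lucky
    tETTR     = tSunny16 * 2^ettrStops;  % shutter speed for the ETTR version
    fprintf('f/%g at 1/%g second\n', fstop, 1/tETTR);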

The fundamentalist approach is useless during an eclipse, since you won’t know how bright the light falling on the moon is.

If, like me, you like to use the in-camera histogram, you could just make an exposure and look. If you’ve calibrated your camera’s settings using some variant of UniWB, the in-camera histo is a pretty good stand-in for the real raw histogram, and if you haven’t, you won’t blow the highlights, but you will probably not get a real ETTR exposure. However, there’s a fly in the ointment; the in-camera histogram is derived from the JPEG preview image, which is subsampled from the full-resolution sensor image. Unless the moon is reasonably large in the image, the subsampled JPEG is likely to omit the brightest pixel in the raw file. Even if it’s there, can you see one blown pixel on your camera’s histogram?

As far as I know, there’s no easy in-camera solution to getting a perfectly ETTR’d capture under the circumstances I’ve outlined here. You’ve got two choices: back off the estimated ETTR setting (probably the best move if you’re not fanatical about ETTR), or make a test image and look at the raw file (shooting tethered is a special case of this). A lunar eclipse takes long enough that that’s a viable option.
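
If you go the test-image route and want to look at the actual raw values, a few lines of Matlab are all it takes — assuming you’ve first pulled the raw data into a 16-bit TIFF (dcraw’s -D -4 -T options are one way to do that), and with a made-up clipping level standing in for your camera’s real one.

    % Count pixels at or above the sensor's clipping level in a test frame.
    % Both the file name and the clipping level are placeholders.
    raw       = double(imread('moon_test.tiff'));   % 16-bit linear raw data
    clipLevel = 15760;                              % look up your camera's saturation value
    nBlown    = sum(raw(:) >= clipLevel);
    fprintf('%d pixels at or above the clipping level\n', nBlown);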

Or maybe you could slap on a long lens, make a test image, look at the in-camera histogram, and put your shorter lens back on the camera. Assuming similar T-factors in the lenses, that should work fine.

This may be a good example of analysis paralysis.

How much image quality is enough?

When we photographers capture images, how much quality should we strive for? A lot depends on how much we know about the eventual use of the image.

Why not just strive for the highest possible quality? Once you say that’s your goal, you’ve signed up for very expensive equipment, the use of a tripod almost all the time, a camera bag that’s too heavy to carry for any distance, and probably a big collection of lights, stands, reflectors, soft boxes, gobos, and the like. And maybe an assistant or two.

Not many of us want to go there. So we compromise. How much we should compromise depends on our objectives for the images.

If we’re shooting for the web, a very small sensor is all we need, if the image can tolerate the deep depth of field that goes along with that decision. If the light’s bright, we may even be able to get away with a cell phone.

If we’re making small portfolios – say 6×8 inch images – a micro four-thirds camera will probably do the job. We may or may not need a tripod. We might need lights, though.

The magazine market isn’t what it used to be. Neither is the book world. But let’s consider them anyway. Now we need to consider the intent of our images. Does the image need sharpness, smooth tonality, elegant shading, and lighting that pops? We’re probably talking full frame 135-style cameras, and maybe medium format. If we’re doing fashion or product work, bring on the lights, diffusers, and assistants.

If we’re selling prints, how much capture quality we need depends on the size of the print. I don’t buy the theory that people back up as the print gets bigger, so resolution doesn’t matter. I think the bigger the print, the more variation in viewing distance you get. Time and time again, I’ve seen people back way up so they can get the gestalt, and then bore right in so they can see the details. When you see somebody doing that to your work, you don’t want the whole thing to fall apart if the viewer is a foot away. Big prints need big sensors. Big sensors need big lenses. We probably need heavy tripods, too.

Thus, when we trip the shutter, we should have a pretty good idea of how big a print we’ll ever want to make from that capture. That’s a tall order. Maybe it’s impossible. Who knows the future?

Here’s an example of what I’m talking about. I just received an order for this image (click here if you want to learn more about it):

Betterlight_00165-Edit

The client wants a 60×60 inch print. When I made the image, I was thinking of large prints — maybe 30×30 — as a possibility, but I had not contemplated one that large. The image is a 6000×6000 pixel squeeze from a 64000×6000 capture. Hence, there’s plenty of information in the vertical, but only enough for 100 ppi in the horizontal direction, or 1/3.6 of what is ideal and about half of what I’ll usually tolerate in a large print. Fortunately, the image doesn’t rely on ultimate crispness to make its point, so I went back to the original capture, and resampled it to 21600×21600. At least that way I’ll get to take advantage of all the vertical pixels, even if I’ll have to make up some horizontal ones.
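
For the record, here’s the arithmetic behind those numbers:

    % The 60x60-inch print arithmetic (no semicolons, so Matlab echoes the values).
    printInches  = 60;
    idealPPI     = 360;
    pixelsNeeded = printInches * idealPPI     % 21600 pixels on a side
    shortSidePPI = 6000 / printInches         % 100 ppi from the 6000-pixel dimension
    shortfall    = idealPPI / shortSidePPI    % 3.6 -- hence "1/3.6 of what is ideal"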

However, the available quality could easily have fallen short at that size for an image that needed to scream crisp to get its point across. That’s an example of a larger point. The attributes of image quality that you should strive to… to what? Not to maximize; that’s what this whole post is about. To get to acceptable levels? That sounds so engineering-driven and heartless. Anyway, the attributes of image quality that your work needs to fulfill its mission are the ones you need to concentrate on.

Trying to build too much quality into our images can lead to far fewer of them, as the cost and hassle of making pictures gradually overcomes our will to make art. Also, having more quality in the files than we ever use in the print is useless. On the other hand, walking around with nothing better than an iPhone means small images, limited photographic options, and — unless an iPhone happens to be your thing — a restricted ability to communicate as an artist.

The title of today’s post is a question that’s easy to ask, but hard to answer.

A new gallery

I’ve made some changes to the gallery section of the main web site. Actually, Robin Ward, who writes all the web site code and does all the heavy lifting, made the changes, and I am thankful to her. Anyway, the slit scans that had been in the New Work gallery are now in a gallery called Timescapes. The New Work gallery has been given over to some stitched panos made in Maine and Quebec with a handheld M240 and the 50 ‘Lux, synthetic slit scans of NYC subway cars and soccer players, a few autohalftoned firehouse images, and one lonely B&W semi-abstract.

You may notice that the subway images are dated 2011, and wonder how they get to be called new work. I date my images with the moment the exposure was made. These images were originally assembled manually using the visual language of the Staccato series. Last year, I reworked them with computer-driven techniques.

Here’s what I have to say about Timescapes in the artist’s statement:

For the last 25 years, from Alone in a Crowd, with its subject motion, through This Green, Growing Land and Nighthawks, which used camera motion, to Staccato, which stitched together little movies, most of my photography has been about movement in one way or another. Timescapes is explicitly so. In a normal photograph, the three-dimensional world is forced into a two-dimensional representation, with both dimensions representing space. In Timescapes, space is constrained even further, to only one dimension, and time becomes the second. Finish-line cameras at racetracks work the same way. In this series, I examine what happens in a one-pixel-wide line over a period of a minute or two to several hours.

Since the readers of this blog generally have a more technical bent than the viewers of my general web site, I’ll give you some technical details about how I did the work.

I started with a Betterlight scanning back on a Linhof or Ebony view camera. The back has a 3×6000 pixel sensor array that is moved across the image plane with a stepper motor. There’s a panoramic mode built into the Betterlight software. In that mode, the software expects that the camera is installed on a motorized rotary platform. It instructs the stepper motor in the back to position the line sensor in the center position, and leaves it there while it sends instructions to the motor in the platform to slowly spin the camera.

So how do I turn this back into a slit scan camera?

I lie to the software.

I tell it that the camera is on a rotary platform, when in fact it is stationary. Any changes to the image that the line sensor sees are the result of changes in the scene. From this simple beginning stem many interesting images.

Cleaning up sidecar files

My autohalftoning workflow has evolved to something like the following.

  • Write some code
  • Parameterize it
  • Find some parameters that produce interesting results
  • Set up the software to do some ring-arounds
  • Import the ring-arounds into Lightroom
  • Delete all but the good ones
  • Manually remove the orphaned sidecar files

The last step is not a lot of fun. I wrote some code to automate it:

sidecarcleanup
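
The screenshot above is the code I actually run; as a rough idea of what the cleanup amounts to, here’s a minimal sketch, assuming the sidecars are .xls files sitting next to .tif images in a single folder.

    % Delete any .xls sidecar whose matching .tif no longer exists.
    folder   = 'C:\halftones\ringarounds';           % illustrative folder name
    sidecars = dir(fullfile(folder, '*.xls'));
    for i = 1:numel(sidecars)
        [~, base, ~] = fileparts(sidecars(i).name);
        if exist(fullfile(folder, [base '.tif']), 'file') ~= 2   % orphaned sidecar
            delete(fullfile(folder, sidecars(i).name));
        end
    end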

Making sidecar files

One of the issues I’m having to confront in the autohalftoning work has come up before in my image-processing programming, but I’ve always sidestepped it. When I write an image-manipulation program, I try to parameterize all of the options, rather than change the code to invoke them. It makes it a lot easier to go back to a set of procedures that’s worked well, and use that as a starting point for further explorations. The problem has been keeping track of what parameters are associated with any particular processed image.

Up to now, I’ve dealt with the issue by manually assigning file names that indicate the processing. There are several problems with that approach. First, I don’t always remember to include all the parameters, or think that a few will be obvious to me when I look at the file later. Also, the parameter descriptions get pretty cryptic because I’m trying to keep the file names short. And the file names get awkwardly long anyway.

Inspired by the way that Adobe Camera Raw keeps track of the processing the user has picked for a particular raw file, I came up with a better approach than arcane, manually-created file names: sidecar files. ACR doesn’t store the output images; it uses sidecar files so that it can re-create the earlier processing on demand. That’s not quite my problem. I am perfectly happy to store the output files; I just want to be able to look at a summary of the processing steps.

I wrote a method to write sidecar files with the same names as the processed images, but in Excel format, so that they have an .xls extension instead of a .tif one. Here’s the code:

sidecar code
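
Again, the screenshot is the real thing; stripped down, the idea looks something like this, assuming the parameters arrive in a struct of scalars and strings.

    % Write a sidecar with the same base name as the processed image,
    % but with an .xls extension: parameter names in the first column,
    % values in the second.
    function writeSidecar(imageFileName, params)
        [folder, base, ~] = fileparts(imageFileName);
        names  = fieldnames(params);
        values = struct2cell(params);
        xlswrite(fullfile(folder, [base '.xls']), [names, values]);
    end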

There’s another advantage of this method. I can set up the autohalftoning code to do ring-arounds on any parameter or parameters I want. Then I can go through the images, find the ones I like, and look at the sidecars to see what the process was.

I recommend this approach to anyone rolling their own image processing code.

Adding a DC component to autohalftoning kernels

In addition to kernel size and construction, you can also get useful effects by making the kernel sum to a number slightly greater than zero. This means that the kernel is not strictly a highpass filter, but will preserve some low-frequency information. You don’t want much; multiplying the center element of a fence kernel by 1.01 to 1.10 seems to be the sweet spot, but you can have a great deal of control over the look of your image this way.

If you’re into weirdness, you can make the center entry slightly smaller than one, which gives an effect reminiscent of a photographic negative.

I wrote a little Matlab code to tweak the kernel center value.

adjustCenter
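
The screenshot shows the function I use; in outline it amounts to no more than this (a reconstruction from the description, not a copy).

    % Scale the center element of an odd-sized, square kernel.
    % A multiplier of 1.01 to 1.10 lets a little DC through; a value
    % slightly below 1 gives the negative-like look mentioned above.
    function kernel = adjustCenter(kernel, multiplier)
        c = (size(kernel, 1) + 1) / 2;       % index of the center element
        kernel(c, c) = kernel(c, c) * multiplier;
    end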

And here’s what it looks like applied in varying amounts to an image:

fs-1fenceBottom13clipoffsetp05thrp0kc1ClipHighmpy108

fs-1fenceBottom13clipoffsetp05thrp0kc1ClipHighmpy102

fs-1fenceBottom13clipoffsetp05thrp0kc1ClipHighmpy105

Fence kernels for autohalftoning

I didn’t have any writing to do yesterday, since the blog was given over to prewritten April Foolery, so I messed around with convolution kernels. I came up with a class that seems to give interesting effects. Kernels in this class, which I’m calling “fence” kernels, are square with odd numbers of rows and columns. The center element is unity. The other nonzero elements are negative numbers lying along the periphery – hence the name “fence” – and they’re scaled so that the whole kernel sums to zero, which makes it a high-pass filter.

The least interesting one has the entire kernel fenced in:

5x5all

Fencing just one side produces micro-embossed effects that vary with the textures in the image:

5x5right

Fencing the bottom is similar:

5x5bottom

Fencing two adjacent sides works the best for the greatest number of images:

5x5both

I’ve shown the above examples as 5×5 kernels, but in actual use, the lowest I go is 7×7 and the highest 21×21.
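
If you want to play with these yourself, here’s a sketch of a fence-kernel builder along the lines described above; the function name and interface are mine, and it assumes an odd, square kernel size.

    % Build an n x n fence kernel. sides names the fenced edges, e.g.
    % {'bottom'}, {'right','bottom'}, or {'top','bottom','left','right'}.
    function k = fenceKernel(n, sides)
        fence = false(n, n);
        if any(strcmp(sides, 'top')),    fence(1, :) = true; end
        if any(strcmp(sides, 'bottom')), fence(n, :) = true; end
        if any(strcmp(sides, 'left')),   fence(:, 1) = true; end
        if any(strcmp(sides, 'right')),  fence(:, n) = true; end
        k = zeros(n, n);
        c = (n + 1) / 2;
        k(c, c) = 1;                     % center element is unity
        k(fence) = -1 / nnz(fence);      % negatives scaled so the kernel sums to zero
    end

Something like fenceKernel(13, {'right','bottom'}) gives the two-adjacent-sides version at a size in the range I actually use.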

Here are some examples of how a single image responds to fence kernels of various orientations:

fk-1fenceRight11clipoffsetm11cliphigh

fk-1fenceBottom11clipoffsetm11cliphigh

fk-1fenceBoth9clipoffsetm11cliphigh

Irrational aspect ratios

Since the dawn of digital photography, practitioners of the art have been barred from a creative freedom enjoyed by their image-making predecessors. Oil and watercolor painters, printmakers, and, yes, chemically-based photographers could make an image any shape they wanted to. Not so in the digital world. When cropping a raster image, a digital photographer must either leave an entire pixel in, or take it out. The result is that the aspect ratio of the cropped photograph is limited to the ratios of the integer dimensions of a rectangular subset of the pixels in an image.

An example with a small image will make this clear. Let’s say that the original image is 3×4 pixels. The possible aspect ratios that can be cropped from this image are 3:4 (the entire image), 1:1 (3×3, 2×2, and 1×1 pixel crops), 1:4, 1:3, 1:2, 3:1, 2:1, 3:2, 2:3, and so on. 4:5, a common photographic aspect ratio, is simply not available. As the size of the original image, measured in pixels, increases, the choices grow, but it’s never been enough for the truly creative photographer.
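
If you want to check that enumeration, a couple of lines of Matlab will do it.

    % All crop aspect ratios available in a 3x4-pixel image.
    [w, h] = meshgrid(1:3, 1:4);       % every possible crop width and height
    ratios = unique(w(:) ./ h(:));
    disp(rats(ratios'))                % shown as fractions; 4/5 is not among them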

As often happens this time of year, the inspired geniuses at Lirpa Labs, the wholly-owned research subsidiary of the Sloof Lirpa Corporation, have invented a solution to a problem that many of us didn’t even know we had. Some of Sloof Lirpa’s previous breakthroughs include the DED-backlit LCD and the 4×5 cellphone.

This year, the Lirpa researchers took a look at what they could do to increase the limited selection of aspect ratios available to photographers. They explored the obvious, such as increasing the resolution of images to gain a wider variety of ratios. That wasn’t good enough for the Lirpalites. True, the number of aspect ratios available through that stratagem is infinite. But mathematicians have hierarchies of infinities, and the set of rational numbers, which contains every aspect ratio you can get by scaling and cropping, is the lowliest of infinities, the countable kind.

There is a much larger infinity out there, the set of all real numbers. How could the Lirpa scientists make that entire set available to photographers, both as aspect ratios and as horizontal and vertical dimensions? Pixels are integrally addressed by definition, so the number of pixels in each direction is an integer, and the ratio of those integers must be a rational number.

The breakthrough came when the researchers explored the ramifications of the fact that pixels don’t have to be square. Their unsquareness is not itself the breakthrough; such pixels are used often in moving images. But the aspect ratio of the pixel has always been rational. The new Lirpa pixels can have any real number for their aspect ratio, and thus, images composed of these pixels can themselves have any real number for their aspect ratio.

And here’s the great thing about the invention: the new type of pixels can be entirely implemented through the addition of a few new metadata fields. Rather than indicating the pixel aspect ratio with a number, which, in order to be representable in binary form, must perforce be rational, the new fields allow the specification of an algorithm for calculating the aspect ratio, in a programming language created for the purpose in a manner similar to the PostScript language. In order to make the specification of some useful aspect ratios simpler, commonly used ones such as e, pi, and the square root of two are predefined and may be invoked by reference.

Well, there you have it. A host of new possibilities are now available to you. The engineers at the parent company, Sloof Lirpa, who are responsible for making the Lirpa Labs discoveries practical, are hard at work creating displays and printers whose pixel dimensions are irrational. And the researchers, always staying at the forefront of human knowledge — and sometimes considerably beyond — have set their sights on complex aspect ratios.

Nonlinearities in autohalftoning

Before I get started with some of what you can do with autohalftoning, I need to say a few words about workflow. In most digital photography, you do all your creative work on the file at its original resolution, and resize and sharpen just before printing. By following that procedure, you are prepared to make prints of many different sizes with the minimum effort.

Not so with autohalftoning, or probably any other creative halftoning approach. This kind of halftoning has to be done at the end of the image editing cycle, after the printer and the image size have been determined. Since the halftoned image contains only a limited range of tones – only pure black and pure white in the ones I’m doing – you don’t want some resizing algorithm doing interpolation and producing intermediate tones. Every time you make a different-sized print, you have to start all over. That’s one more argument for doing the halftoning in a programming language that allows parameterizing operations so that they are appropriately tailored to the output image size, rather than trying to get Photoshop to do something it wasn’t designed for.

If you subtract two images in Photoshop, numbers less than zero are truncated, so the result is biased towards black. If you subtract two images in Matlab, nothing is truncated unless you want it to be. It turns out that the unclipped, linear images tend to be boring, and clipping is a reasonable way to bias the output. However, more flexibility than that provided by Photoshop is useful.

The first improvement is to allow the difference image to be clipped from the top instead of from the bottom. Clipping from the bottom produces a low-key effect:

fh15hpc0sigmam1offout

While clipping from the top gives a high-key look:

fhhikey13hpc0sigamma0offout

Another option is to change the clipping level so that it can occur at any number, not just zero. In order to make this tractable, I’ve implemented a clipping offset that is specified as a multiple of the standard deviation of the difference image. Numbers from 0.02 to 0.2 seem to give the best results so far.
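
In code terms, the idea is roughly this; d is the floating-point difference image from the convolution step, and whether “clipping” pins the out-of-range values to the clip level (as in this sketch) or to zero is a detail worth experimenting with.

    % Clip the difference image at an offset tied to its standard deviation.
    sigma   = std(d(:));
    clipVal = 0.05 * sigma;         % 0.02 to 0.2 times sigma seems to be the useful range
    lowKey  = max(d, clipVal);      % clip from the bottom: low-key result
    highKey = min(d, clipVal);      % clip from the top: high-key result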

Tomorrow, we will take a one-day vacation from autohalftoning, and this blog will be given over to a special product announcement.

Rolling my own autohalftoning

I’ve given up on Photoshop for halftone processing of the firehouse images. I can’t see what I’m doing because the software that goes from file resolution to screen resolution has a bias in favor of black over white, so the images look darker than they should when the resolution is set so they fill the screen. If I zoom into 1:1, Photoshop does just fine displaying the pixels as they are in the image, but then I can’t see the overall effect. Lightroom displays the images approximately as they will print, but having to save the file and look at Lightroom every time I wanted to check on the effects of a change in settings was just too cumbersome.

In addition, Photoshop’s tools aren’t well suited to this kind of image manipulation. I couldn’t find a way to precisely move one of the two out-of-register images. I replaced the curves that went on top of the two image layers with a threshold layer, but I couldn’t set the threshold to fractions of an eight-bit value, even though the images were in 16-bit form.

Then I thought about what was numerically going on in Photoshop, and I realized that there was an important processing step that I wanted to control, but that Photoshop wasn’t going to let me near without some contortions.

So I decided to do the whole thing in Matlab.

First off, what am I doing in general? It’s easy to get a handle on the specifics, but I find that I’m better able to see the possibilities if I can clearly state the general case. I call what I’m doing autohalftoning. Halftoning is the process of going from a continuous image to a binary one. There are many ways to do halftoning, the two most popular being screens (once real, now universally simulated) and diffusion dither. In both cases, some high-spatial-frequency signal is added to the image to assist in the conversion of the information from 8- or 16-bit depth (contone) to 1-bit depth (binary). So the “auto” part of my made-up word refers to using the high-frequency information in the image itself for dither. I’m not too proud to add noise if I need to, but so far, if I’m careful to stay away from the regions where the camera’s pattern noise is amplified, there seems to be no benefit.

In Photoshop, I offset two versions of the same image from each other by a small number of pixels, took the difference, and thresholded the result to get a binary image at the printer driver’s native resolution, which I then printed.

To do an equivalent operation in Matlab, I convolve the input image with a kernel that looks like this:

5x5offsetkernel

which performs the offsetting and the subtraction simultaneously. To simulate the way Photoshop computes differences, I truncate the negative parts of the image. That’s a pretty crude way to introduce what could be a subtle nonlinearity, so I’ve developed more sophisticated extensions of that basic approach; more on that in another post. Then I threshold the resultant image at a number that relates to the content of the output of the convolution operation, and that’s the binary image.

Here’s what you get with the kernel above:

fh17offsetc0sigmam05offout

It looks dull res’d down to screen size, but good coming out of the printer on a C-sized piece of paper.
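
Putting the pieces together, the basic pipeline is short enough to sketch in full. This is a simplified version with illustrative parameter values and a stand-in for the kernel in the figure, not the production code; it assumes a grayscale input and uses the Image Processing Toolbox’s imfilter.

    % Offset-and-subtract via convolution, Photoshop-style truncation of
    % negatives, then a content-dependent threshold to get a 1-bit image.
    img = im2double(imread('firehouse.tif'));    % contone input (file name illustrative)
    k = zeros(5, 5);
    k(3, 3) = 1;                                 % identity at the center...
    k(1, 1) = -1;                                % ...minus a diagonally offset copy
    d = imfilter(img, k, 'replicate');           % offset and subtract in one step
    d = max(d, 0);                               % truncate negatives, as Photoshop does
    level = 0.05 * std(d(:));                    % threshold tied to the convolution output
    binary = d > level;
    imwrite(binary, 'firehouse_halftone.tif');   % binary result, sized for the printer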

The advantages of working in Matlab are

  • Greater freedom in choosing the processing steps
  • Greater control over those steps
  • Perfect repeatability
  • Speed
  • Freedom from arbitrary bit depths; the interim images are all in double-precision floating point, and many of the processing variables are specified the same way
  • The ability to do automatic ring-arounds to observe the effect of multiple variable changes
  • The ability to have the processing depend on the image content

Details to come.