Lens character

This is a somewhat inconclusive set of musings about what photographers call lens character, or the way a lens draws.

But before we get into that, I’d like to confess my biases. I’m an engineer by training, and I’ve spent most of my life either doing research, designing things, or managing others who do those things. I have an unromantic approach to camera equipment, and, maybe, an unromantic approach to life, that can be summed up in the following joke:

A pastor, a doctor and an engineer were waiting one morning for a particularly slow group of golfers.

Engineer: “What’s with these guys? We must have been waiting for 15 minutes!”

Doctor: “I don’t know, but I’ve never seen such ineptitude!”

Pastor: “Here comes the greens-keeper. Hi, Dave. What’s with that group ahead of us? They’re rather slow.”

Dave: “Oh, yes, that’s a group of blind fire fighters. They lost their sight saving our clubhouse from a fire last year, so we always let them play for free anytime.”

The group was silent for a moment.

Pastor: “That’s so sad. I’ll say a special prayer for them tonight.”

Doctor: “Good idea. I’m going to contact my ophthalmologist buddy and see if there’s anything he can do for them.”

Engineer: “Why can’t these guys play at night?”

Lens character is often described in the same kind of soaring rhapsody of metaphors that you find in wine newsletters and high-end hi-fi mags.

Breathtaking accuracy, a spacious soundstage, pinpoint localization, deep, powerful bass and thrilling dynamics…

A seamless classic, it offers a symphony of red and black currants, Asian plum sauce, lavender, and underbrush. Sweet Christmas fruitcake characteristics emerge from this magnificent creation. The seamless integration of acidity, tannin, wood and alcohol, the brilliant length and overall compelling complexity and richness make it one of the great classics from this historic estate.

It’s a naïve little domestic, but I think you’ll be amused at its presumption.

Zeiss is cooler, more clinical, more contrasty, lots of micro-contrast, punchier colors. Leica is warmer, more human, lower contrast, more natural colors.

To some extent, imprecise language is necessary in all three cases if the real speaker, wine, or image isn’t available. However, in photography, it’s usually pretty easy to supply an image or two that illustrate what the words are trying to describe. This is done with what I consider remarkable infrequency. I don’t know what the problem is. It’s not like wine, where it’s not practical to include samples with the newsletter (wouldn’t that be nice!), or hi-fi where the mag can’t ship the whole setup to your house so you can experience what they’re talking about. On the ‘net we may not be able to simulate printed output, but we can surely attach screen images at whatever resolution we desire.

There is one difference between wine and the other two cases, and I think it’s the beginning of a better way to think about lens character. There is no one perfect wine, a wine that all the wines in the world are trying to be. Each wine is an expression of what can be done with particular fruit grown in particular places in particular growing seasons. So a description that includes positive and negative metaphors is appropriate, if we can agree on a vocabulary and samples are unavailable. However, anyone who has purchased wine based upon what the seller — or even a disinterested third party – had to say in a newsletter knows that there is many a slip between the description and the lip.

But with hi-fi and lenses, there is an ideal that is out there. With hi-fi, it’s the original sound field, assuming there was one. With lenses, it’s the perfect diffraction-limited optic that brings all wavelengths to a common focus with no distortion, no aberrations, and has no departures from perfection in its rendering of out-of-focus objects. Notice what’s happening here. I’m describing the perfect lens mostly in terms of what it doesn’t do.

And that’s what I think lens character is all about. I think a lens’s character is the totality of its departure from perfection. I further think that the purple prose that gets applied to what some consider to be highly desirable lens character is the description of endearing errors.

That doesn’t make all such talk silly. Perfect lenses are impossibilities. Nearly perfect ones are impractical: you wouldn’t be able to afford one, much less a bag full. Lens designers make tradeoffs among types of errors. Some of them bias the tradeoffs in the direction of maximizing one set of criteria. Others pick different things to try to get right. Things not on the list to optimize don’t get optimized.

Photographers want different things out of their lenses. The things that are important to me vary with the subject, the lighting, and my intended use of the images. Lens character that may be a plus in one situation could be a disqualifying drawback in another.

But let’s not get carried away with the descriptive poetry when we talk about lens character. Let’s focus on what the departures from perfection are, and how they fit the intended use of the lens.

Why can’t those guys play at night?

Two more from Death Valley

Two that are a bit more whimsical than yesterday’s:

Siesta time at Stovepipe Wells Airport

Tourist snaps in the cold wind at Ubehebe Crater

And here are a few from Monday morning, back home:


[Group 1]-_DSC6919__DSC6937 (2)-17 images_0001-Edit


Shooting stars, bridle path, and trees

Self portrait

Death Valley Days

I went to Death Valley for the weekend. I took an IR-modified Sony a7, and an unmodified a7II, plus three lenses: the Coastal Optical 60mm f/4, the Nikon 28mm f/1.4 D, and the Zony 35mm f/2.8 FE.  I ended up using the Nikon 28 on the IR camera almost all the time.

I placed a spare battery and a charger next to my case as I was packing, but I must have forgotten to put them in, because they were nowhere to be found when I got to Furnace Creek, and when I got home, there they were on the desk.

The Sony alpha 7 cameras have a fearsome reputation for chewing their way through batteries, so I was worried.

I needn’t have been.

I made more than 1500 exposures on the IR-modified a7 the first two days, and checked the camera’s battery on the morning of the third: 52%. I swapped the nearly-full battery from the a7II in, then made about 300 exposures on the last day. When I got home, the camera said the battery was 82% charged.

I am surprised and pleased. I was shooting panos, had auto review turned off, and only turned the camera on when I thought I might want to make an exposure. I’m sure all that contributed to my results. The first battery was a Wasabi 1.3 amp-hour unit, and the second a Sony 1.02 amp-hour one.
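For the curious, the implied endurance is easy to extrapolate. This is just back-of-the-envelope arithmetic, and it assumes the battery gauge tracks remaining energy linearly, which lithium-ion fuel gauges only approximate:

```python
def exposures_per_charge(exposures, fraction_of_battery_used):
    """Extrapolate exposures per full charge from a partial discharge."""
    return exposures / fraction_of_battery_used

# Days 1-2: more than 1500 exposures took the 1.3 Ah Wasabi from 100% to 52%.
print(round(exposures_per_charge(1500, 0.48)))  # over 3000 exposures per charge

# Day 3: about 300 exposures took the Sony battery from roughly full to 82%.
print(round(exposures_per_charge(300, 0.18)))   # over 1600 exposures per charge
```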

Here is a first cut at some of the pictures.

20 Mule Team Canyon

Dante’s View. A new twist on an old chestnut.

Sandstorm, rain showers, and clouds

Self-portrait, 20 Mule Team Canyon

On the road again.

Ubehebe Crater

Oasis. Furnace Creek.

And one I-was-here-and-so-was-the-rainbow record shot:

[Group 1]-_DSC0121__DSC0129-9 images_0001-Edit

D810 live view heating effects at 30 second exposures

A few days ago, I posted some graphs that indicated that using live view on the D810 had no material effect on dark-field noise at 1/2000 second shutter speeds. Yesterday, I repeated the tests at 1/30 second and 1 second shutter speeds, with similar results.

I’ve received a request to do more testing at really long shutter speeds. 30 seconds is as long as the D810 will do without special tricks, so that’s what I picked.

Here’s the protocol. In a 68-degree F (20 degrees Celsius) room, I set a D810 up in manual mode, with 14 bit raw file precision. I set the ISO to 800, which is the highest ISO on the D810 where there is no clipping of dark-field images. I set the shutter to EFCS at 30 seconds, the aperture to f/16, the shutter mode to single shot, and the exposure delay to 0. With the lens cap on, I made a series of several exposures with live view off, starting each exposure immediately after the preceding one ended. I shut the camera off for half an hour, then I made another series about a minute apart with live view on. Thus, in the second series, live view was on for 30 seconds, then the shutter was open for 30 seconds.

I analyzed the files in RawDigger, both for almost the entire frame, and for a 200×200 central area, averaging the standard deviation of the captures for all four raw channels.
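RawDigger does this measurement directly, but for readers who want to repeat it on their own files, here is a minimal sketch in Python with NumPy. The frame below is synthetic Gaussian noise standing in for a real dark frame; with real data you would load the Bayer mosaic with a raw decoder such as rawpy instead:

```python
import numpy as np

def dark_field_sigma(mosaic, crop=None):
    """Average the standard deviations of the four raw Bayer channels.

    mosaic: 2-D array of raw ADC counts.
    crop:   optional (rows, cols) of a centered crop, e.g. (200, 200).
    """
    if crop is not None:
        r, c = crop
        cy, cx = mosaic.shape[0] // 2, mosaic.shape[1] // 2
        mosaic = mosaic[cy - r // 2 : cy + r // 2, cx - c // 2 : cx + c // 2]
    # The four raw channels sit at the four (row, column) parities.
    channels = [mosaic[i::2, j::2] for i in (0, 1) for j in (0, 1)]
    return float(np.mean([ch.std() for ch in channels]))

# Synthetic dark frame: read noise of 3 counts around a 600-count black level.
rng = np.random.default_rng(0)
frame = rng.normal(600.0, 3.0, size=(1000, 1500))

print(dark_field_sigma(frame))               # whole frame: close to 3.0
print(dark_field_sigma(frame, (200, 200)))   # central crop: also close to 3.0
```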

The results:




The effect of the heating induced by the use of live view on the dark-field noise is greater than at faster shutter speeds, but it is certainly not dramatic. In fact, I consider it fairly small.

There appears to be virtually no self-heating-induced increase in dark-field noise from simply having the shutter open and the sensor collecting light for long periods of time.

D810 live view’s effect on dark-field noise, longer exposures

A few days ago, I posted some graphs that indicated that using live view on the D810 had no material effect on dark-field noise. Several people have expressed interest in seeing the test repeated at longer shutter speeds.

I’ll repeat the protocol. In a 68-degree F (20 degrees Celsius) room, I set a D810 up in manual mode, with 14 bit raw file precision. I set the ISO to 800, which is the highest ISO on the D810 where there is no clipping of dark-field images. I set the shutter to EFCS at 1/30 second, the aperture to f/16, the shutter mode to single shot, and the exposure delay to 0. With the lens cap on, I made a series of several exposures with live view off, and another series about a minute apart with live view on. I repeated the test with the shutter speed set to 1 second, with long exposure noise reduction off.

I analyzed the files in RawDigger, both for almost the entire frame, and for a 200×200 central area, averaging the standard deviation of the captures for all four raw channels.

The 1/30 second results:



Pretty much what we saw at 1/2000 second.

At one-second:



Well, that’s interesting. The effect is actually less at one second. That’s because the D810 has some non-defeatable long-exposure processing.



NEX adapters that are the right length

A year or so ago, I wrote this post complaining that lens adapters were always too short. Then, in this post, I reported that, at least in the case of Novoflex, they were too short by design. A week or so ago, in this post, I told you all that I had received two adapters from Kipon, and that they were the right length.

I have since received three more Kipon adapters for Sony NEX/alpha 7 cameras, for a total of five:

  • one for Nikon S lenses
  • two for Leica R lenses
  • one for Leica M lenses
  • one for Nikon F lenses.

All of them are the right length, based on testing with many lenses and two cameras, to the following criteria:

  • All lenses can focus to infinity.
  • When all lenses are focused to infinity, the distance indicator on the lens is close to the infinity marker on the rotating barrel.

In addition, the Kipon adapters appear to be well made, and fit tightly, but without excessive torque required, at both the camera and the lens end. They have a raised red plastic bump on the camera side — similar to the one on Leica M lenses — so that you can bayonet the lens in by feel alone.

They’re not that easy to find, but if you do a search on Amazon you’ll find some. I got mine from a camera store in Japan, and the service has been fine. As a bonus, they are substantially cheaper than the Novoflex adapters.

How much do you do in post?

I made infrared tree exposures Saturday and Sunday, and spent most of yesterday stitching and editing the images.

In photography, there are two extreme views of post processing. The first is that it’s all over when you release the shutter. Get that right and you don’t have to do anything else. In the film era, if you thought that way, you shot slides, and you looked down on people who needed to do darkroom work to get good images. You thought it was sloppy photography to improve the image after the fact.

Today, people who are hard over in that direction probably shoot JPEGs, and don’t own a copy of either Lightroom or Photoshop.

Then there were the people for whom the negative — and, if you thought that way you made negatives — was just a jumping off place. Compositing, weird toning, printing through screens, putting Vaseline on the enlarging lens, it was all fair game. William Mortensen was a famous practitioner of this kind of photography.

Ansel Adams had a foot in both camps, and expressed his perspective eloquently: “The negative is the score, and the print is the performance.”

When you stitch, you’ve already headed a long way in the do-it-in-post direction, but I’m going further than when I started this series. At first, I’d make a set of images and stitch the whole set. Now, I’m making more exposures than I think I need for each set, and trying a lot of different combinations in post. I think I’m getting better results, but it makes for long editing sessions.

Yesterday’s keepers:

A 34-image stitch:

[Group 2]-_DSC3541__DSC3574-34 images_0001-Edit


A three-image stitch:

[Group 3]-_DSC3611__DSC3613-3 images_0000-Edit


Working with the foreground:

[Group 2]-_DSC3541__DSC3557-12 images_0000-Edit


A little free-form framing:

[Group 8]-_DSC4199__DSC4262-64 images_0001-Edit


Dawn, with a storm on the way:

[Group 2]-_DSC3850__DSC3901-52 images_0000-Edit

[Group 3]-_DSC3905__DSC3949-16 images_0000-Edit


Does using live view make your images noisier?

It has been speculated that using live view, because it heats up the sensor, will add to the shadow noise in images, and should be avoided. An extreme twist on this point of view says that you shouldn’t use mirrorless cameras because, with the exception of the M240, their live view is on all the time, wreaking havoc on your shadows.

That didn’t sound right to me.

In a 68-degree F (20 degrees Celsius) room, I set a D810 up in manual mode, with 14 bit raw file precision. I set the ISO to 800, which is the highest ISO on the D810 where there is no clipping of dark-field images. There happened to be an Otus 85 on the camera. I left it there, secure in the knowledge that the dark field images would be of very high quality. I set the shutter to EFCS at 1/2000 second, the aperture to f/16, the shutter mode to single shot, and the exposure delay to 0. With the lens cap on, I made a series of several exposures with live view off, and another series about a minute apart with live view on.

I brought the images into RawDigger, and selected nearly the whole frame.

Here’s the histogram of the first image:



The gaps in the red and blue pixels are due to Nikon’s white balance prescaling.

I plotted the average standard deviation (aka sigma) of all four channels vs exposure number. Here’s the graph:


The vertical axis is the standard deviation in ADC counts. You can see that the self-heating introduced by live view is in evidence. You can also see that the effect of that heating is tiny.

If we look at just a 200×200 pixel central sample, we can see an even smaller effect, indicating that the main component of the heat-induced live view noise is pattern noise:
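The reasoning is that the full-frame standard deviation includes slow spatial variation (the fixed pattern), while a small central crop sees mostly the random component. Here is a synthetic illustration, with a made-up top-to-bottom gradient standing in for the heat-induced pattern; the numbers are invented for the sketch, not measurements from the D810:

```python
import numpy as np

rng = np.random.default_rng(1)
h, w = 1000, 1500

random_noise = rng.normal(0.0, 3.0, size=(h, w))          # random read noise, sigma = 3
pattern = np.linspace(0.0, 6.0, h)[:, None] * np.ones(w)  # fake heat-induced gradient
frame = 600.0 + random_noise + pattern                    # dark frame in ADC counts

# Whole frame: the gradient inflates the measured sigma well above 3.
print(frame.std())

# 200x200 central crop: the gradient varies little there, so sigma stays near 3.
crop = frame[h // 2 - 100 : h // 2 + 100, w // 2 - 100 : w // 2 + 100]
print(crop.std())
```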



Learning on the job

No Battle Plan Survives Contact With the Enemy

Thus spoke German military strategist Helmuth von Moltke. He was apparently right about war. The obvious corollary certainly applies to photography.

And thus it is with my infrared trees series.

Let’s start with lenses. I started out with the LifePixel-modified Sony alpha 7, the Coastal Optical 60mm f/4, the Leica 28mm f/2.8 Elmarit-R, and the Zeiss 15mm f/2.8 Distagon ZF.2.

The 15 is too wide for most things:

[Group 1]-_DSC1422__DSC1450-22 images_0000-Edit

The Coastal Optical is an especially nice lens when used with the LifePixel SuperColor filter, which passes both blue light and IR, since that lens can bring both bands to a focus at the same point. It is also very sharp, although that turns out not to be too important given the next point. I’m finding myself thinking pretty wide in this series, and thus I have to stitch a lot of images with the 60 to get one pano. This picture took 68:

[Group 2]-_DSC2645__DSC2718-68 images_0000-Edit

The 28 seems to be the most generally useful focal length of the three that I’m using. However, I have a problem with the Elmarit-R. I shoot into the sun a lot in this series, using tree branches to partially, but not completely, shade the lens:

[Group 4]-_DSC1485__DSC1507-21 images_0000-Edit

Under these circumstances, the Elmarit-R often has artifacts that are a lot of trouble to clean up. I am switching to the Nikon 28mm f/1.4 D, which is not as prone to this kind of thing.

I have been experimenting with adding visible-light-blocking filters in front of the lens, especially when working with the 28mm lenses. This serves to limit the bandwidth of the light that falls on the sensor, and makes it easier for the lens to achieve good focus without stopping down a lot. I usually use the moderate cutoff wavelength provided by an R72 filter. However, now there’s less light, and, at the ISOs I feel comfortable using, the shutter speeds are getting a little long. In order to deal with that, I bought another a7II, and sent it off to LifePixel last week for the installation of their standard IR filter. With the IBIS in that camera, I should be able to confidently use 1/15 second with the 28.

I know I said in an earlier post that I’d be ordering my IR cameras with an all-pass filter and providing filtration in front of the lens to achieve greater flexibility. I’ve since found out that, shooting into the sun the way I do, any filter has a tendency to aggravate flare or artifact problems.

On the stitching front, I had been using AutoPano Giga exclusively. However, I had some trouble when I was dealing with the Elmarit-R’s artifacts that caused me to try PTGui instead. I like PTGui. I like its masking functions better than AutoPano’s. However, it needs more help to stitch to my satisfaction, so I’m concentrating on not having the artifacts in the first place, and have gone back to AutoPano in the main.

I started out trying to frame the sets of images in a regular fashion, relying on the fact that I was doing it all handheld to provide the irregular edges. Then I started deliberately changing framing and angles as I made the exposures. I now do that to some extent, but I’m relying on overshooting the number of pictures I need by a factor of two or three to get AutoPano to do a good stitch, and editing the image set in post to get interesting edges, and to have the right relationships between the objects in the image and the edges of the image.

I am not deleting the raw files, thinking that I may want to go back to them at some point. The downside is that, as I get more and more of them, it’s going to be harder to sort the wheat from the chaff. Lightroom is currently not much help here. There are rumors that the next version of Lr will have panorama capability. That might help a lot.


Screen resolution for print simulation

We saw a couple of days ago that viewing 36MP images on-screen at 2:1 exposed flaws that were invisible in 19×12.67 inch prints from an Epson 4900 on Exhibition Fiber paper, even upon close inspection with reading glasses, though they were visible with a loupe. I need to add one more specification into the mix: the monitor that I’m using for this test is the NEC PA301W, which has 2560×1600 pixels in a display that’s just shy of 16 inches high, for a pixel pitch of just under 100 pixels per inch.
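A quick check of that pixel-pitch arithmetic (the 16.05-inch display height is my approximation of "just shy of 16 inches"):

```python
def pixels_per_inch(pixels, inches):
    """Pixel pitch along one axis of a display."""
    return pixels / inches

# NEC PA301W: 2560x1600 pixels, a hair under 16 inches tall.
print(round(pixels_per_inch(1600, 16.05), 1))  # just under 100 ppi
```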

Caveat: to really do this thing right would require multiple subjects, careful experiment design, rigid control of lighting and viewing distance, and lots of other things that you’re not going to find here. If you’re looking for good science, move on. If you’re looking for a rough idea and maybe some ideas for doing your own experiments, then you may get something out of this post.

The experiment of the previous post brings two questions to mind:

  • What is the right screen resolution to use on my monitor with my eyes to simulate the sharpness of a 36 MP image as a 19×12.67 inch print?
  • How would that resolution scale to other camera resolutions and print sizes?

Here are screen shots of comparisons at 1:1 (using PNG file format):

f/2.8 1:1

f/5.6 1:1

And at 1:2:

f/2.8 1:2

f/5.6 1:2

The left image in each pair is the one from the Nikon AF-S 85mm f/1.4 G, and the right one is from the Zeiss Otus 85mm f/1.4. Click the images and set your browser to 1:1 to see them properly.

For reference, here are the scans from the prints:





The Zeiss image is the top one and the Nikon the bottom one.

My take is that the right screen magnification is between the two examples posted above, say about 1:1.5. Lr doesn’t offer that magnification. I have the advantage that I am looking at actual printer output and uncompressed images, but I hope that you conclude something similar.

The right screen resolution should scale with print size, assuming that all prints are closely inspected. That means that 1:2 would be a good screen resolution to simulate what you’ll see in a 13 inch wide print, and 1:1 would simulate a 26 inch wide print fairly well.

The right screen resolution should also scale with camera resolution. That means that, for our 19-inch wide print, 1:1 would simulate what you’d see with an 18 megapixel camera, and 1:2 would give you an idea of what the print would be like if you have a 72 MP camera.
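Both scaling rules can be folded into one rule of thumb. The sketch below is my own framing, anchored to the rough baseline found here (1:1.5 for a 36 MP file printed 19 inches wide, inspected closely, on a roughly 100 ppi monitor); d scales with the square root of megapixels (the linear pixel count) and inversely with print width:

```python
import math

def screen_magnification(megapixels, print_width_in,
                         base_mp=36.0, base_width_in=19.0, base_d=1.5):
    """Return d such that viewing on-screen at 1:d roughly simulates
    the sharpness of a closely inspected print of the given width."""
    return base_d * math.sqrt(megapixels / base_mp) * (base_width_in / print_width_in)

for mp, width in [(36, 13), (36, 26), (18, 19), (72, 19)]:
    print(f"{mp} MP, {width}-inch print -> view at about "
          f"1:{screen_magnification(mp, width):.1f}")
```

Rounded to the magnifications Lr actually offers, these come out to the 1:2, 1:1, 1:1, and 1:2 figures given above.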

With matte paper, you won’t see as much detail in the print as with Epson Exhibition Fiber or another glossy paper, so you’d want a coarser screen resolution to simulate what you’d see on the printer. I haven’t done any testing on matte paper, but I’d start with screen resolutions a factor of two coarser than the ones above, or 1:3 for a 19 inch wide print from a 36 MP camera.

If your monitor is coarser or finer than my 100 ppi one, you’ll want to make some adjustments.

This is all a bit of guesswork, but I hope what you’ve read above may provide a good starting point for your own experiments.


Photography meets digital computer technology. Photography wins — most of the time.
