Technical aspects of street photography

I’ve been participating in a discussion about gear for street photography. My position has been, and is, that you can do street photography with just about any gear, though some kinds of street work favor some kinds of gear. The discussion has broadened a bit beyond the – for me, not particularly interesting – subject of gear to a subject that I think is much more important: just how do you do street photography successfully?


This post is going to be about how I do street photography, or rather, how I used to do it. I think that street work is a young person’s game, and I am definitely on the creaky side of life. I did most of my street work in the 1970s through the 1990s, which was an era in which cameras were not met with the hostility, fear, and suspicion that seems to accompany them in cities today, and one in which law enforcement and private guard services were not so quick to bar photographers from making images, regardless of what the law actually says.


Before I get started, I’ll need to define street photography. You could just say it’s photography done in the street, and I’d have a hard time arguing with you, but I’m going to use a narrower definition here. What I’m talking about is people-centric work. The people are strangers to the photographer, and they are not posing, and certainly not posed by, and often unaware of, the photographer. The places are outdoor public spaces, or occasionally large covered spaces. If photographer-controlled light is involved, and it usually isn’t, it is minimal, like an on-camera fill flash. Street photography also has a traditional feel to it. Technical quality is not as important as in, say, landscapes. Tilted horizons, blown highlights, grain/noise, out-of-focus blobs, and awkward framing are all acceptable, even desirable to some.


Henri Cartier-Bresson is probably the archetypal street photographer, but there are many who plowed that field. Robert Frank, Garry Winogrand, and Lee Friedlander are also exemplars of the genre. You could argue that much of Weegee’s work was street photography, but it is atypical, and seems to me to be more reportage (there is a broad overlap; much of Cartier-Bresson’s early work was straight reportage). Walker Evans is not a dead-center street photographer, either, opting for more distance and a more formal quality than is typical.


In the 1970s and 80s, I attempted traditional street photography with modest success. Towards the end of the 80s, that evolved into the work in Alone in a Crowd, which started within the street photography tent, evolved into something more formal, and, at the end, returned to it in some, if not most, respects.


How did I go about making street photographs? First off, I’m going to tell you how I didn’t operate. There is a widely-held belief that, in order to be a good street photographer, you have to react rapidly. Maybe that’s true for some, but not for me. For me, the key thing was that you have to be ready to make the exposure all the time. Then, when something happens, it’s just a matter of raising your camera to your eye and tripping the shutter. You don’t have to do that especially quickly. In fact, you don’t want to do it fast, because rapid motion sets off alarm bells in people.

42nd Street Station, NYC, 1992

Okay, how do you make sure you’re ready to make an image?

The zeroth thing is that you need to be paying attention all the time. There’s a mental state that I used to get myself into when I was doing street work. It involved trying to be open to everything going on around me. It was difficult to get there immediately, but after half an hour or so, I could feel it happening. And here’s a bonus. It’s quite a pleasant state of mind – calm, clear, and centered.

Hailstorm, National Monument (to the Napoleonic Dead), Edinburgh

The first thing is that your camera needs to be in your hands. Not in a case. Not slung over your shoulder. Not hanging around your neck like a necklace. Ready.


The second thing is focusing. Your camera needs to be preset to the right distance. That means that you’re always thinking about what might happen and adjusting the focus distance. This is best done with lenses that have clear distance markings and short focus throws, like the lenses made for rangefinder cameras.

In addition, you want to think not just in terms of the point of best focus, but of a range of acceptable focus. This is facilitated by using a lens that has large, clear depth of field markings on the barrel. Most lenses made for rangefinder cameras qualify. Leica R-mount lenses are often good, too. The DOF markings on the ring are most likely computed for a circle of confusion (CoC) of 30 micrometers (µm). That’s a lot of blur by modern standards. If you want to be really conservative, to get a CoC of 1.5 pixels with a 42 MP camera, or about 7.5 µm, we need to use the ticks four stops wider than the aperture we’ve set. If we set f/16, we use the f/4 ticks. If we set f/11, we use the f/2.8 ticks. Probably, you’re better off with something in between. I generally used two or three stops of DOF derating in the street.
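
As a sketch of that arithmetic (my helper, not anything engraved on a lens): reading the DOF scale ticks for an aperture k stops wider than the one you actually set shrinks the effective CoC by a factor of 2^(k/2), so the derating in stops is 2·log2(CoC_marked / CoC_target).

```python
import math

def derating_stops(coc_marked_um=30.0, coc_target_um=7.5):
    """How many stops wider to read the DOF scale ticks.

    Assumes the engraved marks were computed for coc_marked_um (30 um is
    typical, but it varies by maker).  Reading the ticks k stops wider than
    the set aperture shrinks the effective CoC by 2**(k/2).
    """
    return 2 * math.log2(coc_marked_um / coc_target_um)

print(derating_stops(30, 7.5))   # 4.0 -> set f/16, read the f/4 ticks
print(derating_stops(30, 15))    # 2.0 -> the two-stop street derating
```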

You’re always fiddling with your camera. It soon becomes second nature to think about what might be happening next and adjust your focus accordingly.


The third thing is exposure. You don’t want to be messing with the aperture, since you picked something that would give you the DOF you want when you figured out where to set the focus. That leaves shutter speed, and here the digital and chemical photography worlds diverge, at least as of 2016. In the film world, you were always worried about camera or subject motion if the light wasn’t bright. In the digital world, with a camera that’s quasi-ISOless, you pick a shutter speed that won’t clip the highlights in the brightest conditions you expect to encounter, and leave the camera set to that shutter speed as the light drops by as much as four or five stops. Then you change the ISO by four or five stops, and start over. When I’m operating in this mode, I often set the ISO knob of the a7RII to either 100 or 640, which are the minimum-analog-gain settings for each of the camera’s two conversion gains.
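
Here’s a minimal sketch of that decision logic, assuming (as on the a7RII) base ISOs of 100 and 640 for the two conversion gains and a tolerance of four stops of push before switching; the function name and threshold are mine, purely for illustration.

```python
def pick_iso(light_drop_stops, base_isos=(100, 640), max_push_stops=4):
    """Keep the highlight-protecting shutter speed fixed and choose a base ISO.

    light_drop_stops: how far the light has fallen below the brightest
    conditions the shutter speed was chosen for.  Up to max_push_stops of
    underexposure is left to be pushed in raw conversion; beyond that,
    switch to the higher-conversion-gain base ISO and start over.
    """
    return base_isos[0] if light_drop_stops <= max_push_stops else base_isos[1]

for drop in (0, 2, 4, 5, 6):
    print(f"{drop} stops down -> ISO {pick_iso(drop)}")
```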


You can also put the camera in aperture-priority autoexposure mode, and use the exposure compensation knob. That’s more like the film situation, and I find it less satisfying, and more likely to get you into a situation where, because the shutter speeds are dropping, you need to rethink your chosen DOF strategy.

That’s it for the technical side of street photography, at least as practiced by me. Next, the where and when.

A book report — spot color proofing

This is part of a series about my experiences in publishing a book. The series starts here.

In my last post, I reported on my worries that the captions and footers weren’t sufficiently legible in the Matchprint proofs. I finished reviewing the proofs, and dropped them by Jerry’s offices today. We talked about the captions.

I suggested that they be made darker. He thought that they should be darker than they looked in the Matchprint. We then suspected that the Matchprint wasn’t doing a good job of matching the spot color on the printed page.

Jerry pulled out a Pantone swatch. We compared it to the Matchprint. It was darker than the Matchprint.

Then we pulled out the press proofs. There was a large swath of the spot color there. We compared it to the Pantone swatch. They matched. So the Matchprint was off. And not just a little.

I think the captions are going to be just fine.

A book report — Matchprints & mechanicals

This is part of a series about my experiences in publishing a book. The series starts here.

I met with Jerry Takigawa yesterday, and he showed me several things that Hemlock had provided:

  • A complete mechanical proof of the book, with low resolution and inaccurate color.
  • A mechanical proof of the dust cover, showing how the French fold will work, and the die-cut rounded edges that I added to keep the corners from snagging.
  • A set of all the book pages printed using the Kodak Matchprint contract proofing process (Kodak licensed the name from 3M some time ago, and the current version has nothing to do with 3M’s four-layer technology).
  • A Matchprint proof of the dust jacket.
  • A set of proofs showing the locations where the varnish will be added.
  • A proof of the dust jacket that had been laminated with the same material we’re going to use on the real dust jacket.

Basically, things looked pretty good. Jerry and I agreed that he’d find someone to do one last proofreading pass; nothing like having a thousand copies of some oopsie that makes your face red every time you see it.

The die-cut rounding on the dust jacket looks like it will do what I had hoped. Jerry was a little worried that it was too close to my headshot, but I think it’ll be fine.

I checked the mechanicals briefly, then asked Jerry to get them to the proofreader. I took the Matchprints home.

Today, I went over the Matchprints. There were three images that needed some more work. On one of them, an area near the top that looked fine with the crop that I normally use looked weird when the image was set up for a full-bleed page. I cropped it in Lr, then converted it to CMYK. There were two images that just lost their oomph in the Matchprint at the size they were going to be in the book. I fixed them in Lr, and converted them to CMYK.

My next worry is the spot-color type that we’re using for captions and the footer. The text of both of these is pretty small, and it’s hard to read when it’s gray. We’re currently using a Pantone black/white ink mix at 50% of each. I asked Jerry what he thought about taking that up to 60% black, 40% white. That will affect some other areas of the book where Jerry is using that spot color as a graphic element. It will also affect, in a good way, I think, a few places in the book where there is white text against that spot color used as a background.

I also asked Jerry if we could loosen up the leading in the captions to make them more readable.

There were a couple of places where the text in the footer looked like the letters had different weighting. I expect that this is due to the Matchprint’s inability to precisely simulate spot colors, but I asked Jerry to talk to Hemlock and see what they thought.

 

Depth of Field summary — part 2

This is a continuation of a report on new ways to look at depth of field. The series starts here:

A new way to look at depth of field

The shape of the curves. Let’s take a closer look at the DOF curves for the simulated Otus lens focused at three meters that I showed in the first part of this summary:

3m focus2

I already remarked on how narrow the really sharp parts are at wide apertures. Now I’d like to direct your attention to the shape of the curves. They are, in general, bell-shaped, like the probability density function of a Gaussian distribution. From a DOF perspective, that means there’s a region near the focus point where the curves are flatter, meaning things are pretty sharp. As you get away from that region, the curves fall away steeply, meaning the image gets real soft real fast. And finally, when the object that you’re looking at gets quite far from the point of focus, distance stops making so much difference in the sharpness. That’s the good news. The bad news is that the image is getting pretty darned soft by then.
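
You can get a feel for why the curves look that way from defocus geometry alone. Here’s a minimal sketch (pure geometric blur for a 55 mm lens at f/2.8 focused at 3 m; the curves above come from a full simulation that also includes aberrations and diffraction). Near the focus point the blur grows slowly, then quickly, and far away it levels off toward an asymptote, which is why sharpness flattens out again, at a low level, far from the focused distance.

```python
f = 0.055          # focal length, m (55 mm)
N = 2.8            # f-number
s = 3.0            # focused distance, m
A = f / N          # aperture diameter, m

def blur_diameter(x):
    """Geometric blur circle diameter at the sensor, in meters,
    for a subject at distance x meters (thin-lens model)."""
    return A * f * abs(x - s) / (x * (s - f))

for x in (2.0, 2.5, 3.0, 3.5, 4.0, 6.0, 100.0):
    print(f"{x:6.1f} m : {blur_diameter(x) * 1e6:6.1f} um")
# As x grows, the blur approaches A*f/(s-f), about 370 um in this example.
```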

Sharp lenses have less relative DOF at the same f-stop, but there’s a workaround that reverses the situation. Consider these three sets of curves with three different lenses focused at 3 meters:

Diffraction-limited lens

Otus aberrations

Nikon aberrations

The top set of curves is for an ideal, diffraction-limited lens on a Sony a7RII. The next set down is for a 55mm lens with the same aberrations as the 85/1.4 Otus. The bottom set is for a 55mm lens with the same aberrations as the 85/1.4G Nikon.

You can see that, at the wider apertures where diffraction isn’t the long pole in the tent, the better the lens, the peakier and narrower the curves. I wish it weren’t that way, but if you want the sharpness your fancy lens was made to deliver, you’re going to have to find flatter subjects than you would if you were using a lesser lens.

Here’s the workaround. Just stop down a bit more. You can see that the diffraction limited lens at f/5.6 has more DOF than the other lenses at apertures that are nearly as sharp. It’s a close call, but it looks like that’s true at f/8, too.

The same kind of thing is true for the Otus and the Nikon, considered as a pair. The Otus at f/8 is almost as sharp as the Nikon at f/4 and f/5.6, and has more DOF.

Of course, the downside is that you don’t get the center sharpness you’d have gotten if you opened the good lens up farther.

OOF LED test won’t detect tilt

A while back, I posted the following simple lens decentering test, using an out-of-focus LED:

Simple decentering test

There have been reports of this test not detecting lenses whose field curvature is not symmetric about the lens axis, but has a tilt component. There is no reason to think that the OOF LED test can detect tilt, but I thought I’d run a test anyway.

I mounted a Nikon 24mm f/3.5E PC-Nikkor on a Sony a7RII, opened the diaphragm up all the way, and made three OOF LED images with the lens centered, tilted 5 degrees right, and tilted 5 degrees left.

Here are the (cropped) images:

_DSC1699

_DSC1703

_DSC1701

You can see that the OOF image is shifted, but not distorted, by the tilting.

And, by the way, I don’t see any reason why the OOF LED decentering test would flag as bad a lens all of whose elements are decentered by precisely the same amount in precisely the same direction.

 

Depth of Field summary — part 1

This is a continuation of a report on new ways to look at depth of field. The series starts here:

A new way to look at depth of field

I’ve been derelict in publishing a summary of all my work on depth of field. I apologize. The reason is that I find it difficult to see the forest here, although I sure understand the trees. I’m just going to plunge in and see where it goes.

My first conclusion about DOF is that, with modern cameras and lenses, there just isn’t much of it. Consider these curves for our simulated 55mm f/1.4 lens with the same aberrations as the Otus 85 f/1.4 mounted on a simulated Sony a7RII and focused at infinity.

otus inf

At f/1.4, we’re starting to see some degradation with the object of interest at about 800 m. At f/2.8, it’s more like 300 m. If you’re more or less critical, these numbers could change. Even at f/11, it’s close to 50 meters. Yikes!

The way to take this chart and use it for other focal length lenses than 55 mm is to calculate the following number:

(f/55)^2

where f is the focal length in mm of the lens that you want to use.

With a 200mm f/2.8 lens that’s as good as the Otus 55, the multiplier is about 13. Thus the distance where you begin to lose subject sharpness with the lens focused at infinity is no longer 300m; it’s more like 4 km! With a 500 f/4, the multiplier is about 80, and we multiply the 55mm distance of about 200m by that to get — wait for it — 16 km! I guess it’s a good thing — in a perverse way — that 500mm lenses aren’t as sharp as Otus ones.
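
For the record, here’s that multiplier arithmetic, using the (f/55)^2 scaling above and the approximate 55 mm distances from the chart:

```python
# (focal length in mm, approximate 55 mm distance in m at the same f-number)
cases = ((200, 300), (500, 200))

for f_mm, base_m in cases:
    mult = (f_mm / 55) ** 2
    print(f"{f_mm} mm: multiplier {mult:.0f}, distance ~{base_m * mult / 1000:.1f} km")
# 200 mm: multiplier 13, distance ~4.0 km
# 500 mm: multiplier 83, distance ~16.5 km
```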

Another thing I’ve discovered is that, because of lens aberrations, lenses used near wide open sometimes have more DOF than you’d think.

Take a look at this graph, for a 55 mm lens focused at 3 meters.

3m focus

Let’s look a little closer:

3m focus2

Now you can see that, in terms of loss of peak sharpness, f/1.4 and f/2.8 have about the same DOF.

The same is true of diffraction, in theory. It gives narrow apertures more DOF than you’d see without it. However, that’s a little harder to see on these graphs because the defocusing DOF is so great. I’ll turn off the diffraction simulation and run the immediately-above curve again:

3m focus3 no dif

Virtually no difference.

Note the tiny amount of DOF at the wide apertures of f/1.4 through f/2.8. Maybe 50 mm out of 3 meters! And 3 meters isn’t even particularly close for a 55 mm lens.

What if we get to intimate portrait distance, say 1 meter?

1m focus

 

At the three widest f-stops, we’re talking about 5 mm or so of DOF!
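
As a rough cross-check of that 5 mm figure, here’s the simple thin-lens DOF approximation, taking the three widest stops as f/1.4, f/2, and f/2.8 and the CoC as roughly one 4.5 µm a7RII pixel (both my assumptions; the chart above comes from the full MTF simulation). It lands in the same few-millimeter range.

```python
f = 0.055       # focal length, m
c = 4.5e-6      # CoC, m (about one a7RII pixel -- my assumption)
s = 1.0         # focus distance, m

for N in (1.4, 2.0, 2.8):
    dof = 2 * N * c * s**2 / f**2   # valid when s is far below the hyperfocal distance
    print(f"f/{N}: total DOF ~ {dof * 1000:.1f} mm")
# f/1.4: ~4.2 mm, f/2.0: ~6.0 mm, f/2.8: ~8.3 mm
```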

This has implications for focusing, and for planning. Focus stacking doesn’t look so much like a corner-case solution. Too bad you can’t focus-stack a portrait.

A book report — press proofs

This is part of a series about my experiences in publishing a book. The series starts here.

I’ve reported previously on my attempts to proof the images for the book on my Epson 4900:

A book report — hard proofing

There have been enough confusing communications about the images that I created for the book in Coated GRACoL 2006 (ISO 12647-2:2004) CMYK that I wanted to get positive confirmation that the images were going to be right. The gold standard is a press proof, in which the printer makes proofs with the same paper that will be in the book, on the same press that will print the book, using the same inks and varnishes that will be used. It costs a couple of thousand bucks, but I really didn’t want to end up with a garage full of books with images I didn’t really like. That would be a real pickle, because I couldn’t hold my head up if people thought those were my best work, and thus I’d be stuck with the books. Or, almost as bad, catching the problems on the press during the actual printing, calling it off, and eating a lot of chargebacks.

Yesterday, I went down to Jerry’s office to review the proofs. I’d picked the most problematic images, and Hemlock had laid out a full-sheet (28×40 inch) page with those images, the front dust-cover, part of a text page, and a section of one of the saturated black intersectional pages.

When I got to Jerry’s office, the first thing he showed me is what Hemlock calls the “MatchPrint” page. This is not a real MatchPrint, which is a 3M trademark for a process they developed in the 1970s for simulating press output with no printing press, and has the dot structure of an image off a press. 3M has sold the trademark to Kodak, who uses it to brand their inkjet proofing process. So the Matchprint that Jerry showed me is kind of like the proofs that I created using my Epson 4900 and some color transforms, but with the stamp of approval of Kodak (or whoever Kodak sold it to; I can’t keep track). Hemlock had used a fairly glossy paper. I suspect that they put the same paper in the MatchPrint printer no matter what coated stock they’re printing on.

The MatchPrint looked a lot like my proofs, but, since it was glossier, the maximum density (Dmax) appeared to be higher. I didn’t take my densitometer to Jerry’s office, so I don’t really know for sure, but I did take some of my proofs and compared them to the MatchPrint side by side.

Already my confidence was bolstered. The two sets of inkjet prints were very close.

Then I looked at the press proofs. They were all printed using the same inkset, but one had no varnish at all, one had a varnish over the whole page, and one had what we thought we wanted, which was an ultraviolet-curing spot gloss varnish just over the images and the black section separators. The no-varnish pages looked dull. We expected that; had we gone with no varnish, we’d have used a paper finish with more gloss.

The varnish-all-over images looked better, but still too dull. The spot-varnished (no varnish except where there’s an image) page was the best. I thought the images were still a little duller than would have been ideal, but they were definitely acceptable. They only looked dull when compared to the MatchPrint and my proofs. The colors were very good. There were some differences between the press proof and the inkjet ones, but they were small, and most of the time, I actually liked the press proof colors better.

We are using one spot color, a Pantone 50%ish gray that we use for some pages opposite images and almost all the text — black would be too distracting near the images — and we had lots of that to look at in the press proofs. I had worried about the text being too small to read comfortably with the reduced contrast of the gray ink, but it was fine.

When I got the press proof home, I measured the Dmax. The unvarnished areas measured 1.74, and the varnished ones, 1.71 for the supersaturated (so called “rich black”) pages. I know that’s kind of strange, but the varnished blacks looked darker than the unvarnished ones, so I’m not going to worry about it.

It is true that the blacks in the press proofs measure quite a bit less than Hemlock’s advised 1.88. I’m putting that down to the paper. Although I never got a straight answer out of Hemlock on the paper they used to make that number, I’m guessing it was pretty darned glossy, so as to show their work in the best possible light.

Next big event is an inkjet proof of the whole darned book.

How to decrease photon noise

Sometimes pithy photographic explanations, although valid and meaningful for the cognoscenti, can need some unpacking for most folk. Consider that Ansel Adams and others have used whole books, with charts, tables, graphs, and examples, just to say “expose for the shadows, develop for the highlights.” Thus it is — possibly, anyway — with photon noise.

This post is about the root cause of photon noise in digital captures. It ignores Pixel Response Non-Uniformity (PRNU), and read noise, pattern and otherwise. In many photographic situations, photon noise is the main source  of image noise.

Most of you probably know all I’m about to say, but probably some of you don’t. I’m going to try and say it with the minimum of math and technobabble, but I’ve included enough for those who are experts to see some details. If you don’t understand something, just keep reading; you probably don’t have to understand it to get my point.

Imagine that you have a 24 by 36 mm monochromatic sensor.  Let’s say the fill factor is 100%, and there are no micro lenses.  The quantum efficiency of the sensor for D55 light is 50%.  Let’s mount that sensor in a camera, and put a perfect lens on the camera.  This lens is so perfect that there is no diffraction.  Let’s put perfect electronics in our camera that allow us to count every photoelectron with zero read noise.  Now let’s focus our perfect lens to infinity and aim it at a perfect point source of D55 light at a distance that’s its focal length away from the point source.  Thus the light landing on the sensor is perfectly collimated.  Let’s set the lens to f/8.  Now let’s say that the pixel pitch of our sensor is 10 µm.  That means that we have a 2400 by 3600 pixel sensor, or 8.64 megapixels.  Then let’s adjust the intensity of our light source so that 691.2 billion photons per second fall on the sensor.  We’re going to leave the light source at that level for the rest of this thought experiment. Because of the quantum efficiency of the sensor, that means that, on average, each pixel in our camera counts 40,000 electrons per second.
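
If you want to check that last number, the per-pixel rate follows directly from the photon rate, the quantum efficiency, and the pixel count:

```python
photons_per_second = 691.2e9     # photons landing on the whole sensor each second
quantum_efficiency = 0.5
pixels = 2400 * 3600             # 10 um pitch on a 24 x 36 mm sensor

electrons_per_pixel_per_second = photons_per_second * quantum_efficiency / pixels
print(electrons_per_pixel_per_second)    # 40000.0
```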

Image A: Let’s set the shutter in our camera to one second, and take a picture. The average electron count in our picture is 40,000, and the standard deviation is the square root of that, or 200. The signal-to-noise ratio (SNR) is 40,000 over 200, or 200. The spectrum of the noise is white: all frequencies are equally represented.

Image B: Now let us mentally reconfigure our sensor so that it has the same resolution in pixels, 2400 by 3600, but a pixel pitch of 5 µm. The physical size of our sensor is now 12 by 18 mm, and, because it’s smaller, the number of photons falling on our sensor in one second is ¼ the number that fell on our larger sensor, or 172.8 billion per second. Thus, on average, only 10,000 photoelectrons are produced in each pixel, and the standard deviation is the square root of that, or 100. The signal-to-noise ratio is 10,000 divided by 100, or 100. The spectrum of the noise is white.

Image C: Let’s make a four-second exposure with our small physical size sensor. The average electron count in our picture is 40,000, and the standard deviation is the square root of that, or 200. The signal-to-noise ratio is 40,000 over 200, or 200. The spectrum of the noise is white: all frequencies are equally represented. Based on the statistics or the spatial frequency content, there is no way to tell this image from the first image we made, the one with a one-second exposure and a physically larger sensor. On average, the same number of photons fell on each pixel of both sensors, and that’s all that matters.

Image D: Now let’s reconfigure the 24x36mm sensor so that the pitch is 5 µm (it’s now a 4800×7200 pixel sensor) and make a one-second exposure. On average, 10,000 photoelectrons are produced in each pixel, and the standard deviation is the square root of that, or 100. The signal-to-noise ratio is 10,000 divided by 100, or 100. The spectrum of the noise is white. The image looks just like four of Image B set side by side.

Image E: Let’s take Image D, and downsample it to 2400×3600 by adding together the electron count of all the odd-numbered (assuming the indices start with one) pixels in each row and column to the values of the pixels to their immediate right, directly below them, and diagonally below and to the right of them. For those skilled in the art of image processing, this amounts to convolving the image with a 2×2 box filter, resampling using nearest neighbor, and trimming the result. The average electron count in our picture is 40,000, and the standard deviation is the square root of that, or 200. The signal-to-noise ratio is 40,000 over 200, or 200. The spectrum of the noise is white: all frequencies are equally represented. Based on the statistics or the spatial frequency content, there is no way to tell this image from Image A. On average, the same number of photons were used to create each pixel of Image A and Image E, and that’s all that matters.

Image F: Let’s downsample Image D to 2400×3600 by nearest neighbor. The image is indistinguishable from Image B, both in statistics and spatial frequency content. Each pixel saw on average one-quarter the number of photons as those in image A, so its SNR is half that of Image A.

Image G: Let’s downsample Image D to 2400×3600 by some other method: bilinear interpolation, bicubic interpolation, Lanczos, or something else. The standard deviation of the resultant image, and thus the SNR, will depend on the algorithm used. The spatial frequency content  of the resultant image, will also depend on the algorithm used.

The constant throughout all this is that the number of photons counted determines the noise and the SNR. Downsizing from a similarly sized and illuminated sensor with a finer pitch can replicate images captured at larger pitches only if you use a particular downsizing algorithm (the 2×2 binning of Image E), one that is usually not used in photography.
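
Here’s a minimal numpy sketch of Images D, E, and F (scaled down from 4800×7200 to keep it quick; the statistics work out the same). The binned image recovers the SNR of Image A, while the nearest-neighbor downsample keeps the SNR of Image B.

```python
import numpy as np

rng = np.random.default_rng(0)

def snr(img):
    return img.mean() / img.std()

# Image D (scaled down): fine-pitch sensor, mean 10,000 electrons per pixel,
# photon noise only, so the counts are Poisson distributed.
img_d = rng.poisson(10_000, size=(1200, 1800)).astype(float)
print("D:", round(snr(img_d)))     # ~100

# Image E: sum each 2x2 block of electron counts (the box-filter binning).
img_e = img_d.reshape(600, 2, 900, 2).sum(axis=(1, 3))
print("E:", round(snr(img_e)))     # ~200, same as Image A

# Image F: nearest-neighbor downsample, keeping every other pixel.
img_f = img_d[::2, ::2]
print("F:", round(snr(img_f)))     # ~100, same as Image B
```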

Eric Fossum, inventor of the CMOS image sensor,  summed it up succinctly: “The only way to increase SNR in counting things that are described by Poisson statistics is to increase the number of things that are counted. Increasing area at constant flux or increasing time at constant flux are two ways to do that. Increasing the flux for constant time and area also works.”

Aliasing visibility and PS downsampling

I interrupt the series of posts on DOF for this one that proves a point that most of my blog readers already know: in general downsampling can cause aliasing, and in particular the downsampling algorithms in Photoshop do cause aliasing, at least some of the time.

I’ve addressed this issue before, with spectral plots showing the details of what’s going on, but there is a discussion going on on DPR to which this point is germane, some of the participants have limited image-processing skills, and DPR damages posted images to the point that the visual comparisons I want to make are confused.

To make sure that you’re seeing the images properly here, click on each in turn and make sure your browser zooming is set to 100%. Otherwise, your browser will introduce aliasing. If you see aliasing in the first image, then something is wrong.

I started with this 1000×1000 pixel image:

Rings

I downsampled it to 500×500 using several of the Photoshop (Ps) algorithms.

Nearest Neighbor

Bilinear

Bicubic

Bicubic Sharper

In all cases aliasing is visible as false patterns. There are false circles, and the rings at the edges go in the wrong direction. It is worst with nearest neighbor. That is expected; nearest neighbor has no lowpass filtering qualities, and there are many spatial frequencies in the original image that are beyond the Nyquist frequency of the downsampled image. By looking at the corners, you can see that nearest neighbor has not blurred the image at all.

That’s not the case with the other three algorithms. Bilinear offers the most detail of those at the edges and corners, but you pay for that with more aliasing. Bicubic is next, and then (surprisingly, considering the name) bicubic sharper.
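
If you want to reproduce the effect without Photoshop, here’s a minimal sketch using a synthetic zone plate and Pillow’s resamplers (these are not the Ps algorithms, and the file names are mine, but the nearest-neighbor case behaves the same way):

```python
import numpy as np
from PIL import Image

# A zone plate: spatial frequency grows with radius, staying just under the
# Nyquist limit of the 1000x1000 original but well beyond that of a 500x500
# downsample, so an unfiltered downsample has to alias.
n = 1000
y, x = np.mgrid[0:n, 0:n] - n / 2
zone = 0.5 + 0.5 * np.cos(np.pi * (x**2 + y**2) / 1500)
img = Image.fromarray(np.uint8(zone * 255))

# Nearest neighbor: no lowpass filtering at all, so the false rings are worst.
img.resize((500, 500), Image.NEAREST).save("rings_nearest.png")

# A windowed-sinc resampler filters much more aggressively; the aliasing is
# greatly reduced, though not eliminated.
img.resize((500, 500), Image.LANCZOS).save("rings_lanczos.png")
```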

Depth of field and the web

This is a continuation of a report on new ways to look at depth of field. The series starts here:

A new way to look at depth of field

Many of us use our fancy cameras occasionally to produce low-resolution images for the web. We should have tons of DOF in that case, right? And lens quality shouldn’t matter? And how does that object-field stuff work in that case?

I’ve got answers.

I set up the lens sharpness modeler for a 55mm f/1.4 lens  with the Nikon aberrations focused at infinity, and a sensor pixel pitch of 50um, which gives us a 720×480 pixel image from our modeled full frame camera.

Here’s what we get, in first the image plane and then the object field:

Image Plane

Object Field

What’s different from the full-res images?

First off, the resolution, as measured in the image plane as MTF50 cycles per picture height, and in the object field as MTF50 cycles per meter, is much lower. No surprise there.

Next, diffraction and lens aberrations make little difference; in the image plane, everything is just about as sharp at infinity, so we can stop down as much as we’d like with impunity.

Also, as predicted, there’s a ton more DOF, with hyperfocal distances for narrow f-stops dropping to below ten meters. Oh, you expected even more? To be frank, so did I.
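
For scale, here’s a simple geometric hyperfocal calculation with the CoC set to one 50 µm web pixel (my assumption; the figures above come from the MTF50 model, so the numbers won’t match exactly):

```python
f = 0.055        # focal length, m
coc = 50e-6      # CoC, m: one pixel at the 50 um pitch used for the web image

for N in (8, 11, 16):
    H = f**2 / (N * coc) + f     # classic hyperfocal distance formula
    print(f"f/{N}: hyperfocal ~ {H:.1f} m")
# f/8 ~ 7.6 m, f/11 ~ 5.6 m, f/16 ~ 3.8 m
```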

But the behavior in the object field that we observed before, where the sharpness starts to fall well before the image plane measures have gotten to the good part, still happens.

With the Otus lens model:

Image Plane

Object Field

Pretty much the same thing; you don’t need an expensive lens for the web.

No surprises here, except maybe the way the object field behaves.

If we focus at 10 meters:

 

Image plane

The DOF at narrow f-stops is so great that we get hardly any falloff in sharpness at infinity, and improved near-object image plane sharpness.

In the object field:

 

 

Object field

At f/8, f/11, and f/16, we actually get closer to the flat-with-distance object field sharpness that was predicted by Merklinger than we did with the lens focused at infinity.

 
