Segregating Internet discussion

I once made a financial presentation to a nonprofit board. Afterwards, someone congratulated me on understanding the financial position of the institution so well. I told her that I hoped that now she did, too. She said, “Oh, no, I’m not a numbers person.” I told her that I understood that it was difficult to look at dense columns of numbers and gain meaning, and that’s why I presented most of the material as graphs. She smiled at me, and said “I don’t do graphs, either.” There was no sense that she regarded her rejection of a quantitative understanding of the world as any kind of impediment at all. In fact, there seemed to be a dismissive attitude toward numbers and the people who employed them.

I’ve encountered this attitude often, although not usually in such striking form. I’ve never seen its opposite: an embracing of numeric analysis and rejection of qualitative understanding. No one has ever said to me, “I’m just not a words person.”

I know that not everyone is equally comfortable and competent in both worlds. Engineers, for example, are by reputation, and sometimes in actuality, deficient in their ability to write and speak clearly and persuasively. When I was running engineering at Rolm, I noticed that internal memos and reports were not as clear or pleasant to read as they could have been, so I started to hand out copies of Strunk and White to all newly-promoted engineers. No one ever said, “We don’t need no stinking style book.” Maybe they were just too polite. I never encountered an engineer who was proud of having only rudimentary language skills.

What’s all this got to do with photography? The same attitudes are in play there. Photography is a technical pursuit, and, for many, it’s an artistic activity. Some people get carried away with their love of the technology, and produce mundane art, or, occasionally, no art at all. Some reject the technology even as they employ it, and sometimes their art suffers for that dismissal. This is not a new phenomenon; it was with us in the film era. There were photographers who eschewed light meters, uncapping the lens until they felt enough light enter the camera. There were people who said that Ansel Adams’ Zone System was unnecessarily technical, useless nonsense; all a photographer had to do was expose for the shadows and develop for the highlights, and any photographer worth his salt could do that without resorting to numbers.

On the ‘net’s photographic fora people have occasionally complained that postings with what they considered to be overly technical content have ruined their enjoyment. Rather than just pass by the posts that they consider uninteresting, they protest their existence. Just as in the larger world, no one seems to do the opposite.

Now, over on Lula, we have the next step being proposed: banning technical discussion from several sub-fora. Aside from its unworkability – is referring to the focal length or f-stop of a lens technical? – I have serious problems with this. I think people should honor all approaches to photography, and that people who favor one or the other should at least have the opportunity to see what someone with another orientation thinks.

We may not achieve universal enlightenment, join hands, dance around the computer display and sing Kumbaya, but maybe some understanding, appreciation, and tolerance will seep into the mix. Oh, wait. That’s crazy talk. This is the Internet.

And the juror picked…

…this one:

Betterlight_00071

That was a surprise to me. Although visually striking, it has the second least intellectual content of the six, and, for me, bears repeated viewings only fairly well. I like it, of course, or I wouldn’t have submitted it, but it’s not my favorite. However, there’s one element that I just love about this image: the way the afterglow of the sunset near the horizon, with time as the horizontal dimension, mimics how the effect looks in two spatial dimensions. The combination of that with the rest of the image, which is so aggressively unreal looking, makes the picture for me.

I am once more reminded of how subjective the image judging biz is. There might be another lesson here. When you’re judging thousands of images, you’ve got to go with your first impression, and this is an image that makes a big first impression.

In the back of my mind is the fear that I’m the only one who is fascinated enough by these slit scans that I can stare at an image from this series for a long time, sorting out the weirdnesses.

A portfolio exhibition

The Center for Photographic Art has an annual juried exhibition. This year, it is a portfolio event. Not all the exhibited work was to be portfolios — that privilege was reserved for a favored few — but the judging was on the basis of submitted portfolios of eight images.

I submitted the following images from the Timescapes (slit-scan) series:

Betterlight_00036-Edit

Betterlight_00041-Matlab6cr

Betterlight_00048-Edit-3

Betterlight_00071

Betterlight_00189

Betterlight_0304 (2)

lum flat corrected 8 a

Outside test_33-Edit

The juror picked one image for the show. Can you guess which one?

I’ll tell you tomorrow.


Read noise patterning — summary

I’m wrapping up my work on spatial frequency analysis of read noise, and in this post, I’ll summarize what I’ve found and provide links to posts with the details.

In all the cameras that I tested:

The spatial spectrum of the noise when the camera is exposed to a dark field is not white, but has higher energy at lower spatial frequencies than white noise. In this regard, the dark-field noise is different from photon noise (aka shot noise, aka Poisson noise) and photo response non-uniformity (PRNU).

In this post, and in many of the ones that preceded it, I refer to dark-field noise and read noise interchangeably. Though I wish it were the case, this is not strictly accurate. In some cameras, the black point is subtracted from the real raw readings before the “raw” file is written. This process removes half the read noise, chopping all counts below the black point off and assigning them the value of the black point. As it turns out, aside from making the dark field images look twice as good as they really should, this practice has remarkably little effect on the spatial frequency characteristics of dark-field images.
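If you want to see how black-point clipping distorts dark-field measurements, here’s a toy simulation in Python/numpy. The black point and noise level are made-up numbers for illustration, not measurements from any particular camera:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical values -- not from any particular camera.
black_point = 600   # black point, in raw counts (ADU)
sigma = 3.0         # true read noise standard deviation, ADU

# Simulated dark-field raw data: Gaussian read noise centered on the black point.
raw = black_point + rng.normal(0.0, sigma, size=1_000_000)

# Black-point subtraction with clipping: every count below the black point
# is assigned the value of the black point.
clipped = np.maximum(raw, black_point)

# The clipped data measures at roughly 0.58 sigma -- the lower half of the
# distribution has been collapsed onto a single value, so the dark field
# looks much cleaner than the sensor really is.
print(raw.std(), clipped.std())
```

For a clipped zero-mean Gaussian, the surviving standard deviation works out to about 0.58 of the true value, which is why clipped dark-field images look roughly twice as good as they should.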

The low-frequency component of the read noise is more visually objectionable than white noise. In fact, with most cameras it’s downright ugly. If you click on the links to each camera above, the next page shows images of that camera’s read noise after low-pass filters of various shapes and sizes.

In the cameras I tested, the low frequency content of the read noise is worse from top to bottom than side to side, or vice-versa. This seems to depend on the design of the sensor, and gives rise to the image defect that photographers call banding.

Fortunately, in the cameras that I tested, the level of the low-frequency part of the read noise is far enough down that you don’t see it in normal images. In the case of the Nikon D810, it’s hard to see it even if you go looking for it.

In the Nikon D810, almost all the low-frequency read noise is the same from frame to frame, and can be subtracted out in post processing. In the case of the Sony alpha 7S, that is unfortunately not true.

Although read noise does change somewhat with shutter speeds, at least in the case of the Nikon D810, it hardly changes at all in the hand-holding range of 1/30 second to 1/8000 second. Even throughout the range of 1/30 second to 30 seconds, the amount of read noise doesn’t change much, and the frequency characteristics change hardly at all in the D810.


D810 read noise characteristics vs shutter speed, long exposures

A few posts ago, I did a series of D810 dark-field exposures with different shutter speeds to make sure that one of my test methods, which involved varying exposure by varying shutter speed, wasn’t affecting the quality of the read noise. It wasn’t.

Today I did a similar series at long shutter speeds to find out if the integration of the leakage current from the photodiodes over longer periods of time changes the frequency characteristics of the read noise. I used ISO 1000, which is the highest D810 ISO with minimal histogram clipping and no digital gain (except for white balance prescaling), and shutter speeds from 1/30 second to 30 seconds. Long exposure noise reduction and high-ISO noise reduction were turned off. I used the whole frame — and all four channels — for the calculations.
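For those who want to play along at home, here’s roughly how the per-channel bookkeeping goes in numpy: splitting a mosaicked raw frame into its four Bayer channels by row and column parity. The RGGB layout and all pixel values here are assumed for illustration, not read from a real raw file:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a mosaicked dark-field raw frame (values in ADU).
frame = 600.0 + rng.normal(0.0, 3.0, size=(1000, 1500))

# Split into the four Bayer channels by row/column parity (RGGB assumed).
channels = {
    "R":  frame[0::2, 0::2],
    "G1": frame[0::2, 1::2],
    "G2": frame[1::2, 0::2],
    "B":  frame[1::2, 1::2],
}

# Per-channel dark-field statistics over the whole frame.
for name, ch in channels.items():
    print(name, ch.mean(), ch.std())
```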

The spectral characteristics of the read noise don’t change.

The plots:

d810leH

d810leV

They’re not as tight as the 1/30 – 1/8000 second plots, but they’re pretty close. You can see in this post that, starting at 1/4 second, Nikon applies some processing that lowers the dark-field noise, even if you tell it not to. After that, as the exposures get longer, the read noise gets worse, as you’d expect. However, the basic shape of the curves doesn’t change.


Pattern Error in Sony a7S dark-field images

A couple of posts ago I reported getting rid of almost all the Nikon D810 low-frequency dark-field noise by averaging many (256) dark-field images and subtracting the average from individual images.

I wondered if the same trick would work with the Sony alpha 7S. Sadly, the answer is no.

Here’s the standard deviation of the average of 7S dark-field images as they are added into the mix one by one:

First with the raw values as the vertical axis:

a7siso100avgc

And with the vertical axis converted to electrons:

a7siso100avg

Either way, it doesn’t look very promising. The curves are almost straight lines on the log-log plot, with vertical values halving every time the horizontal values are quadrupled, indicating that almost all the dark-field noise is different from exposure to exposure.
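The reasoning behind reading these curves can be checked with a toy simulation: purely frame-to-frame noise averages down as 1/sqrt(n), halving every time the frame count quadruples, while any fixed component puts a floor under the curve. A numpy sketch, with both noise levels invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (200, 300)

# A fixed pattern (identical in every frame) plus noise that changes
# from frame to frame. Both levels are invented for illustration.
fixed = rng.normal(0.0, 0.2, size=shape)

running_sum = np.zeros(shape)
stds = []
for n in range(1, 257):
    running_sum += fixed + rng.normal(0.0, 3.0, size=shape)
    stds.append((running_sum / n).std())

# Quadrupling the frame count halves the std of the average -- until the
# fixed pattern, which doesn't average down, sets the floor.
print(stds[0], stds[3], stds[255])
```

A straight line on the log-log plot, like the a7S curves, means the fixed component is too small to reach its floor within 256 frames.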

I tried testing at ISO 3200:

a7siso3200avgc

a7siso3200avge

That’s a little more promising. If we look at the histogram of the average of 256 exposures, we can see what’s going on:

Avgda7s3200histo

There are two Gaussian distributions: a narrow one (it’s only narrow because it’s the average of 256 exposures) and a wider one with a slightly lower mean and a much smaller population. The latter is the fixed error.

But subtracting out the averaged image makes little difference to the low-frequency energy:

Without subtraction:

a7s3200

With subtraction:

a7s3200subref

You win some, you lose some…

D810 dark-field pattern error images

In the preceding post, I presented data about the D810 fixed pattern read errors and the results of using averaged dark-field images to correct the fixed part of the read errors. We found that almost all of the low-frequency components of the read errors were fixed — they didn’t change from exposure to exposure.

Now I’ll show you some sample images. As usual in this read noise analysis series, they have been scaled into the range [0,1], had a gamma curve of 2.2 applied, been res’d down to 640×480, and JPEG’d.
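In case you want to reproduce the presentation, here’s a sketch of that pipeline in numpy, with a crude block-average standing in for a proper resampling and the JPEG step omitted (the input image is synthetic, not camera data):

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.normal(600.0, 3.0, size=(480, 640))   # stand-in for a dark-field image

# Scale the full range of the image into [0, 1].
scaled = (img - img.min()) / (img.max() - img.min())

# Apply a display gamma of 2.2.
gammaed = scaled ** (1.0 / 2.2)

# Downsample by block-averaging (a crude stand-in for a proper resize).
f = 2
small = gammaed.reshape(img.shape[0] // f, f, img.shape[1] // f, f).mean(axis=(1, 3))
print(small.shape, float(small.min()), float(small.max()))
```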

A sample dark-field image:

Uncorrected Image

The reason it’s so dark is that hot pixels control the scaling.

The 256-exposure averaged image:

Averaged Image

It’s at least as dark, because the hot pixels are part of the fixed pattern.

The result of subtracting the averaged image from the uncorrected image:

Corrected Image

Can’t see much, can you? Let’s do some low pass filtering, first with a 36-pixel square kernel.
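The low-pass filtering here is just a box (moving-average) filter. Here’s a numpy sketch using an integral image, which is one way to make large kernels cheap; the test image is synthetic white noise, not actual camera data:

```python
import numpy as np

def box_filter(img, k):
    """Mean over a k-by-k square kernel, valid region only."""
    # Integral image with a leading row and column of zeros.
    s = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    s[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return (s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]) / (k * k)

rng = np.random.default_rng(4)
noise = rng.normal(0.0, 1.0, size=(1000, 1000))

# A 36-pixel box filter knocks white noise down by a factor of 36;
# any low-frequency structure survives and becomes visible.
lp = box_filter(noise, 36)
print(noise.std(), lp.std())
```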

The dark-field image:

Uncorrected Image, square kernel, 36 pixels

The averaged image:

Averaged Image, square kernel, 36 pixels

The corrected image:

Corrected Image, square kernel, 36 pixels

Now with a 216-pixel square kernel.

The dark-field image:

Uncorrected Image, square kernel, 216 pixels

The averaged image:

Averaged Image, square kernel, 216 pixels

The corrected image:

Corrected Image, square kernel, 216 pixels

When you look at these images, don’t judge the overall noise level; it’s all been normalized so that the range on all the images is the range of the error. Look at how pleasing or ugly the patterns are.


Pattern error in D810 dark-field images

I’ve been analyzing the low-frequency behavior of read noise in several cameras for the last two weeks. Now I turn my attention to how much of the dark-field image varies from exposure to exposure, and how much forms a fixed pattern. In addition, I will explore the differences in the spatial spectra of the fixed and variable parts of the dark-field image.

The camera I’ve chosen for my first set of experiments is the Nikon D810. I started by making a series of dark-field exposures at ISO 1000 and 1/8000 second. I chose ISO 1000 because that is the ISO where the D810 just starts to clip the left side of the dark-field histogram. It is also the highest ISO on the camera that has no digital gain applied.

I made 256 exposures, and averaged the raw images (all four channels), recording the standard deviation of the averaged image after each exposure was averaged in:

D810averaging

You can see that the curve flattens out after about 128 images in the average, which means that there’s a portion of the dark-field image that doesn’t vary from frame to frame.

Want to see it in electrons? Sure thing:


D810averagingE

The electron count of the curve’s intercept with the left axis may be bigger than you’re used to seeing. That’s because I took the standard deviation of the entire frame, not of a small crop.

Then I took one of the dark-field images and measured the way that the standard deviation varied with averaging kernels of three shapes (one dimensional horizontal, one dimensional vertical, and square) and many sizes:

d810rnlpnosub

You can see that the curves flatten out, indicating that the spatial frequency content of the dark-field image is not flat, or white; there is more low-frequency content than there would be in an image with a flat frequency spectrum.
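You can convince yourself of that interpretation with synthetic data: for white noise, the std after a k-by-k box filter falls as 1/k, so plotting std times k stays flat; add a low-frequency component and the large-kernel end lifts. A numpy sketch in which the "banding" is an invented sinusoidal pattern, not real sensor data:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1024
white = rng.normal(0.0, 1.0, size=(n, n))

# Invented "banding": a weak, low-spatial-frequency horizontal stripe pattern.
banding = 0.05 * np.sin(2 * np.pi * np.arange(n) / 256.0)[:, None]
noisy = white + banding

def std_after_box(img, k):
    """Std of the image after a k-by-k box mean (valid region only)."""
    s = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    s[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    box = (s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]) / (k * k)
    return box.std()

# Std times kernel size: flat for white noise, rising with kernel size
# when low-frequency content is present.
for k in (1, 4, 16, 64):
    print(k, std_after_box(white, k) * k, std_after_box(noisy, k) * k)
```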

I performed the same set of calculations after subtracting the average of 256 frames from the dark-field image:

d810rnlpsub

Now there is very little flattening of the curves — although there is some with the largest vertical kernels — indicating that the corrected image has very little additional low-frequency content over that of an image whose noise is white.

We can get another angle on it by processing the averaged image:

d810rnlp256avg

Yes, indeed. That’s where the low frequency content is. Middle frequency, too; look at how the curves start to flatten even for very small kernels.

It looks like almost all the low-frequency “read noise” of the D810 can be eliminated with the subtraction of a reference image.  You deep-sky photographers might want to take note of that.
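For the deep-sky crowd, the correction amounts to classic master-dark subtraction. A minimal numpy sketch with invented noise levels (a real workflow would match temperature and exposure time, and account for the black-point offset):

```python
import numpy as np

rng = np.random.default_rng(6)
shape = (100, 150)

# Invented levels: a fixed pattern plus smaller frame-to-frame noise.
fixed = rng.normal(0.0, 1.0, size=shape)

def dark_frame():
    return fixed + rng.normal(0.0, 0.5, size=shape)

# Average many dark frames into a reference ("master dark").
reference = np.mean([dark_frame() for _ in range(256)], axis=0)

# Subtracting the reference from a fresh frame cancels the fixed pattern;
# only the frame-to-frame noise remains.
new_frame = dark_frame()
corrected = new_frame - reference
print(new_frame.std(), corrected.std())
```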


D810 read noise characteristics vs shutter speed

There’s an assumption buried in the protocol for the experiment that led to yesterday’s post: that the read noise of the D810 doesn’t change character over the range of shutter speeds employed in the testing, which ranged from 1/60 second (used to get the histogram that let us estimate the electron counts) to 1/8000 second (the highest speed used). Unverified assumptions being A Bad Thing, I thought I’d run a test.

I set the ISO of a D810 to 3200, and made a series of dark-field exposures at shutter speeds from 1/15 through 1/8000, then subjected them to the same processing I earlier used to assess the read noise vs ISO characteristics of the camera.

The results:

D810RBHSS

D810RBVSS

Sensor referred:

D810RBHSSsensor


D810RBVSSsensor

As a ratio to ideal behavior:

D810RBratioHSS

D810RBratioVSS

There’s a little spread at the low-frequency, large kernel end of the horizontal averaging, but things look pretty consistent.

Just as a check, here’s the 1/8000 image with a square averaging kernel of 36 pixels:

D810ISO3200-8000-36


And the equivalent 1/60 second exposure:

D810ISO3200-60-36


Well, that’s one less thing to worry about.