This is a continuation of a series of posts on the Sony a7RIII. You should be able to find all the posts about that camera in the Category List on the right sidebar, below the Articles widget. There’s a drop-down menu there that you can use to get to all the posts in this series. You can also click on the “a7RIII” link in the “You are here” line at the top of this post.
I’ve reported several times in this series about the digital spatial filtering that the a7RIII applies at exposures of 4 seconds and longer. Among astronomical photographers, that is known as “star-eater” processing. I wanted to see what the effect was on an artificial star.
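Nobody outside Sony knows exactly what the filter does, but a commonly floated model is a hot-pixel suppressor: any photosite standing too far above its same-color neighbors gets clamped down to the neighborhood maximum. Here is a minimal sketch of that idea; the threshold, the neighborhood, and the clamping rule are all my assumptions for illustration, not Sony's actual (undisclosed) algorithm.

```python
import numpy as np

def clamp_hot_pixels(channel, ratio=2.0):
    """Hypothetical 'star-eater' model: clamp any photosite whose value
    exceeds `ratio` times the brightest of its 8 neighbors (within one
    color channel) to that neighbor maximum. Illustrative only."""
    padded = np.pad(channel, 1, mode='edge')
    h, w = channel.shape
    # Max over the 8 shifted views, i.e. the brightest neighbor of each pixel
    shifts = [padded[r:r + h, c:c + w]
              for r in range(3) for c in range(3) if (r, c) != (1, 1)]
    neighbor_max = np.maximum.reduce(shifts)
    out = channel.copy()
    mask = channel > ratio * neighbor_max
    out[mask] = neighbor_max[mask]
    return out

# A lone bright "star" on a dark background gets eaten...
star = np.full((5, 5), 10.0)
star[2, 2] = 1000.0
print(clamp_hot_pixels(star)[2, 2])   # clamped to 10.0
```

Under this model a point source confined to one photosite is suppressed, while light spread over several photosites survives — consistent with the spreading behavior shown below, though again, this is a guess at the mechanism.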
So I bought one.
This one came without planets or thermonuclear reactions. It didn’t give off much heat, either. You can hold it in your hands and it runs off AAA cells.
It has an LED source inside, and a 100-micrometer laser-drilled hole through which the photons emerge. I put a Zeiss 55 mm f/1.4 Otus ZF.2 lens on an a7RIII and set it up about 6 meters away. I figure that a perfect 55 mm lens with no diffraction would give me an image on the sensor of about 1 µm. As you’ll see, I didn’t end up with detected “stars” anywhere near that small. The artificial star was too bright, so I put two ND400 neutral density filters over the lens.

I set the lens to f/2.8, put the camera and lens on a rail, set the shutter mode to 2-second self-timer, and focused (oops, too dark to focus; I took one of the ND filters off, focused, then put it back on). Then I backed the camera up about 4 inches and made a series of exposures at 3.2 seconds (no star-eating) and 4 seconds (open season on stars), moving the camera forward about an inch after each pair. I developed the images in Lightroom with the just-released-today version that knows all about a7RIII files. I never found the place on the rail where only one pixel was excited in the demosaiced images. Maybe, thanks to the Lr demosaicing algorithm, there is no such place.
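For what it’s worth, the arithmetic behind that 1 µm estimate is simple thin-lens magnification; this back-of-the-envelope sketch ignores diffraction and treats the Otus as an ideal thin lens. It also tallies the attenuation of the two stacked ND400 filters.

```python
import math

# Thin-lens approximation of the artificial star's image size
f = 55.0        # focal length, mm
d = 6000.0      # subject distance, mm (about 6 m)
hole = 0.100    # pinhole diameter, mm (100 micrometers)

m = f / (d - f)        # magnification for a subject at distance d
spot = hole * m        # geometric image diameter, ignoring diffraction
print(f"magnification: {m:.5f}")
print(f"spot diameter: {spot * 1000:.2f} um")   # about 0.93 um

# Light loss from two stacked ND400 filters, in stops
stops = 2 * math.log2(400)
print(f"ND attenuation: {stops:.1f} stops")     # about 17.3 stops
```

So the geometric image of the pinhole is well under the a7RIII’s roughly 4.5 µm pixel pitch, which is why a perfect, unfiltered system might excite only a single photosite.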
I’m going to show you four representative images in pairs. First in each pair is the 3.2-second exposure, and second is the 4-second one. These are magnified by a lot, using nearest neighbor so that pixels in the demosaiced image show up as squares in the blowup. This is terrible image-processing methodology but is good if you want to see what’s happening on a pixel level.
There is camera motion between the two images. I’m responsible for that since I touched the camera to change shutter speeds between the two photographs. There is, however, a pattern that will become apparent as I show you more similar pairs: the 4-second image is more diffuse than the 3.2-second one, and the peak values are lower, in spite of the 4-second image receiving a third of a stop more photons.
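That “third of a stop” is just the base-2 log of the ratio of the two shutter speeds:

```python
import math

# Exposure advantage of the 4 s frame over the 3.2 s frame, in stops
extra_stops = math.log2(4 / 3.2)
print(f"{extra_stops:.2f} stop")   # 0.32 stop
```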
And, mercifully, the last:
So, if the stars aren’t really small, maybe the algorithm should be called the star-spreader, not the star-eater.
The color shifts are interesting; they lend support to the speculation that the a7RIII’s spatial filtering favors (is more gentle with) the green channels than the red and blue ones.