This is a continuation of a series of posts on the Sony a7RIII. You should be able to find all the posts about that camera in the Category List on the right sidebar, below the Articles widget. There’s a drop-down menu there that you can use to get to all the posts in this series. You can also click on the “a7RIII” link in the “You are here” line at the top of this post.
I’ve reported several times in this series of posts about the digital spatial filtering that the a7RIII applies at exposures of 4 seconds and longer. Among astronomical photographers, that is known as “star-eater” processing. I wanted to see what the effect was on an artificial star.
So I bought one.
This one came without planets or thermonuclear reactions. It didn’t give off much heat, either. You can hold it in your hands and it runs off AAA cells.
It has an LED source inside, and a 100-micrometer laser-drilled hole through which the photons emerge. I put a Zeiss 55 mm f/1.4 Otus ZF.2 lens on an a7RIII and set it up about 6 meters away. I figure that a perfect 55 mm lens with no diffraction would give me an image on the sensor of about 1 µm. As you’ll see, I didn’t end up with detected “stars” anywhere near that small.

The artificial star was too bright, so I put two ND400 neutral density filters over the lens. I set the lens to f/2.8, put the camera and lens on a rail, set the shutter mode to 2-second self-timer, focused (oops, too dark to focus; I took one of the ND filters off and focused, then put it back on), then backed the camera up about 4 inches. Then I made a series of exposures at 3.2 seconds (no star-eating) and 4 seconds (open season on stars), moving the camera forward about an inch after each pair of exposures.

I developed the images in Lightroom with the just-released-today version that knows all about a7RIII files. I never found the place on the rail where there was only one pixel excited in the demosaiced images. Maybe, thanks to the Lr demosaicing algorithm, there is no such place.
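If you want to check that rough 1 µm figure, here’s a quick thin-lens calculation in Python (the 6-meter distance is approximate, and a thin lens is only a stand-in for the Otus’s actual design):

```python
# Rough geometric (no-diffraction) image size of the 100 um pinhole.
# Thin-lens approximation; the 6 m subject distance is approximate.
f = 0.055        # focal length in meters (55 mm Otus)
u = 6.0          # subject distance in meters
source = 100e-6  # pinhole diameter in meters

v = 1 / (1 / f - 1 / u)   # image distance from the thin-lens equation
m = v / u                 # magnification
print(f"magnification: {m:.5f}")
print(f"geometric image of the pinhole: {source * m * 1e6:.2f} um")
# About 0.93 um -- well under the a7RIII's roughly 4.5 um pixel pitch.
```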
I’m going to show you four representative images in pairs. First in each pair is the 3.2-second exposure, and second is the 4-second one. These are magnified by a lot, using nearest-neighbor resampling so that pixels in the demosaiced image show up as squares in the blowup. This is terrible image-processing methodology, but it’s good if you want to see what’s happening at the pixel level.
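If you want to make blowups like these yourself, a nearest-neighbor enlargement along these lines will do it (sketched here with Pillow; the file name and the 16x factor are just placeholders):

```python
# Enlarge a small crop with nearest-neighbor resampling so each demosaiced
# pixel shows up as a square. The file name and 16x factor are placeholders.
from PIL import Image

crop = Image.open("star_crop.tif")
blowup = crop.resize((crop.width * 16, crop.height * 16), Image.NEAREST)
blowup.save("star_crop_16x.png")
```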
There is camera motion between the two images; I’m responsible for that, since I touched the camera to change shutter speeds between exposures. There is, however, a pattern that will become apparent as I show you more similar pairs: the 4-second image is more diffuse than the 3.2-second one, and the peak values are lower, in spite of the 4-second image receiving a third of a stop more photons.
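In case you want to check the third-of-a-stop figure:

```python
# Exposure difference between the 4-second and 3.2-second shutter speeds, in stops.
import math
print(math.log2(4 / 3.2))   # ~0.32, i.e. about a third of a stop
```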
Another pair:
And another:
And, mercifully, the last:
So, if the stars aren’t really small, maybe the algorithm should be called the star-spreader, not the star-eater.
The color shifts are interesting; they suggest there may be something to the speculation that the a7RIII spatial filtering favors (is gentler with) the green channels than the red and blue ones.
Frans van den Bergh says
Hi Jim,
Maybe you could try to add a collimator to the front of your star source? This might help to keep the source small enough to show up as a sub-pixel point source on the sensor.
I would imagine that simply placing a prime lens in front of the star source would produce sufficient collimation for the purpose of this experiment.
Jack Hogan says
Jim, it would be interesting to run this test on white-balanced raw data in order to avoid all the messy processing. Unless you have better ways to measure it, white balance could be achieved by fitting the quantized PSF to something like an Airy/Gaussian convolved with a squarish pixel.
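A minimal sketch of that kind of fit, assuming SciPy and using a Gaussian as a stand-in for the real PSF; the 7x7 patch file and the starting guesses are placeholders:

```python
# A sketch of fitting a Gaussian PSF, integrated over square 1x1 pixels,
# to a small patch of values around the "star". The patch file and the
# starting guesses are placeholders; a Gaussian stands in for the real PSF.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def pixel_gauss(coords, amp, x0, y0, sigma, offset):
    # Integral of a separable 2-D Gaussian over each 1x1 pixel, via erf.
    x, y = coords
    s = np.sqrt(2) * sigma
    gx = 0.5 * (erf((x + 0.5 - x0) / s) - erf((x - 0.5 - x0) / s))
    gy = 0.5 * (erf((y + 0.5 - y0) / s) - erf((y - 0.5 - y0) / s))
    return (amp * gx * gy + offset).ravel()

z = np.loadtxt("star_patch.txt")             # hypothetical 7x7 patch of raw values
y, x = np.mgrid[0:z.shape[0], 0:z.shape[1]]  # pixel coordinates of the patch
p0 = (z.max() - z.min(), 3.0, 3.0, 1.0, z.min())
popt, _ = curve_fit(pixel_gauss, (x, y), z.ravel(), p0=p0)
print("centroid (x, y):", popt[1], popt[2], " sigma (pixels):", popt[3])
```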
Brandon Dube says
It’s a very dangerous assumption to make that the lens’s PSF is an Airy disk at any aperture. At full aperture it is not diffraction limited, and at smaller apertures you continue to contend with chromatic aberrations as well as the non-round shape of the pupil.
Jack Hogan says
Well it looks like Jim got some interesting answers even without white balancing. But if all the information you had was the intensity of a ‘star’ in the raw file, how would you go about approximately equalizing the channels?
JimK says
I don’t think you could do it with one star, but with a whole sky full of them, you probably could. Or you could use a diffuser, take a picture of sunlight, and use those ratios.
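For the diffuser route, something like this would pull rough channel ratios out of the raw CFA data (a sketch assuming the rawpy package; the file name is a placeholder, and equal black levels across channels are assumed):

```python
# A rough way to get per-channel ratios from a diffuser shot of sunlight.
# Assumes the rawpy package; the file name is a placeholder, and equal
# black levels across channels are assumed for simplicity.
import numpy as np
import rawpy

with rawpy.imread("diffuser_shot.ARW") as raw:
    black = raw.black_level_per_channel[0]
    cfa = raw.raw_image_visible.astype(np.float64) - black
    colors = raw.raw_colors_visible       # 0=R, 1=G, 2=B, 3=G2 on a Bayer sensor
    means = [cfa[colors == c].mean() for c in range(4)]

green = (means[1] + means[3]) / 2
print("white balance multipliers (R, G, B):", green / means[0], 1.0, green / means[2])
```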
JimK says
Turns out you don’t even need to white balance to see interesting things. I’ll report on that in a post today. Thanks.
Edna says
Every time someone says “quantized point spread function to an Airy/Gaussian convolved squarish pixel” a Republican loses an Alabama senate election.
Matt Anderson says
It reminds me of the Minimum / Maximum filter in Photoshop.
Take a patch of generated monochromatic noise in Photoshop, apply a 0.8 px Minimum filter (blue and purple squares), then apply a 0.8 px Maximum filter (purple square). I’ve used it to clean up noisy night skies as well.
https://imgur.com/a/EXLRc
Hope the image shows up; if not, just copy and paste this.
The Maximum filter is a great way to increase star visibility when downsizing images for web presentation. But don’t tell anyone this 😉
We use it for post-production noise reduction (Minimum filter) and for cleaning up selections, masks, skin, etc.
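Something like the following approximates those Photoshop passes with grayscale morphology (a sketch using SciPy; these filters work on whole-pixel neighborhoods, so the 0.8 px radius is rounded up to a 3x3 footprint):

```python
# Approximating Photoshop's Minimum/Maximum passes with grayscale morphology.
# SciPy works on whole-pixel neighborhoods, so the 0.8 px radius is rounded
# up to a 3x3 footprint here; the noisy patch is synthetic.
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

img = np.random.default_rng(0).normal(0.5, 0.1, (256, 256))  # stand-in noisy patch
shrunk = minimum_filter(img, size=3)       # "Minimum" pass: erodes bright specks
cleaned = maximum_filter(shrunk, size=3)   # "Maximum" pass: restores surviving structure
```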
Erik Kaffehr says
Hi Jim,
Interesting stuff! Would be interesting to see the undemosaiced pixels, though.
What is the artificial star that you were using? I would be interested in having such a device for my own testing.
Kind regards
Erik
JimK says
>Would be interesting to see the undemosaiced pixels, though.
Here you go:
http://blog.kasson.com/a7riii/sony-a7riii-star-spreading-raw-composites/
JimK says
>What is the artificial star that you were using?
https://www.astrozap.com/scripts/prodList.asp?idCategory=67
Andrew Wilson says
Really interesting post! Perhaps it would also be interesting to repeat the experiment using a Leica Summicron on an a7RIII body (Bayer filter) and on a Leica M Monochrom body (no Bayer filter)? Not really apples-to-apples, since the photosites on the M sensor are larger, but it could help isolate which artifacts are lens-induced and which are demosaicing-induced.