A long time ago, I experimented with developing deep infrared images without demosaicing. I used some of my own images from a camera that had an antialiasing filter, and some that I got from Lloyd Chambers, shot with a camera with no AA filter. I found very small improvements in the ones without the AA filter. But I couldn’t show you the images, because they were Lloyd’s. Now that I have the no-AA IR-modified (720 nm) GFX 50R, I can do the same tests and show you the results.
I used the same lens that Lloyd did, the Coastal Optical 60 mm f/4 UV-VIS-IR. I put a B+W 093 830 nm lowpass filter on the lens (the same filter that Lloyd used), and set the f-stop to f/5.6. I focused with peaking — gosh the CO 60/4’s focusing rack is twitchy! — and shot with a 10-second self-timer delay with the camera on a tripod.
I developed the raw file in Matlab in two ways. The first was a demosaicing technique called AHD. It was close to the state of the art ten years ago, but has been surpassed. The second was by balancing the four raw channels (R, G1, G2, B) and using the balanced pixels directly in a composite image, with no interpolation at all. This works because at 830 nm and longer wavelengths, all the channels of the Bayer color filter array in the camera are nearly transparent, and the sensor becomes virtually a monochromatic sensor with slightly different gains for the four channels.
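The channel-balancing step is simple enough to sketch in a few lines of NumPy (my processing was in Matlab; this Python version, the RGGB-style 2x2 phase indexing, and the flat-field test scene are illustrations of the idea, not my actual code):

```python
import numpy as np

def balance_bayer(raw):
    """Gain-balance the four Bayer channels of a raw mosaic and return
    the result as a single full-resolution monochrome image.

    Assumes an even-sized mosaic with a 2x2 CFA period; which channel
    sits at which phase doesn't matter, since all four are treated alike.
    """
    out = raw.astype(np.float64).copy()
    target = out.mean()  # common level to bring all four channels to
    for dy in (0, 1):
        for dx in (0, 1):
            channel = out[dy::2, dx::2]            # one CFA phase, as a view
            channel *= target / channel.mean()     # per-channel gain correction
    return out

# Synthetic example: a flat gray scene seen through slightly different
# channel gains, as at 830 nm where all the CFA dyes are nearly transparent.
scene = np.full((4, 4), 100.0)
gains = np.array([[1.00, 1.05],
                  [1.03, 0.98]])
mosaic = scene * np.tile(gains, (2, 2))
balanced = balance_bayer(mosaic)
```

For a flat field like this, balancing removes the channel-to-channel gain differences completely and the result is uniform. On a real image you'd want to derive the four gains from a uniform patch (or a flat-field exposure) rather than from whole-image means, but for an 830 nm scene, where each channel samples essentially the same luminance, the whole-image means are a reasonable stand-in.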
Here is a 300% crop of the AHD-developed image:
And here’s the balanced image:
There’s a little more contrast in the balanced image, and it is maybe a hair sharper. I’ll bet I could sharpen the AHD image enough to make the two equivalent, though.
My take is that there’s not much to be gained from this technique that applies to real-world photography.
By extension, you could say the same thing about sensors that are monochromatic in visible light, but I’m sure that would be controversial. Even if you wanted to use these results to make such a statement, it would only apply to monochromatic subject matter, and you’d have to ignore false-color artifacts.