Sigma has just started to ship the dp2 Quattro, which departs from the original Foveon idea: three vertically stacked pixels that exploit the wavelength-dependent absorption of light in silicon so that, with a little point processing, you end up with red, green, and blue values all captured from the same part of the image. The dp2 has four top-layer pixels over each single second- and third-layer pixel. Sigma claims that the top layer provides luminance information at four times the areal density of the chromaticity sampling, giving the observer color information in a way appropriate to human vision.
Put me down as a believer in the principle that you need more luminance resolution than chromaticity resolution in photographs, because the way the human eye responds to spatial frequency variations in luminance differs from the way it responds to variations in chromaticity. Here’s a discussion of that point and its implications for photography. Here’s a post with a crude tool that lets you learn a little about your own personal luminance contrast sensitivity function. Here’s a similar display of chromaticity variations.
The late lamented Eastman Kodak company used this technique to good advantage in the PhotoCD; their computation of the luminance plane was done in accordance with accepted colorimetry.
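To make "in accordance with accepted colorimetry" concrete, here is a minimal sketch of computing a luminance plane from linear RGB using the Rec. 709 / CIE relative-luminance weights. This is illustrative only; Kodak's actual PhotoYCC encoding in the PhotoCD differed in its details.

```python
# Sketch: a luminance plane computed from linear RGB with colorimetric
# weights (Rec. 709 primaries, D65 white). Illustrative, not the exact
# PhotoCD PhotoYCC pipeline.

def luminance(r, g, b):
    """Relative luminance Y from linear RGB values in [0, 1]."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Note how heavily green is weighted relative to red and blue:
print(luminance(1.0, 0.0, 0.0))  # red only   -> 0.2126
print(luminance(0.0, 1.0, 0.0))  # green only -> 0.7152
print(luminance(0.0, 0.0, 1.0))  # blue only  -> 0.0722
```

The green-dominated weighting is the bias at the heart of the argument about the Quattro's top layer.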
But put me in the skeptical camp with respect to the Quattro. The reason is that the top layer is not sampling just luminance. Here’s a presentation on the Quattro that includes the wavelength response of each of the three layers:
The luminance spectral response is quite heavily biased towards green light. For the top layer to sample luminance accurately, it would have to exhibit the same bias. Instead, the Quattro top layer response is biased heavily towards blue, with a peak at a wavelength where the human luminance response is quite low. Meanwhile, the second layer is pretty much sampling luminance, so we are guaranteed luminance contamination in the putative chromaticity channels.
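A rough numerical sketch of the mismatch, using crude Gaussian stand-ins (not measured Foveon data) for the layer responses and for the photopic luminosity function V(λ), which peaks near 555 nm:

```python
import math

# Crude Gaussian models (NOT measured Foveon or CIE data) showing why a
# blue-peaked top layer is a poor luminance sampler: a green-peaked
# response overlaps V(lambda) well, a blue-peaked one does not.

def gaussian(wl, peak, width):
    return math.exp(-((wl - peak) / width) ** 2)

def v_lambda(wl):
    # Toy photopic luminosity function, peaked near 555 nm.
    return gaussian(wl, 555.0, 50.0)

def luminance_overlap(peak, width=60.0):
    """Filter-weighted average of V(lambda) over 400-700 nm."""
    num = sum(gaussian(wl, peak, width) * v_lambda(wl) for wl in range(400, 701))
    den = sum(gaussian(wl, peak, width) for wl in range(400, 701))
    return num / den

print(luminance_overlap(450))  # blue-peaked top layer: low overlap
print(luminance_overlap(540))  # green-peaked response: high overlap
```

Under these toy assumptions the green-peaked response tracks luminance far better than the blue-peaked one, which is the point of the skepticism above.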
I’ve always looked at Bayer sensors as having effectively about half the quoted number of pixels. But that’s not a problem, per se. In my mind, the big problem with the Bayer CFA is not the sparse sampling of each channel, but the fact that different channels are sampled at different places, giving a particularly noxious kind of aliasing. The original Foveon sensors may have had other problems, but they didn’t have that one.
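A toy one-dimensional demo of that noxious aliasing, with hypothetical values: sample the red channel of a neutral (gray) step edge at even positions and the green channel at odd positions, fill the gaps by nearest neighbor, and a color fringe appears where the scene had none.

```python
# Toy 1D illustration of false color from sampling different channels
# at different places. A neutral step edge has R == G everywhere, yet
# the reconstruction disagrees at the edge.

edge = [0.0] * 4 + [1.0] * 4          # gray step edge

red   = [edge[i] if i % 2 == 0 else None for i in range(8)]
green = [edge[i] if i % 2 == 1 else None for i in range(8)]

def fill(samples):
    """Nearest-neighbor gap fill: copy the adjacent sample."""
    out = list(samples)
    for i, v in enumerate(out):
        if v is None:
            out[i] = out[i - 1] if i > 0 and out[i - 1] is not None else out[i + 1]
    return out

r, g = fill(red), fill(green)
# A neutral scene should reconstruct with r == g at every pixel:
fringes = [i for i in range(8) if r[i] != g[i]]
print(fringes)  # prints [4]: a false-color fringe at the edge
```

A co-sited sensor like the original Foveon sees R and G step at the same pixel, so this particular failure cannot occur.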
Even if the top layer actually sampled luminance, unless there were an AA filter with a null at half the chrominance sampling frequency, we would have the opportunity for false color, since all four luminance samples in a quad would share the same chromaticity. However, we wouldn’t have the riot of colors that we get with a Bayer CFA, since all four pixels would be reconstructed with a single chromaticity. Because the top layer is not sensitive strictly to luminance, the opportunity for false color remains, although the effects should be milder than with a Bayer sensor: in the Quattro, six photosites produce the information the raw developer uses to generate 12 values, while Bayer demosaicing has to make do with information from four photosites to get the same 12 values.
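The photosite arithmetic per 2×2 block of output pixels, spelled out:

```python
# Raw samples available per 2x2 block of output pixels (12 output
# values: 4 pixels x 3 channels).

quattro_photosites = 4 + 1 + 1   # four top-layer + one middle + one bottom
bayer_photosites   = 4           # one R, two G, one B
output_values      = 4 * 3

print(quattro_photosites, bayer_photosites, output_values)  # 6 4 12
```

So the Quattro's raw developer starts with half again as many measurements per reconstructed block as a Bayer demosaicer does.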
Here’s my question: why didn’t the Sigma folks make the layer with the quad pixel structure the second layer instead of the first? Then it would come a lot closer to sampling luminance.