
Antialiasing, part 4: the future

January 4, 2011 JimK 1 Comment

It’s pretty clear to me that the biggest aliasing problems today are caused by the Bayer pattern and similar methods that construct a color sensor by detecting different spectra at different places on the chip. One way to make a big improvement would be to get all the RGB photosensitive regions that make up a pixel looking at the same place on the lens-created image.
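To see concretely why sampling different colors at different places causes trouble, here is a minimal NumPy sketch (all names and values are illustrative, not from any particular camera): a purely gray stripe pattern, one photosite wide, comes out strongly colored once each photosite sees only one channel.

    import numpy as np

    # Grayscale stripes one photosite wide: pure luminance detail, no color.
    scene = np.tile([0.0, 1.0], 64)        # one row: 128 alternating samples
    scene = np.tile(scene, (8, 1))         # 8 rows x 128 columns, all gray

    # RGGB Bayer sampling: each photosite sees only one color channel.
    red = scene[0::2, 0::2]                # red sites: even rows, even columns
    blue = scene[1::2, 1::2]               # blue sites: odd rows, odd columns

    # Red sites land only on dark columns and blue sites only on bright
    # ones, so the reconstructed "color" is wildly wrong:
    print(red.mean(), blue.mean())         # 0.0 1.0 -- saturated false color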

There is currently a sensor that does that very thing: the Foveon sensor, available in some Sigma cameras. It works by stacking the red, green, and blue photodetectors on top of each other. There are some disadvantages in color accuracy compared with a Bayer array, and the chips appear to be noisier than the best conventional CMOS chips. There is some controversy over how to compare pixel counts on Foveon versus conventional sensors. I have previously argued here that the pixel count on Bayer-array sensors should be divided by two to get real RGB pixels. The Foveon marketers have been caught up in the same silly numbers race as the Bayer guys, and argue that their pixel count should be multiplied by three. I’d ignore that and just count the real pixels. Viewed that way, the current highest-resolution Foveon chip, at about 5 megapixels, is close in image information to the Nikon D3 and D3s, whose 12-megapixel counts correct to 6 megapixels. There is no full frame Foveon chip, nor is there a medium format one.

Other ways to accomplish the same goal with today’s technology:

  • Three monochromatic chips with prisms and filters to separate the colors, as in many video cameras. Not likely for high-end still photography because of the cost and size penalties.
  • One monochromatic chip with a filter wheel. This was a commercial alternative in the late ’80s and early ’90s, but it seems to have fallen from favor. It only works with motionless subjects.
  • Micro-movement of the sensor, with a separate exposure at each position, so that each point on the image is seen in turn by red, green, and blue elements. This was done by Imacon (now Hasselblad) years ago. It, too, only works on non-moving subjects.
  • You can’t buy it yet, but Nikon has held a patent for four years on a photodetector system that collects three color photosites under one microlens.

In the future, who knows? My money is on the technology that made digital audio work so well: oversampling. The basic idea is to sample the input at a higher frequency than you’ll use for the output, then use digital filtering to remove unwanted high-frequency signals before converting the samples to the lower output rate. It provides an improvement because digital filters can be made to do things that are impractical in the analog world. The analog antialiasing filters used in audio are a lot more precise and controllable than the ones used in photography, so oversampling should provide even bigger gains in photography than it did in audio.
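For the record, here is what oversampling, digital filtering, and decimation look like in code; this is a generic audio-style sketch in NumPy, where the rates, the tap count, and every name are assumptions for illustration:

    import numpy as np

    RATIO = 16                             # oversampling ratio
    fs_out = 48_000                        # output sample rate
    fs_in = fs_out * RATIO                 # rate at which we actually sample

    # A tenth of a second of input: a wanted 1 kHz tone plus a 30 kHz tone
    # that would alias into the output band if we kept every 16th raw sample.
    t = np.arange(fs_in // 10) / fs_in
    x = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 30_000 * t)

    # Digital low-pass (windowed-sinc FIR) cutting off at the output Nyquist
    # frequency -- the step that's easy digitally and hard in analog.
    n = np.arange(-256, 257)
    h = np.sinc(2 * (fs_out / 2) / fs_in * n) * np.hamming(n.size)
    h /= h.sum()                           # unity gain at DC

    y = np.convolve(x, h, mode="same")[::RATIO]   # filter, then decimate

Decimating the raw signal directly would fold the 30 kHz tone down to 18 kHz in the 48 kHz output; filtering first removes it.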

How might this work? Since we’re dreaming, let’s imagine a sensor that has a 4×4 array of photosites – half green, one-quarter red, and one-quarter blue – for each output pixel. We’ll put the centers of the 4×4 arrays 8 micrometers apart, so that we could make a full frame 35mm-sized chip with 13.5 million real, three-color pixels (a little better resolution than the D2x), or a 4.5×6 cm sensor with over 42 megapixels. Since the actual photosites are on a pitch of two micrometers (three times the wavelength of red light!), even with a 1.4 to 2 multiplier for the Bayer pattern, we can count on the lens itself being an adequate antialiasing filter: it would have to resolve better than 250 to 360 lp/mm to create an alias. If you just averaged all the green photosites to one value, and did the same for the reds and the blues, you’d have the equivalent of a huge Foveon chip. But you could do better than that. By applying some smart filtering algorithms, you could trade off low noise (which you get by using lots of photosites in the calculation) against high resolution (which you get by using fewer). It’s just a dream at this point; such a chip would have more than 200 million photosites in the 35mm form, and more than 600 million in the medium format size.
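The simplest readout for such a chip is easy to sketch. Assuming a standard RGGB mosaic within each 4×4 block (which gives the half-green, quarter-red, quarter-blue split above; the layout and all names are illustrative), the per-pixel averaging might look like this:

    import numpy as np

    # One 4x4 block of an RGGB mosaic: 4 red, 8 green, and 4 blue photosites.
    pattern = np.array([["R", "G", "R", "G"],
                        ["G", "B", "G", "B"],
                        ["R", "G", "R", "G"],
                        ["G", "B", "G", "B"]])

    def block_to_pixel(block):
        """Average each color's photosites into one RGB output pixel."""
        return tuple(block[pattern == c].mean() for c in "RGB")

    block = np.random.rand(4, 4)           # stand-in photosite readings
    print(block_to_pixel(block))           # (R, G, B) for this output pixel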

Oversampling could offer an advantage in chip yield as well. Dead photosites could be identified during testing, and the chip programmed to ignore them. Assuming one or fewer failures per 4×4 array, there would still be at least three good samples per color per output point. The only cost would be a bit of added noise and a bit of lost resolution.
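Continuing the sketch above, mapping out dead photosites is one boolean mask (again, the layout and names are hypothetical):

    import numpy as np

    # Same hypothetical RGGB layout as the previous sketch.
    pattern = np.array([["R", "G", "R", "G"],
                        ["G", "B", "G", "B"],
                        ["R", "G", "R", "G"],
                        ["G", "B", "G", "B"]])

    alive = np.ones((4, 4), dtype=bool)    # photosite map from factory test
    alive[2, 1] = False                    # say one green site tested dead

    def block_to_pixel_masked(block):
        """Per-color means over the live photosites only."""
        return tuple(block[(pattern == c) & alive].mean() for c in "RGB")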

Can we get photosites that small? The Canon S70 already has a pixel pitch of 2.3 micrometers, so that part ought to work. The tricky part would be keeping the fill factor high as the pixels get smaller. If that’s not possible, we’d lose low-light performance and dynamic range.

Since the sensor can now detect whether or not there’s high-frequency information in the image (assuming it can sort out the signal from the noise), it could do tricky things like recruiting adjacent 4×4 arrays, or parts of them, where high-frequency detail is absent, and averaging in their contents to lower noise. That boils down to having adaptive pixel size: big where there’s not much detail, and small where there is, which matches the eye’s tendency to see noise readily in smooth areas but not where there’s detail.
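A crude version of that adaptive behavior fits in a few lines. In this sketch (the variance threshold and all names are assumptions), an output pixel in a smooth region is averaged with its eight neighbors, while pixels in detailed regions are left alone:

    import numpy as np

    def adaptive_bin(pixels, threshold=1e-3):
        """pixels: 2-D array of per-block values for one color plane."""
        out = pixels.copy()
        rows, cols = pixels.shape
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                hood = pixels[i-1:i+2, j-1:j+2]   # pixel plus 8 neighbors
                if hood.var() < threshold:        # smooth area: act like a
                    out[i, j] = hood.mean()       # bigger, lower-noise pixel
        return out                                # detailed areas untouched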

It’s a fair question whether it makes sense to do all this processing on the chip, or simply to upload a 400MB raw file for the 35mm-sized sensor or a 1.2GB file for the medium format one (at 14 bits per photosite, 216 million photosites come to roughly 380 MB, and 675 million to roughly 1.2 GB). It comes down to how fast the processing can be done on the chip versus the time spent transmitting those big files.

Producing this series of posts has been fun and instructive for me, since collecting and writing down my thoughts has refined them. It’s also made me think about another topic: antialiasing in reconstruction. More on that later.


Comments

  1. chester says

    January 5, 2011 at 7:42 pm

    last fall, sigma announced the SD1 due out this spring(?) which is supposed to have 15.4 million real pixels on a 24 x 16mm (1.5x APS-C) sensor. the predecessor (SD15) had 4.7MP on a smaller 20.7 X 13.8mm (1.7x) sensor. it’ll be interesting to see what this sensor can do with their lenses.

