
Antialiasing, part 1

December 31, 2010, by JimK

The following question came up in a photography mailing list:

“…could you explain the thinking behind the evils of the antialiasing filter on DSLR’s? Why are they needed and what are their advantages/disadvantages and why don’t 2 1/4 digital backs have them, or do they?”

Since there seems to be general interest in this topic, I thought I’d take a crack at it and post it here.

First, my background as it relates to this question. From 1969 to 1989, I worked with one-dimensional sampled data systems at Hewlett-Packard, Rolm Corp., and IBM. From 1989 until 1995, I worked with digital photographic systems at IBM. With this history, I have to work to keep the math to a minimum. I will do so. I promise: no equations. I will have to talk about the concept of frequency and spectral content. I’ll do my best to relate what I’m saying to concepts understandable by most people. I will be doing a lot of simplification, and if I stray into oversimplification, either inadvertently or deliberately, I apologize in advance.

Before we talk about antialiasing, we need to understand what aliasing is. For that, we go to AT&T in 1924, where a guy named Harry Nyquist came up with a surprising idea, later generalized by the father of information theory, Claude Shannon.

Even though his ideas had broader applicability, Nyquist worked only with signals that varied with time, and I’m going to consider only those kinds of signals for the rest of this post; in the next one, I’ll get around to the two-dimensional spatially sampled signals applicable to photography. Nyquist was most interested in the voice signals that AT&T got paid to transmit from place to place. These days, almost everything you hear has been sampled: CDs, wireline and cellular telephone calls, satellite and HD radio, Internet audio, and iPod music.

The crux of Nyquist’s 1924 insight was that, given an idealized sampling system (perfect amplitude precision, no sampling jitter, infinitely small sampling window, etc.), if you took regularly spaced samples of the signal at a rate faster than twice the highest frequency in the signal, you had enough information to perfectly reconstruct the original signal. This came to be known as the Nyquist Criterion, and although I’ve worked with it for years, I still find it pretty amazing. It turns out that the Nyquist Criterion’s path from theory to practice has been pretty smooth; real systems, which don’t obey all the idealizing assumptions (some of which are pretty severe: pick any frequency you like, and any signal of finite duration has some frequency content above it), come very close to acting like the ideal case.
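If you’d like to see that claim in action without any equations, here’s a quick numerical sketch in Python (the frequencies and the use of NumPy are my own illustrative choices): a 3 kHz sine is sampled at 20 kHz, comfortably above twice its frequency, and then rebuilt between the samples by ideal sinc interpolation. Away from the ends of the record, the reconstruction tracks the original sine closely; the small residual comes from having only a finite number of samples to feed the ideally infinite sinc sum.

    import numpy as np

    fs = 20_000.0            # sampling frequency, Hz
    f = 3_000.0              # signal frequency, Hz (below fs / 2 = 10 kHz)
    T = 1.0 / fs
    n = np.arange(200)       # 200 samples = 10 ms of signal
    samples = np.sin(2 * np.pi * f * n * T)

    # Ideal reconstruction between the samples:
    # x(t) is the sum over n of x[n] * sinc((t - n*T) / T)
    t = np.arange(0.002, 0.008, T / 10)      # interior 6 ms, on a 10x finer grid
    recon = (samples * np.sinc((t[:, None] - n * T) / T)).sum(axis=1)

    err = np.max(np.abs(recon - np.sin(2 * np.pi * f * t)))
    print(f"worst-case reconstruction error: {err:.4f}")   # small, despite the finite record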

What if there is significant signal content at frequencies above half the sampling frequency? Let’s imagine a system that samples the input signal 20,000 times a second. We say the sampling frequency is 20 kilohertz, or 20 kHz. Let’s further say that this system is connected to a system to reconstruct the signal from the samples. Now let’s put a single frequency into the input, and see what we get at the output. If we put in 5 kHz, we get the same thing out. We turn the dial up towards 10 kHz, and we still get the same signal out as we put in. As we go above 10 kHz, a strange thing happens: the output frequency begins to drop. When we put 11 kHz in, we get a 9 kHz output. 12 kHz gives us an output of 8 kHz. This continues to happen all the way to 20 kHz, where we get a 0 Hz (DC) signal. 21 kHz in gives us 1 kHz out, 22 kHz in gives us 2 kHz out, etc.
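Here’s a tiny Python sketch (my own illustration; the dial-turning above is the real argument) that computes the apparent output frequency for that 20 kHz sampling rate:

    fs = 20_000.0  # sampling frequency, Hz

    def apparent_frequency(f_in, fs):
        # Frequency an ideal reconstructor reports for a sampled sine at f_in.
        f = f_in % fs                          # sampling can't tell f_in from f_in +/- fs
        return f if f <= fs / 2 else fs - f    # fold anything above fs / 2 back down

    for f_in in (5_000, 9_000, 11_000, 12_000, 20_000, 21_000, 22_000):
        print(f"{f_in:>6} Hz in -> {apparent_frequency(f_in, fs):>6.0f} Hz out")

Running it reproduces the numbers above: 11 kHz in comes out as 9 kHz, 21 kHz in comes out as 1 kHz, and so on.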

Engineers, trying to relate this situation to everyday life, said that, at frequencies over half the sampling frequency, the input signal appears at the output, but “under an alias”. Thus, since in English there is no noun that cannot be verbed, we get aliased signals and aliasing.

Aliasing is almost always a bad thing. If aliasing is present, it’s impossible to tell whether the signals at the output of the reconstructive device were part of the original signal, or were aliased down in frequency from somewhere else in the spectrum. Therefore, systems that sample, transmit or store, and reproduce time-varying signals in the real world, such as CDs, the audio part of DVDs, or telephone systems, place a filter in front of the sampler to diminish (attenuate is the engineering word) signal content at frequencies above half the sampling frequency. This filter is called an antialiasing (or AA, or, if you’re an engineer, “A-squared”) filter.
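For readers who like to experiment, here is a rough digital simulation of what that filter accomplishes (my own sketch using NumPy and SciPy; a real antialiasing filter in front of an analog-to-digital converter is an analog circuit, not code). A tone above half the output sampling frequency is attenuated before the samples are taken, so it can no longer masquerade as a lower frequency.

    import numpy as np
    from scipy.signal import butter, filtfilt

    fs_fine = 200_000.0    # stand-in for the "continuous" signal, Hz
    fs_out = 20_000.0      # the rate at which we actually sample, Hz
    t = np.arange(0, 0.01, 1 / fs_fine)

    # Content both below and above fs_out / 2 = 10 kHz
    x = np.sin(2 * np.pi * 5_000 * t) + 0.5 * np.sin(2 * np.pi * 14_000 * t)

    # Low-pass "antialiasing" filter with its corner below 10 kHz
    b, a = butter(6, 9_000, btype="low", fs=fs_fine)
    x_filtered = filtfilt(b, a, x)

    step = int(fs_fine / fs_out)
    naive_samples = x[::step]            # the 14 kHz tone aliases down to 6 kHz
    clean_samples = x_filtered[::step]   # the 14 kHz tone is largely gone before sampling

Sampling the unfiltered signal lets the 14 kHz tone reappear at 6 kHz; sampling the filtered signal does not.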

Robert Heinlein was fond of pointing out that “there ain’t no such thing as a free lunch.” The antialiasing filter is a good example. Real filters, the ones made up of resistors, capacitors, inductors, and amplifiers, can’t filter out a set of frequencies without affecting the characteristics of those they’re trying to pass unchanged. When Sony introduced the Compact Disc with the slogan “Perfect Sound Forever,” audiophiles raised a chorus of complaints about the sound quality. While the analog-to-digital conversion in the recording studio (with its fourteen to sixteen bits of resolution and lower accuracy), the digital-to-analog conversion in the player, and the sampling jitter (both before the analog-to-digital and after the digital-to-analog conversion) all shared some of the blame, suspicions were focused early on the antialiasing filters, both the ones before the ADCs and the ones after the DACs. Over time, the analog filters got better, but the biggest improvements had to wait until the advent of oversampling. More on that in the last post in this series, where I will speculate on a possible way out of the current photographic antialiasing morass.
