
Output antialiasing

January 16, 2011 JimK

Let’s review conventional sampling theory. We start with a continuous representation (the real world, as imaged by the lens), filter that to remove spectral components above half the sampling frequency, sample at evenly spaced infinitesimal points, digitize the results, and store them. To reconstruct the input we take the samples, recreate their values and locations, pass them through a filter, and obtain a continuous result. The filter in the former sentence is the input antialiasing filter; the filter in the latter sentence is the output antialiasing filter.
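
For reference, the reconstruction half of that pipeline has a closed form; this is the standard Whittaker–Shannon interpolation formula for samples taken at interval T from a properly bandlimited input:

```latex
% Ideal reconstruction of a continuous signal from its samples f(nT),
% assuming the input was bandlimited below the Nyquist frequency 1/(2T):
f(t) \;=\; \sum_{n=-\infty}^{\infty} f(nT)\,
\operatorname{sinc}\!\left(\frac{t - nT}{T}\right),
\qquad
\operatorname{sinc}(x) \;=\; \frac{\sin(\pi x)}{\pi x}
```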

In my previous posts, I explained that digital photography as presently practiced does those things quite differently from what the theory says. Our input antialiasing filters are either nonexistent or different from the ones recommended by sampling theory. Our sensors have light-sensitive areas that approach the square of the pitch of the sampling array. And, worst of all, we usually sample different parts of the input spectrum at different places in the image.
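
To put a number on how far that box-shaped photosite is from an ideal low-pass filter, here is a minimal sketch (my own illustration, not from the post), using the standard result that a box aperture of width p has an MTF of |sinc(f * p)|:

```python
import numpy as np

# Sketch: the aperture MTF of a square photosite whose active width
# equals the sampling pitch p. A box of width p has MTF |sinc(f * p)|
# (normalized sinc), a standard result; the numbers below are mine,
# not from the post.
p = 1.0                       # pixel pitch, arbitrary units
f_nyquist = 0.5 / p           # half the sampling frequency
mtf = abs(np.sinc(f_nyquist * p))
print(f"Box-aperture MTF at Nyquist: {mtf:.3f}")   # ~0.637
# Far from the brick-wall low-pass the theory asks for: plenty of
# energy above Nyquist survives to alias.
```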

Similarly, upon output to either a screen or a printer there is no explicit antialiasing filter, unless you count device-dependent sharpening as antialiasing. We’ll come back to that. In inkjet printers, the spread and overlap of the ink droplets on the paper usually produce a continuous image, even when examined under a loupe. With screens, monitors, and dye sublimation printers, we pick viewing positions sufficiently distant that the eye can’t resolve the individual pixels, thus giving the illusion of a continuous output. As is the case with input antialiasing, what we do on output is hardly ideal, but it works fairly well in practice.
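
As a rough worked example of that viewing-distance argument, here is a sketch assuming one arcminute of visual acuity (a common rule of thumb; the post itself gives no figure):

```python
import math

# Sketch: minimum viewing distance at which individual pixels blend
# together, assuming one arcminute of visual acuity (a common rule of
# thumb, not a figure from the post).
ACUITY_RAD = math.radians(1 / 60)       # one arcminute, in radians

def min_viewing_distance_m(ppi: float) -> float:
    """Distance at which one pixel subtends one arcminute, in meters."""
    pitch_mm = 25.4 / ppi               # pixel pitch in millimeters
    return pitch_mm / math.tan(ACUITY_RAD) / 1000

print(f"100 ppi monitor: {min_viewing_distance_m(100):.2f} m")
print(f"300 ppi print:   {min_viewing_distance_m(300):.2f} m")
# Roughly 0.87 m and 0.29 m: stand farther back than that and the
# square pixels read as a continuous image.
```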

It is instructive to consider the form of the optimum interpolation function for converting a sampled image to a continuous one, even if it’s not commonly used in photography. The right function is (math alert!) the normalized sinc function, defined in one dimension as sinc(x) = sin(pi * x) / (pi * x). You can get a two-dimensional version by spinning the one-dimensional one about the origin (strictly speaking, the ideal kernel for a square sampling grid is the separable product sinc(x) * sinc(y), but the radially symmetric version is close enough to convey the shape). What you get is a curve that looks kind of like an Airy disk, with a big haystack in the middle, two pixels wide, surrounded by rings of decreasing amplitude. The curve actually goes negative in the regions from one to two, three to four, five to six, etc. pixel pitches away from the center. Thus the optimum interpolation function has both a low-pass (blurring) characteristic, in the central haystack and the positive annuli, and a high-pass (sharpening) quality, in the negative annuli.
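
To see those negative lobes in action, here is a minimal one-dimensional sketch of sinc reconstruction; it is illustrative only, since, as noted, nothing in the photographic chain actually does this:

```python
import numpy as np

# Sketch: ideal one-dimensional sinc reconstruction of a sampled
# signal, per the formula above. Illustrative only; nothing in the
# photographic chain actually does this.
def sinc_interpolate(samples: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Reconstruct values at positions x, in units of the sample pitch."""
    n = np.arange(len(samples))
    # np.sinc is the normalized sinc: sin(pi*x)/(pi*x)
    return np.array([np.sum(samples * np.sinc(xi - n)) for xi in x])

samples = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # one bright pixel
x = np.linspace(0.0, 4.0, 33)
recon = sinc_interpolate(samples, x)
print(f"{recon.min():.3f}")   # about -0.212: the negative lobes one to
                              # two pitches from center do the sharpening
```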

There are several common interpolation functions that approximate, to a greater or lesser degree, the sinc function. The one effectively used on LCD displays is called the square function, in which the pixels become squares with sides as close to the pixel pitch as possible. It’s not a very close approximation, and it produces lots of high-frequency artifacts (usually referred to as “jaggies”) when you’re close enough to the display to start to make out the pixels. The next step up is the triangle function, where the intermediate values between the stored pixels are obtained by linear interpolation. This interpolation is available in Photoshop for image resizing; it’s called “bilinear” in the drop-down menu. Skipping a little-used interpolation function, we come to the cubic B-spline, which has a central haystack somewhat broader than the sinc function’s, goes to zero two pixel pitches away from the center, and never goes negative. This function, or something like it (I can’t tell; Adobe is not very forthcoming about its processing algorithms), is also available in Photoshop, where it’s called “bicubic”. Photoshop offers two additional versions of bicubic interpolation beyond the vanilla one that’s been there since day one: one apparently tweaked for enlarging images and one for reducing them. I have no idea what math is behind them.
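
For concreteness, here is a sketch of the three kernels just described, using the textbook cubic B-spline; whether Photoshop’s bicubic matches it exactly is, as noted above, unknown:

```python
import numpy as np

# Sketch: the three one-dimensional kernels discussed above, with x in
# units of the pixel pitch. The cubic B-spline is the textbook kernel;
# whether Photoshop's "bicubic" matches it exactly is unknown.
def square(x):
    return np.where(np.abs(x) <= 0.5, 1.0, 0.0)      # pixels as squares

def triangle(x):
    return np.maximum(0.0, 1.0 - np.abs(x))          # linear ("bilinear")

def cubic_bspline(x):
    a = np.abs(x)
    return np.where(a < 1.0, 2.0 / 3.0 - a**2 + a**3 / 2.0,
           np.where(a < 2.0, (2.0 - a)**3 / 6.0, 0.0))

for r in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(r, square(r), triangle(r), cubic_bspline(r))
# The B-spline reaches zero two pitches out and never goes negative,
# so it blurs without any of the sinc kernel's built-in sharpening.
```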

Now you see why we have to sharpen our images for best results. The input antialiasing filter, if it’s there, causes blurring. The finite size of the light receptor photosites does the same. None of the conventional reconstruction techniques, whether for printers or displays, have the sharpening associated with the ideal output antialiasing filter. We have to make up for those deficiencies, and, with no scientific tools available to us, we fiddle with unsharp masking until we get close.
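
A minimal sketch of that unsharp-masking fix, with arbitrary radius and amount values standing in for the fiddling:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch: unsharp masking as the ad hoc stand-in for the missing
# high-pass half of the ideal output filter. The radius and amount
# values here are arbitrary -- the knobs we fiddle with.
def unsharp_mask(img: np.ndarray, radius: float = 1.0,
                 amount: float = 0.8) -> np.ndarray:
    blurred = gaussian_filter(img, sigma=radius)     # the low-pass "mask"
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

rng = np.random.default_rng(0)
test = rng.random((8, 8))           # stand-in for image data in [0, 1]
sharpened = unsharp_mask(test)
```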
