the last word

Photography meets digital computer technology. Photography wins -- most of the time.


Output antialiasing

January 16, 2011 · JimK

Let’s review conventional sampling theory. We start with a continuous representation (the real world, as imaged by the lens), filter it to remove spectral components above half the sampling frequency (the Nyquist frequency), sample at evenly spaced infinitesimal points, digitize the results, and store them. To reconstruct the input we take the samples, recreate their values and locations, pass them through a filter, and obtain a continuous result. The filter in the first step is the input antialiasing filter; the filter in the reconstruction step is the output antialiasing filter.
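In textbook form, that reconstruction step is the Whittaker–Shannon interpolation formula; with sample spacing T and stored samples x[n], the continuous signal comes back as a sinc-weighted sum (a standard result, stated here for reference):

```latex
x(t) = \sum_{n=-\infty}^{\infty} x[n]\,
       \operatorname{sinc}\!\left(\frac{t - nT}{T}\right),
\qquad
\operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}
```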

In my previous posts, I explained that digital photography as presently practiced does those things quite differently from what the theory says. Our input antialiasing filters are either nonexistent or different from the ones recommended by sampling theory. Our sensors have light-sensitive areas that approach the square of the pitch of the sampling array. And, worst of all, we usually sample different parts of the input spectrum at different places in the image.

Similarly, upon output to either a screen or a printer there is no explicit antialiasing filter, unless you count device-dependent sharpening as antialiasing. We’ll come back to that. In inkjet printers, the spread and overlap of the ink droplets on the paper usually produce a continuous image, even when examined under a loupe. With monitors and dye-sublimation printers, we pick viewing positions sufficiently distant that the eye can’t resolve the individual pixels, thus giving the illusion of a continuous output. As is the case with input antialiasing, what we do on output is hardly ideal, but it works fairly well in practice.
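To put a number on “sufficiently distant” (my illustration, not part of the original argument): assuming the eye resolves about one arcminute, a quick calculation gives the distance beyond which a pixel blends into its neighbors.

```python
import math

def min_viewing_distance_mm(pixel_pitch_mm, eye_limit_arcmin=1.0):
    """Distance at which one pixel subtends the eye's resolution limit.

    Assumes a ~1 arcminute acuity limit; real viewers and real
    content vary, so treat this as an order-of-magnitude estimate.
    """
    theta = math.radians(eye_limit_arcmin / 60.0)  # arcmin -> radians
    return pixel_pitch_mm / math.tan(theta)

# A 100 ppi monitor has a 0.254 mm pixel pitch:
print(round(min_viewing_distance_mm(25.4 / 100)))  # ~873 mm, about arm's length
```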

It is instructive to consider the form of the optimum interpolation function for converting a sampled image to a continuous one, even if it’s not commonly used in photography. The right function is (math alert!) the normalized sinc function, defined in one dimension as sinc(x) = sin(pi * x) / (pi * x). For a square sampling grid, the two-dimensional version is the separable product sinc(x) * sinc(y); spinning the one-dimensional function about the origin gives a similar, circularly symmetric kernel. Either way, what you get looks kind of like an Airy disk, with a big haystack in the middle, two pixels wide, surrounded by rings of decreasing amplitude. The curve actually goes negative in the regions from one to two, three to four, five to six, etc. pixel pitches away from the center. Thus the optimum interpolation function has both a low-pass (blurring) characteristic, in the central haystack and the positive annuli, and a high-pass (sharpening) quality, in the negative annuli.
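A minimal numerical sketch of sinc interpolation in one dimension (mine, not from the post); numpy’s np.sinc is already the normalized form defined above:

```python
import numpy as np

def sinc_interpolate(samples, upsample=8):
    """Reconstruct a densely sampled curve from unit-spaced samples
    using the normalized sinc kernel (ideal band-limited interpolation)."""
    n = np.arange(len(samples))                       # input sample positions
    t = np.linspace(0, len(samples) - 1, len(samples) * upsample)
    # np.sinc is the normalized form sin(pi*x)/(pi*x) used in the post
    weights = np.sinc(t[:, None] - n[None, :])        # one row per output point
    return t, weights @ np.asarray(samples)

# The negative lobes are real: sinc is below zero between 1 and 2,
# 3 and 4, ... pixel pitches from the center.
print(np.sinc(1.5), np.sinc(3.5))   # -0.2122..., -0.0909...
```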

There are several common interpolation functions that approximate, to a greater or lesser degree, the sinc function. The one effectively used on LCD displays is called the square function, in which the pixels become squares with sides as close to the pixel pitch as possible. It’s not a very close approximation, and it produces lots of high-frequency artifacts (usually referred to as “jaggies”) when you’re close enough to the display to start to make out the pixels. The next step up is the triangle function, where the intermediate values between the stored pixels are obtained by linear interpolation. This interpolation is available in Photoshop for image resizing; it’s called “bilinear” in the drop-down menu. Skipping a little-used interpolation function, we come to the cubic B-spline, which has a central haystack somewhat broader than the sinc function’s, goes to zero two pixel pitches away from the center, and never goes negative. This function, or something like it (I can’t tell; Adobe is not very forthcoming about its processing algorithms), is also available in Photoshop, where it’s called “bicubic”. Photoshop also offers two additional versions of bicubic interpolation beyond the vanilla one that’s been there since day one: one tweaked for enlarging (Bicubic Smoother) and one for reducing (Bicubic Sharper). I have no idea what math is behind them.
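To make the comparison concrete, here are one-dimensional versions of the kernels just described, written out as plain functions (my sketch; as noted above, Photoshop’s exact coefficients are not published, so the B-spline here is the textbook one):

```python
import numpy as np

def square(x):            # nearest neighbor: the LCD "square" function
    return np.where(np.abs(x) < 0.5, 1.0, 0.0)

def triangle(x):          # linear interpolation ("bilinear" in Photoshop)
    x = np.abs(x)
    return np.where(x < 1.0, 1.0 - x, 0.0)

def cubic_b_spline(x):    # broader haystack, zero beyond 2, never negative
    x = np.abs(x)
    return np.where(
        x < 1.0, (4.0 - 6.0 * x**2 + 3.0 * x**3) / 6.0,
        np.where(x < 2.0, (2.0 - x) ** 3 / 6.0, 0.0),
    )

# Compare each kernel against the ideal sinc at a few distances from center;
# note that at x = 1.5 only the sinc goes negative.
for x in (0.0, 0.5, 1.5):
    print(x, square(x), triangle(x), cubic_b_spline(x), np.sinc(x))
```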

Now you see why we have to sharpen our images for best results. The input antialiasing filter, if it’s there, causes blurring. The finite size of the photosites does the same. None of the conventional reconstruction techniques, whether for printers or displays, has the sharpening associated with the ideal output antialiasing filter. We have to make up for those deficiencies, and, with no rigorous tools available to us, we fiddle with unsharp masking until we get close.
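Unsharp masking itself is simple enough to write down. A minimal sketch, assuming scipy’s Gaussian blur as the low-pass step; radius and amount are the knobs we fiddle with:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=1.0, amount=0.8):
    """Classic unsharp mask: add back a scaled copy of the detail
    the blur removed. Larger radius sharpens coarser structure;
    larger amount sharpens more strongly."""
    blurred = gaussian_filter(image.astype(np.float64), sigma=radius)
    sharpened = image + amount * (image - blurred)
    return np.clip(sharpened, 0.0, 1.0)   # assumes image scaled to [0, 1]
```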
