A couple of weeks ago, I presented the results of a lens simulation that I wrote. That simulator ignored phase effects in the frequency domain. I have since created a successor that uses Fourier optics and is much more accurate in presenting the results of combinations of aberrations. But the new simulator has its own set of limitations, and I want to be — ahem — transparent about them.
- The simulation is fundamentally monochromatic for each color plane. It computes PSFs for the red, green, and blue channels independently, building each channel as an incoherent sum over 31 wavelengths from 400 nm through 700 nm in steps of 10 nm, but it does not model true broadband chromatic behavior or spectral dispersion through optical materials. The system assumes a paraxial geometry and does not perform full ray tracing through thick, multi-element optics. As a result, it is best suited for moderate aberrations whose wavefront error can be meaningfully expressed in a Zernike basis.
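  As a sketch of what "monochromatic per color plane" means in practice, each channel's PSF can be formed as an incoherent, weighted sum over the sampled wavelengths. The names `channel_weights` and `monochromatic_psf` below are hypothetical placeholders, not the simulator's actual code:

  ```python
  import numpy as np

  # 31 sample wavelengths: 400-700 nm in 10 nm steps
  wavelengths_nm = np.arange(400, 701, 10)

  def channel_psf(channel_weights, monochromatic_psf):
      """Incoherent, weighted sum of single-wavelength PSFs for one channel.

      channel_weights   : 31 relative sensitivities, one per wavelength
      monochromatic_psf : function(wavelength_nm) -> 2-D PSF array
      """
      psf = sum(w * monochromatic_psf(lam)
                for w, lam in zip(channel_weights, wavelengths_nm))
      return psf / psf.sum()  # normalize to unit energy
  ```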
- Aberrations are modeled as static phase shifts across a uniform, circular pupil. This approach correctly captures diffraction in the scalar approximation by applying a Fourier transform to the complex pupil function. However, it assumes the aperture is perfectly circular and uniformly illuminated. Effects due to aperture blade shape, central obstructions, vignetting, or falloff toward the field edge are not included. Nor are effects from polarization, coherence, or vector diffraction.
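  The pupil-to-PSF step itself is standard scalar Fourier optics. Here is a minimal, self-contained sketch; the grid size and pupil fill factor are illustrative choices, not the simulator's settings:

  ```python
  import numpy as np

  def psf_from_pupil(wavefront_waves, n=512, fill=0.5):
      """Scalar-diffraction PSF: squared magnitude of the FT of the pupil.

      wavefront_waves : 2-D Zernike-built wavefront error over the grid, in waves
      fill            : pupil radius as a fraction of the grid half-width
                        (the zero padding around the pupil sets PSF sampling)
      """
      y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
      rho = np.hypot(x, y) / fill
      aperture = (rho <= 1.0).astype(float)      # uniform circular pupil
      pupil = aperture * np.exp(2j * np.pi * wavefront_waves)
      field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
      psf = np.abs(field) ** 2
      return psf / psf.sum()
  ```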
- PSFs are computed on a discrete grid of field points. Each tile of the image receives a different PSF depending on its position, and the PSF is assumed to be spatially invariant within that tile. This tiling approximates field dependence, but it cannot capture continuous PSF variation or higher-order interactions between aberrations and field angle, and it does not model isoplanatic breakdown or spatially varying convolution kernels within a tile.
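  A bare-bones version of the tiling idea might look like the following; `psf_for_tile` is a hypothetical lookup, and a real implementation would overlap or feather tile boundaries rather than butting them together:

  ```python
  import numpy as np
  from scipy.signal import fftconvolve

  def tiled_convolve(image, psf_for_tile, tile=256):
      """Approximate a spatially varying blur: convolve each tile with its
      own PSF, treating the PSF as shift-invariant inside the tile."""
      out = np.empty_like(image)
      for r in range(0, image.shape[0], tile):
          for c in range(0, image.shape[1], tile):
              block = image[r:r+tile, c:c+tile]
              out[r:r+tile, c:c+tile] = fftconvolve(
                  block, psf_for_tile(r // tile, c // tile), mode="same")
      return out
  ```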
- Although defocus is parameterized in microns of image-plane shift and scaled appropriately using the f-number and wavelength, the simulation does not directly derive defocus or depth-of-field behavior from lens geometry or object distance. Similarly, while lateral and longitudinal chromatic aberrations are modeled, they are applied as simple shifts and focal displacements rather than emerging from physical ray paths through refractive elements.
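  For concreteness, here is a sketch of that kind of scaling, using the textbook paraxial relation between a longitudinal image-plane shift Δz and peak defocus wavefront error, W020 = Δz / (8N²), with N the f-number; dividing by the wavelength gives the error in waves:

  ```python
  def defocus_waves(dz_um, f_number, wavelength_um):
      """Peak defocus wavefront error, in waves, for a longitudinal
      image-plane shift dz: W020 = dz / (8 N^2), divided by wavelength.
      Textbook paraxial relation, shown here for illustration."""
      return dz_um / (8.0 * f_number**2 * wavelength_um)

  # Example: a 20 um shift at f/4 in green light is about 0.28 waves
  print(defocus_waves(20.0, 4.0, 0.55))   # -> 0.284...
  ```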
- Sensor-level effects such as pixel shape, sampling, color filter arrays, noise, and readout behavior are also excluded. The simulation operates entirely in linear light, applying PSFs by convolution on an upsampled grid and downsampling afterward. While this enables good spatial resolution and visual realism, it does not account for demosaicing, sensor integration, or tone mapping. Likewise, artifacts like flare, ghosting, and scattering are not modeled.
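  In sketch form, with nearest-neighbor upsampling and box-filter downsampling standing in for whatever resampling the real pipeline uses:

  ```python
  import numpy as np
  from scipy.signal import fftconvolve

  def blur_supersampled(image, psf, k=4):
      """Convolve in linear light on a k-times-upsampled grid, then
      box-average back down. The PSF must be sampled at the fine pitch."""
      hi = np.kron(image, np.ones((k, k)))       # naive k-times upsample
      hi = fftconvolve(hi, psf, mode="same")     # apply the PSF
      h, w = hi.shape
      return hi.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
  ```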
- Sensor sampling window functions are not modeled, which causes the three issues below.
  - Aliasing is misrepresented: I’m showing band-limited PSFs, but not how they would interact with discrete sampling. Real sensors perform spatial averaging over each pixel, attenuating high-frequency content more than a delta-function sample would.
  - Contrast is overestimated: especially for fine PSF structure near the cutoff frequency, contrast appears higher than it would be in a real captured image.
  - Chromatic aliasing is unfiltered: lateral chromatic aberration or diffraction-induced color fringes are shown in full detail, but in practice, the boxcar convolution from sensor pixels would blur them.
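  The attenuation behind all three issues is easy to quantify: a square pixel with 100% fill factor acts as a boxcar filter whose MTF is a sinc in spatial frequency, already down to about 0.64 at Nyquist. A small sketch:

  ```python
  import numpy as np

  def pixel_aperture_mtf(f_cyc_per_mm, pitch_mm, fill=1.0):
      """MTF of a square pixel aperture (boxcar): |sinc(f * w)|, where w is
      the active width. Note np.sinc(x) = sin(pi*x) / (pi*x)."""
      return np.abs(np.sinc(f_cyc_per_mm * pitch_mm * fill))

  # At Nyquist, f = 1 / (2 * pitch); with 100% fill, |sinc(0.5)| = 2/pi
  print(pixel_aperture_mtf(1 / (2 * 0.004), 0.004))   # 4 um pitch -> ~0.64
  ```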
- Overall, this simulation provides a physically plausible and computationally efficient way to explore the spatial effects of various low- to mid-order aberrations. It models diffraction through the Fourier transform of a complex pupil, and incorporates field-dependent phase errors expressed through Zernike polynomials. However, it stops short of full wave-optical or ray-traced modeling, and omits many of the nonlinear, time-varying, or sensor-specific phenomena that arise in real optical systems.