
MTF simulation: under the covers

May 23, 2014 · JimK

Yesterday I went back and looked at some of the simulation results, and grew suspicious that there were problems with the way the simulated camera was sampling the simulated target. As I dug deeper, I started to question some of my original assumptions, and started experimenting with changes. What I ended up with is close to, but not identical to, where I started. I will modify the posts of the last couple of days to reflect the small changes, but today I’d like to report on what I changed and why.

Warning: this will get detailed and geeky.

First off, let me explain how the simulator works. I start off with an RGB target image of about 12000×8000 pixels. I want to end up with a simulated sampled image that’s much smaller. How much smaller? Good question. Stay tuned.

Most of the simulation operates at the resolution of the target. I know the pixel pitch of the sampled image, and I can calculate the ratio of the size of the target to the size of the sampled image. That allows me to compute kernels to apply to the target in terms of the dimensions of the simulated sensor. The first thing that happens is that I construct a kernel to simulate diffraction, and apply it to the target. There are actually three kernels, one for each color plane, and they are computed assuming red light at 650 nm, green light at 550 nm, and blue light at 450 nm. Then I apply kernels to simulate lens defects. Then motion blur, if any. Then AA filters, if any. Then the sampling of the sensor – this is where the fill factor comes in. Then I do a point sample of the target image to the resolution of the sensor, assigning color planes to sensels with the RGGB Bayer CFA. Next, I add photon noise if desired. Photon noise is not a useful addition for slanted edge spatial frequency response analysis, since the algorithm averages out noise. Then I digitize at the desired precision; I’ve been using 14 bits for the MTF studies.
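For readers who want the gist of that chain in code, here is a rough sketch in Python/NumPy. It is illustrative only: the 4 µm pitch, the kernel half-width, the treatment of fill factor as a linear fraction of the pitch, and all the names are my assumptions for the sketch, and the real simulator differs in detail.

```python
# Rough sketch of the blur-then-sample chain: diffraction per color
# plane, fill-factor box filter, point sampling, RGGB CFA, quantization.
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import j1

def airy_kernel(f_number, wavelength_m, dx_m, half_width=64):
    """Diffraction (Airy) PSF sampled on the target grid; dx_m is the
    target pixel size in meters."""
    ax = np.arange(-half_width, half_width + 1) * dx_m
    xx, yy = np.meshgrid(ax, ax)
    x = np.pi * np.hypot(xx, yy) / (wavelength_m * f_number)
    x[x == 0] = 1e-12                        # avoid 0/0 at the center
    psf = (2.0 * j1(x) / x) ** 2
    return psf / psf.sum()                   # normalize to unit gain

def simulate_raw(target, ratio, f_number, fill=1.0, bits=14):
    """Blur each plane at its wavelength, apply a fill-factor box filter,
    point-sample on the sensor grid, apply an RGGB CFA, and quantize.
    `fill` is treated here as the linear fraction of the pixel pitch."""
    pitch_m = 4.0e-6                         # assumed sensor pitch
    dx_m = pitch_m / ratio                   # target pixel size
    planes = []
    for c, lam in enumerate([650e-9, 550e-9, 450e-9]):   # R, G, B
        p = fftconvolve(target[:, :, c],
                        airy_kernel(f_number, lam, dx_m), mode='same')
        w = max(1, int(round(fill * ratio))) # sensel aperture, target px
        box = np.ones((w, w)) / (w * w)
        planes.append(fftconvolve(p, box, mode='same'))
    sampled = np.stack(planes, axis=-1)[::ratio, ::ratio, :]
    raw = np.empty(sampled.shape[:2])
    raw[0::2, 0::2] = sampled[0::2, 0::2, 0]   # R
    raw[0::2, 1::2] = sampled[0::2, 1::2, 1]   # G
    raw[1::2, 0::2] = sampled[1::2, 0::2, 1]   # G
    raw[1::2, 1::2] = sampled[1::2, 1::2, 2]   # B
    return np.round(np.clip(raw, 0, 1) * (2**bits - 1))   # e.g. 14-bit
```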

Now I have a simulated raw file. I “develop” it with bilinear interpolation demosaicing, feed the resultant RGB image to Dr. Burns’ sfrmat3 function for the MTF analysis, and either save the results or compute indirect measures from the MTF curves, like MTF50 and MTF30, and save those measures.
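Bilinear demosaicing of an RGGB mosaic reduces to three small convolutions, and MTF50 or MTF30 is just the frequency of the first crossing of the SFR curve below the stated level. A sketch, again illustrative rather than the code I actually run:

```python
# Bilinear demosaic via the standard interpolation kernels, plus a
# crossing finder for MTF50/MTF30. Assumes the curve starts above `level`.
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    h, w = raw.shape
    r_m = np.zeros((h, w)); r_m[0::2, 0::2] = 1          # R sites
    b_m = np.zeros((h, w)); b_m[1::2, 1::2] = 1          # B sites
    g_m = 1.0 - r_m - b_m                                # G sites
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    return np.stack([convolve(raw * r_m, k_rb),          # interpolate R
                     convolve(raw * g_m, k_g),           # interpolate G
                     convolve(raw * b_m, k_rb)],         # interpolate B
                    axis=-1)

def mtf_crossing(freqs, mtf, level=0.5):
    """Frequency of the first crossing below `level` (0.5 -> MTF50)."""
    i = int(np.argmax(np.asarray(mtf) < level))
    return freqs[i-1] + (level - mtf[i-1]) * (freqs[i] - freqs[i-1]) \
           / (mtf[i] - mtf[i-1])
```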

What could go wrong?

The first thing that made me nervous was the amount of energy above the Nyquist frequency with diffraction-limited lenses at wide apertures. The second was instability in the MTF30 and MTF50 results when I ran series with close spacing between the f-stop or pixel-pitch values. With factors of 1.05 between the data points, I saw sawtooth ripples in the results. Both of those things made me think that I wasn’t sampling the target right.

Was the target resolution sufficiently greater than the sensor resolution? I recoded the sim so that the ratio of the two resolutions was an explicit input, and then ran series with diffraction-limited lenses and perfect AA-less sensors – the toughest cases – at ratios of 8, 16, 32, and 64. The results at 32 and 64 were substantially identical, but the other two were different. That told me that I needed a target resolution of at least 32 times the simulated sensor resolution. That was a surprise.
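The check itself is a simple sweep. In terms of the sketch above, and with make_target and mtf50_of as hypothetical stand-ins for target generation and the demosaic-plus-sfrmat3 analysis, it might look like this:

```python
# Convergence check: rerun the same scene at increasing oversampling
# ratios and watch when MTF50 stops changing. make_target and mtf50_of
# are hypothetical stand-ins, not functions from the actual simulator.
ratios = (8, 16, 32, 64)
mtf50 = {r: mtf50_of(simulate_raw(make_target(r), r, f_number=2.8))
         for r in ratios}
for lo, hi in zip(ratios, ratios[1:]):
    print(f"ratio {lo} -> {hi}: MTF50 change {mtf50[hi] - mtf50[lo]:+.4f}")
```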

The next thing to look at was sampling jitter. Before, the ratio between the target and the sensor could be anything, but I could only sample at integer target pixel indices. I thought about coming up with a way to interpolate between target pixels, but in the end I took a simpler approach: crop the target so that its dimensions are integer multiples of the sensor resolution times the target-to-sensor ratio, which I constrained to an integer power of two. Thus the distance, measured in target pixels, between the sensor samples is fixed. Once I’d eliminated the sampling jitter, the MTF50 instability went away.
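In code, the jitter-free scheme amounts to a crop plus fixed integer-stride indexing. A minimal sketch, with hypothetical names:

```python
# Crop the target to exactly sensor_dims * ratio so that sensor sample
# (i, j) always lands on target pixel (i * ratio, j * ratio), with no
# per-sample rounding and therefore no sampling jitter.
def crop_for_integer_sampling(target, ratio, sensor_h, sensor_w):
    th, tw = sensor_h * ratio, sensor_w * ratio   # required target size
    assert target.shape[0] >= th and target.shape[1] >= tw
    return target[:th, :tw]
```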

By the way, the target is already aliased. Here’s a close-up:

[Image: close-up of the target, showing aliasing (aliasedtgt)]

I still had a lot of aliasing. Was it real? I took a look at some of the simulated demosaiced images. Here’s one for an f/2.8 diffraction-limited lens on a sensor with 100% fill factor:

[Image: simulated demosaiced slanted edge, f/2.8 diffraction-limited lens, 100% fill factor (slantededgealiasing)]

Aliasing? I’ll say! Want to see it at 1% fill factor?

[Image: simulated demosaiced slanted edge at 1% fill factor (aliasedtgt1pct)]

Now that’s just silly, but I don’t think I can blame the SFR analysis for the aliasing.

So, at the end of all this technological navel gazing, I’m pretty much back where I started. I’ve tweaked the model to be more accurate, but my early results were darned close. At least my level of confidence has improved.

Why don’t other people see this much high-frequency stuff in their simulations? The ones I’m most familiar with are looking at raw channels, not demosaiced images, and that makes all the difference. Why don’t I look at raw channels, too? That’s not what I’m interested in; I want to know about aliasing in demosaiced images. I can’t print raw ones.


Comments

  1. Jack Hogan says

    May 26, 2014 at 7:25 am

    Jim, could what you call color aliasing above be simply chromatic aberrations created by the edge not being perfectly aligned in each of the ‘three’ raw channels?

    CA will result in a degraded MTF50 reading, but most raw converters are pretty adept at getting rid of it, thereby recovering some of that lost resolution.

    • Jim says

      May 26, 2014 at 7:30 am

      Jack, the simulated lens has no chromatic aberration. The sensels are perfectly aligned, too. Of course, they’re looking at different parts of the image because of the Bayer array, which is the source of the false color. If they were all looking at the same part of the image, as in a Foveon sensor, there wouldn’t be the same kind of false color.

      Jim

  2. Frans van den Bergh says

    December 7, 2014 at 7:06 am

    Hi Jim,

    I am glad you also saw that we need an unexpectedly high oversampling factor to produce accurate simulated images.

    My approach to sampling is to render each pixel in the (low-res) simulated image by generating a very large number of sampling points (roughly corresponding to the pixels in your high-res image, but not aligned to any grid) using an importance-sampling strategy.

    I arrived at this particular solution exactly because of the ridiculously high oversampling I saw with the grid-based approach – I went all the way up to 256x oversampling and still saw some improvement. Later I decided that the grid-based sampling is inefficient – this appears to be mostly because the diffraction PSF has infinite support, and any grid-based sampling (with diffraction simulated as a FIR filter) truncates it, distorting the MTF in an undesirable way. I particularly noticed that the diffraction MTF would end up with too much power (contrast in the MTF plot) near the very low frequencies; this is a direct result of using a finite-size FIR filter.

    The importance sampling approach allows me to take the same number of samples (corresponding to FIR filter taps), but to distribute them to balance the low and high frequency accuracy of the diffraction simulations. I have no real justification for this, other than that it allowed me to generate simulated images that produced measured MTF curves that agreed well with the expected analytical MTF functions. I suspect these differences are well below what is clearly visible in a simulated image anyway.
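    One common way to realize a grid-free scheme of this general kind is to draw sample points whose density follows the Airy intensity, for instance by rejection sampling. The toy sketch below illustrates only that idea; the density choice and every name in it are assumptions, not Frans’s implementation.

```python
# Toy rejection sampler: points whose density follows the Airy intensity.
# One possible realization of grid-free PSF sampling; illustrative only.
import numpy as np
from scipy.special import j1

def airy_intensity(r, lam, N):
    """Normalized Airy intensity; r in meters, N = f-number, peak = 1."""
    x = np.pi * r / (lam * N)
    x = np.where(x == 0, 1e-12, x)
    return (2.0 * j1(x) / x) ** 2

def sample_airy_points(n_pts, lam=550e-9, N=2.8, r_max=None, seed=0):
    r_max = r_max or 5 * 1.22 * lam * N        # cover the first few rings
    rng = np.random.default_rng(seed)
    pts = []
    while len(pts) < n_pts:
        xy = rng.uniform(-r_max, r_max, size=(8 * n_pts, 2))
        r = np.hypot(xy[:, 0], xy[:, 1])
        accept = rng.uniform(size=r.size) < airy_intensity(r, lam, N)
        pts.extend(xy[accept])
    return np.asarray(pts[:n_pts])             # equal-weight sample points
```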

    Just as a side note, the rather pronounced demosaicing false colour you are seeing can be suppressed by the more sophisticated demosaicing algorithms. Of course, they just make a different set of assumptions, and therefore mess up some other type of image feature. I can recommend libraw — they have a good selection of demosaicing methods to choose from.
