This is the first in a series of posts about constructing and using a bokeh simulator. If you want to easily navigate to the other parts, at the bottom of this post (below the comments) you’ll find pingback links to each of them.
I’ve been reading a lot of controversy about lens bokeh, and it occurred to me that it wouldn’t be too hard to build a program that simulated the effect of a lens with arbitrary bokeh on a real scene captured in focus. The inputs to the code would be the sharp image, an idealized or real out-of-focus (OOF) blur circle, and directions for scaling the blur circle. Eventually, the blur circle could be scaled with a depth map image, but initially, it would be sufficient to render the entire output image with a uniform blur circle.
The idea would be to allow people to see how real or imagined lenses rendered a wide variety of OOF backgrounds, so that they could select lenses appropriate for their chosen subject matter.
I limbered up my typing fingers today and wrote some Matlab code to implement a first cut at the project. I’ll show you some images, then I’ll discuss the limitations of what I’ve done.
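The core operation is simple: convolve the sharp image with a kernel sampled from the blur circle. My code is in Matlab, but here’s a sketch of the same idea in Python (the function and parameter names are mine, invented for illustration):

```python
import numpy as np
from scipy.signal import fftconvolve

def simulate_bokeh(sharp, psf):
    """Blur a sharp image with an out-of-focus PSF (blur circle).

    sharp: float image, shape (H, W) or (H, W, 3)
    psf:   2-D kernel sampled from the blur circle
    """
    kernel = psf / psf.sum()  # unit volume, so overall exposure is preserved
    if sharp.ndim == 2:
        return fftconvolve(sharp, kernel, mode="same")
    # convolve each color channel independently
    return np.stack(
        [fftconvolve(sharp[..., c], kernel, mode="same")
         for c in range(sharp.shape[-1])],
        axis=-1,
    )

def disk_psf(radius):
    """Ideal (perfectly uniform) blur circle -- the first-cut case."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    return (x**2 + y**2 <= radius**2).astype(float)
```

Scaling the kernel per-pixel from a depth map would turn this into a spatially varying convolution, which is considerably more expensive; the uniform-kernel version above is the easy first step.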
Here’s what I mean by 2-sigma and 1-sigma falloff in the above. With a 1-sigma falloff, the outermost edge of the apodizing element is one sigma out from the mean of the distribution at its center; with a 2-sigma falloff, the edge of the apodizer is two sigma away from the mean. The 1-sigma falloff is subtly different from the perfect blur circle, which was simulated for the images with the word “disk” in their captions. I have corrected for the different exposures required by the simulated apodized lenses.
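For concreteness, here’s a sketch (again in Python, not my actual Matlab) of how such an apodized blur circle can be built: a 2-D Gaussian whose edge sits the chosen number of sigma from the center, hard-windowed to a circle, and normalized to unit volume so the simulated exposures match:

```python
import numpy as np

def apodized_psf(radius, falloff_sigmas):
    """Blur circle with Gaussian apodization.

    radius:         blur-circle radius in pixels
    falloff_sigmas: how many sigma from center to edge
                    (1 = gentle falloff, 2 = steeper)
    """
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r2 = x**2 + y**2
    sigma = radius / falloff_sigmas
    psf = np.exp(-r2 / (2 * sigma**2))
    psf[r2 > radius**2] = 0.0   # circular window, not a square truncation
    return psf / psf.sum()      # unit volume: equal simulated exposure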
What are the limitations?
- I can’t accurately simulate blur circles so small that they approach the diffraction and aberration blurring. The above simulation assumes that all the blurring performed on the putative sharp image is from defocusing.
- I’m assuming that the nature of the OOF blur circle changes only by scaling as the lens is defocused. That appears to be materially the case in my testing of real lenses.
- The simulation is only accurate if the original image is sharp.
Where to go from here?
- I’m going to do some work with real lens OOF point spread functions.
- I’m going to try some other sharp images.
- If anyone has sharp images they’d like me to use, please get in touch with me.
- Eventually, I may get around to depth mapping. It’ll be crude at first.
I am open to suggestions.
CarVac says
Your Gaussian-blur-with-1-sigma “falloff” (windowing?) produces noticeable squares, because the farther you get from a true Gaussian, the less separable it is.
JimK says
I bet that’s because I forgot to apply a circular window. I’ll have a look. Thanks.
JimK says
That was it. Fixed now.
Mike Carper says
Have you tried this http://dofsimulator.net/en/
JimK says
No, I hadn’t. It looks like it always uses a disk. Thanks!
Brandon Dube says
I open sourced an aberration simulation module that would be useful here — https://github.com/brandondube/AberrationSwissArmyKnife
Because the sampling is defined in the exit pupil plane instead of the image plane, it would be somewhat of a pain to use it to generate PSFs to convolve with an image. It could be modified to be given the PSF-plane sampling and generate the pupil sampling from that, but you risk either missing a portion of the pupil entirely in the case of extreme undersampling, or inadequate zero padding to avoid the Gibbs phenomenon.
The relationship between defocus in the pupil/wavefront and the image plane is:
image_plane_defocus = 8*(F#)^2 * lambda * W020, where W020 is in units of waves (as is the case in my sim tool).
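[Editor’s note: as a quick worked example of that relation (my arithmetic, not output from Brandon’s tool):]

```python
def image_plane_defocus(f_number, wavelength, w020_waves):
    """Longitudinal image-plane defocus from the wavefront
    defocus coefficient W020, expressed in waves."""
    return 8 * f_number**2 * wavelength * w020_waves

# At f/2 with lambda = 0.55 um, one wave of W020 gives
# 8 * 4 * 0.55 = 17.6 um of image-plane defocus.
```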
Because the link between the pupil and the image plane units contains a factor of lambda, it is also very messy to do multi-wavelength simulation. Defined/given PSF plane sampling would also be useful in that regard.
These simulations tend to have very good sampling in the PSF plane – roughly nm domain rather than um domain. Because the pixel size in the PSF and image must share common sampling, it is not clear to me how to make them match. I think you are stuck grossly undersampling/oversampling one or the other.
You could add features like apodization, vignetting, etc. They are quite straightforward to do by modifying the XP.
Jerry Fusselman says
Very interesting. I’d love to see your results.
What this method can’t easily deal with, it seems, is the following:
1. OOF point light sources. In particular, the look when wells are full. This shows perhaps the biggest problem with your assumption that “the nature of the OOF blur circle changes only by scaling as the lens is defocused.” I mentioned this briefly in a reply to you a few hours ago on DPR.
2. Different defocusing at different distances from the camera, which is the sort of thing that can give a feeling of depth.
3. Being able to “see around” OOF objects in the foreground, because objects that would be in the plane of focus are being blurred too, so they won’t stand out.