Rolling my own autohalftoning

March 30, 2014 JimK

I’ve given up on Photoshop for halftone processing of the firehouse images. I can’t see what I’m doing: the software that resamples from file resolution to screen resolution has a bias in favor of black over white, so the images look darker than they should when the magnification is set so that they fill the screen. If I zoom in to 1:1, Photoshop does just fine displaying the pixels as they are in the image, but then I can’t see the overall effect. Lightroom displays the images approximately as they will print, but having to save the file and check it in Lightroom every time I wanted to see the effect of a change in settings was just too cumbersome.

In addition, Photoshop’s tools aren’t well suited to this kind of image manipulation. I couldn’t find a way to precisely move one of the two out-of-register images. I replaced the curves that went on top of the two image layers with a threshold layer, but I couldn’t set the threshold to fractions of an eight-bit value, even though the images were in 16-bit form.

Then I thought about what was numerically going on in Photoshop, and I realized that there was an important processing step that I wanted to control, but Photoshop wasn’t going to let me near it without some contortions.

So I decided to do the whole thing in Matlab.

First off, what am I doing in general? It’s easy to get a handle on the specifics, but I find that I’m better able to see the possibilities if I can clearly state the general case. I call what I’m doing autohalftoning. Halftoning is the process of going from a continuous-tone image to a binary one. There are many ways to do halftoning, the two most popular being screens (once real, now universally simulated) and diffusion dither. In both cases, some high-spatial-frequency signal is added to the image to assist in the conversion of the information from 8- or 16-bit depth (contone) to 1-bit depth (binary). So the “auto” part of my made-up word refers to using the high-frequency information in the image itself for dither. I’m not too proud to add noise if I need to, but so far, if I’m careful to stay away from the regions where the camera’s pattern noise is amplified, there seems to be no benefit.
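
The simplest way to see what the added signal does is to compare a plain threshold with a noise dither. Here’s a minimal Matlab sketch; it uses random noise rather than a screen or diffusion dither, and the input filename and noise amplitude are placeholders for illustration, not anything from my workflow:

    % Plain thresholding versus noise-dithered thresholding.
    % The high-frequency signal here is zero-mean uniform noise; the input
    % filename and noise amplitude are illustrative placeholders.
    img = im2double(imread('firehouse.tif'));
    if size(img, 3) > 1, img = rgb2gray(img); end       % work on one channel

    plain    = img > 0.5;                                % posterizes the tones
    dithered = (img + 0.5*(rand(size(img)) - 0.5)) > 0.5;  % dot density tracks the local tone

    imshowpair(plain, dithered, 'montage');              % compare the two binary renderings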

In Photoshop, I offset two versions of the same image from each other by a small number of pixels, took the difference, and thresholded the result to get a binary image at the printer driver’s native resolution, which I then printed.

To do an equivalent operation in Matlab, I convolve the input image with a kernel that looks like this:

[Figure: the 5×5 offset kernel]

which performs the offsetting and the subtraction simultaneously. To simulate the way Photoshop computes differences, I truncate the negative parts of the image. That’s a pretty crude way to introduce what could be a subtle nonlinearity, so I’ve developed more sophisticated extensions of that basic approach; more on that in another post. Then I threshold the resultant image at a number that relates to the content of the output of the convolution operation, and that’s the binary image.
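
To make that concrete, here’s a minimal Matlab sketch of the whole pipeline. The kernel taps (a +1 at the center and a -1 offset diagonally by two pixels), the input filename, and the threshold fraction are placeholders standing in for the figure and my actual settings:

    % Offset, subtract, truncate, and threshold.
    % Kernel taps, filename, and threshold fraction are illustrative placeholders.
    img = im2double(imread('firehouse.tif'));
    if size(img, 3) > 1, img = rgb2gray(img); end   % one channel

    k = zeros(5);
    k(3, 3) =  1;                 % the image itself
    k(1, 1) = -1;                 % minus a copy shifted two pixels in each direction

    d = conv2(img, k, 'same');    % offsetting and subtraction in one operation
    d = max(d, 0);                % truncate the negative parts, as Photoshop's difference does

    t = 0.1 * max(d(:));          % threshold tied to the content of the convolution output
    binary = d > t;               % the 1-bit image

    imwrite(binary, 'fh_offset_binary.tif');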

Here’s what you get with the kernel above:

[Figure: the firehouse image processed with the offset kernel]

It looks dull res’d down to screen size, but good coming out of the printer on a C-sized piece of paper.

The advantages of working in Matlab are:

  • Greater freedom in choosing the processing steps
  • Greater control over those steps
  • Perfect repeatability
  • Speed
  • Freedom from arbitrary bit depths; the interim images are all in double-precision floating point, and many of the processing variables are specified the same way.
  • The ability to do automatic ring-arounds to observe the effect of multiple variable changes (see the sketch after this list)
  • The ability to have the processing depend on the image content
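
Here’s a minimal sketch of what I mean by an automatic ring-around, sweeping two of the variables (the pixel offset and the threshold fraction). The parameter ranges, the kernel taps, and the filenames are illustrative placeholders, not the settings behind the print above:

    % Automatic ring-around over two processing variables.
    % Parameter ranges, kernel taps, and filenames are illustrative placeholders.
    img = im2double(imread('firehouse.tif'));
    if size(img, 3) > 1, img = rgb2gray(img); end

    offsets   = [1 2 3];          % diagonal shift, in pixels
    fractions = [0.05 0.1 0.2];   % threshold as a fraction of the convolution maximum

    for o = offsets
        k = zeros(2*o + 1);
        k(o+1, o+1) =  1;         % center tap
        k(1, 1)     = -1;         % shifted, negated copy
        d = max(conv2(img, k, 'same'), 0);
        for f = fractions
            binary = d > f * max(d(:));
            imwrite(binary, sprintf('ring_off%d_frac%03d.tif', o, round(1000*f)));
        end
    end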

Details to come.


Comments

  1. Royi says

    April 23, 2014 at 5:59 am

    Never heard someone mention MATLAB and speed in the same sentence :-).

    • Jim says

      April 23, 2014 at 8:16 am

      Probably not compared to optimized C++. Probably not compared to Python, either. There are many ways to write slow Matlab code; I usually find those ways when I first write an algorithm. Then I figure out how to change the 20% of the code that’s the bottleneck so that the whole thing runs fast enough. That means getting rid of for loops and if statements and using Matlab’s tricky matrix indexing features instead. It also means getting rid of assignments to intermediate variables that I used in debugging to see what’s going on at each stage. All that makes the code less readable, at least to me. However, I have to admit that I’m starting to get used to it. A bit.
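
      As a minimal illustration of that kind of rewrite (the array size and threshold value are arbitrary), here is the same thresholding step written with loops and with logical indexing:

          % The same threshold computed with nested loops and with logical indexing.
          % Array size and threshold value are arbitrary.
          d = rand(4000);                  % stand-in for a convolution result
          t = 0.5;

          binLoop = false(size(d));        % loop version: slow in interpreted Matlab
          for r = 1:size(d, 1)
              for c = 1:size(d, 2)
                  if d(r, c) > t
                      binLoop(r, c) = true;
                  end
              end
          end

          binVec = d > t;                  % vectorized version: a single array operation

          isequal(binLoop, binVec)         % true: identical results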

      Matlab is quite good at self-parallelizing and using lots of cores if you don’t use constructs that force it into single-threading.

      I guess fast is relative. For the things I do in Matlab that have Photoshop sequences performing the same algorithm, I find that Matlab is generally faster than the Photoshop actions.

      And if all else fails, there’s always the well-timed coffee break.

      Jim

    • Jim says

      April 24, 2014 at 9:52 am

      There’s another aspect of Matlab speed: the program’s rapacious appetite for memory. I run it mostly on a machine with 256 GB of RAM installed, 192 GB of it addressable by Win 7. I’ve never seen it start to swap on that machine, but I occasionally did on a 48 GB machine I used to run it on, and when it swapped, it slowed to such a crawl that I always hit Ctrl-C.

      Jim



Unless otherwise noted, all images copyright Jim Kasson.