
the last word

Photography meets digital computer technology. Photography wins -- most of the time.


Raw and film

September 8, 2015 JimK 9 Comments

I wrote a piece on the different worlds inhabited by those who consider raw files to be the reference for camera performance and those for whom the lodestone is a developed file. I posted it here, and put another version on the Sony alpha 7 forum at DPR. In the ensuing discussion, several people compared a raw file to the negative in chemical photography, and the developed file to the print.

It’s an appealing analogy. But the more I thought about it, the more it seemed to break down. Still, analogies are often useful – though sometimes treacherous – and I thought it might be interesting to spend some time with this one.

In hands-on black and white photography, where the photographer performs every step of the process after the exposure, the negative is the result of more user-controlled processing than the raw file. This is so because the photographer performs the development of the negative, and there are lots of options there. The choice of developer, whether prepackaged or mixed from the constituent chemicals, can have a drastic influence on the relationship of the finished negative to the latent image present after exposure but before development. The time and temperature of the development can have a similarly powerful effect; that’s the basis of the Zone System, and all other expose-for-the-shadows-develop-for-the-highlights procedures.

So it’s tempting to say the latent image is the analog to the raw file, and thus the arbiter of what the camera is doing. Not so fast. The choice of film emulsion is not made by the camera manufacturer, but by the photographer, and that choice has a great effect on the negative and the final print. In the digital world, it’s as if the camera came loaded with a particular kind of film, and would accept no other. With that adjustment, the latent image in black and white photography is comparable to the raw file.

In color photography, with some exceptions, the choice of the film is tied to the development chemistry and process, so you can say that, for a fixed camera/film pair, the developed negative (say, C-41) or ’chrome (say, E6) is the analog to the raw file.

In the case of the ’chrome, the developed film is also the equivalent of the developed raw file; the analog to digital raw development – demosaicing, white balancing, conversion to a CIE color space, deconvolution sharpening, etc. – is all collapsed into no processing at all.
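For digital-era readers, the raw development steps just listed can be sketched in a few lines of code. This is a toy illustration, not any real converter’s pipeline: the RGGB layout, the bilinear interpolation, and the identity color matrix are all simplifying assumptions.

```python
import numpy as np

def conv3x3(img, k):
    """3x3 convolution with edge padding (avoids a SciPy dependency)."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def develop(mosaic, wb_gains=(1.0, 1.0, 1.0), cam_to_cie=np.eye(3)):
    """Toy raw development: bilinearly demosaic an RGGB Bayer mosaic,
    apply white-balance gains, then a 3x3 matrix to a CIE-style space."""
    h, w = mosaic.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # R photosites
    masks[0::2, 1::2, 1] = True   # G photosites, even rows
    masks[1::2, 0::2, 1] = True   # G photosites, odd rows
    masks[1::2, 1::2, 2] = True   # B photosites
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.50, 1.0, 0.50],
                       [0.25, 0.5, 0.25]])
    rgb = np.zeros((h, w, 3))
    for c in range(3):
        known = np.where(masks[..., c], mosaic, 0.0)
        # Fill missing samples with a weighted mean of known neighbors.
        rgb[..., c] = (conv3x3(known, kernel) /
                       conv3x3(masks[..., c].astype(float), kernel))
    rgb *= np.asarray(wb_gains)              # white balance
    return rgb @ np.asarray(cam_to_cie).T    # color space conversion
```

On a uniform gray mosaic this sketch reproduces the flat field exactly; a real converter differs mainly in the sophistication of the interpolation and in using a measured, not identity, camera matrix.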

In color negative film, the equivalent of digital raw development is the printing process: picking a paper and chemistry, selecting a color pack, dodging, burning, exposing, developing, washing, drying, etc. However, this goes a little farther than the raw development process, since printing is usually considered an extra – and, for some, highly optional – step. In black and white chemical image-making, we can add the development of the negative to that list.

How does all that inform the discussion about how to decide how good a digital camera is at something?

Let’s see.

In the film world, if you want to see how sharp a lens is, you put the sharpest, finest-grain film you can get your hands on in the camera, take pictures of charts, and develop the negative. When it’s dry, you’ve got two choices. You can look at it under a microscope, which is the purest and most accurate way to measure the result. However, that’s not the way it was usually done. Instead, most people put the sharpest lens they had on their enlarger, made prints, and examined them with a loupe. Looking at the negative (or ’chrome) was sort of the equivalent of looking at the raw file, and looking at the print was more-or-less the analog of looking at a demosaiced file. It’s interesting that in the film era most people looked at the latter. I think the main reason is that most photographers had no other use for a microscope, which was not an inexpensive purchase.

In the old days it was important to know the relationship between exposure at any particular point on the film and the tonality in the final print or ’chrome. We established that by making successive exposures of gray cards at different settings. If you were in a hurry, you could photograph step wedges instead and get by with fewer exposures. If you were shooting ’chromes, you looked at the developed positives. That doesn’t help the analogy much, since with ’chromes there is no equivalent of the raw development step. With negatives, you read the negatives with a densitometer. Making prints was both unnecessary and a source of error. With color negatives, that’s the analog of looking at the raw files. With black and white, as we’ve seen above, it’s like looking at developed raw images with the operation of the raw developer under the complete control of the photographer. You can’t do that with a black-box raw developer like Lightroom and its twin, Adobe Camera Raw. You can’t do it with Capture One. But you can do it with DCRAW and, with careful choice of settings, with Iridient Developer and, I’ve been told, with RawTherapee. There’s a lesson there. Just as it would be a bad idea to judge negative tone curves by making prints, it’s similarly iffy to make such judgments about digital cameras with Lightroom.
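The digital equivalent of reading step wedges with a densitometer is to photograph a series of exposures a known number of stops apart, read the mean raw level of each patch, and fit level against log exposure. Here is a minimal sketch of that procedure with synthetic data – an imaginary, perfectly linear 14-bit sensor that clips at full scale – purely to show the shape of the measurement.

```python
import numpy as np

# Synthetic "step wedge": exposures from -6 EV to 0 EV, one stop apart,
# captured by an imaginary linear sensor that clips at 14-bit full scale.
exposure_stops = np.arange(-6, 1)
full_scale = 16383
raw_levels = np.minimum(full_scale, full_scale * 2.0 ** exposure_stops)

# A linear sensor gains one stop of raw level per stop of exposure
# below clipping, so the fitted slope in log2 space should be 1.0.
unclipped = raw_levels < full_scale
slope, intercept = np.polyfit(exposure_stops[unclipped],
                              np.log2(raw_levels[unclipped]), 1)
```

With real raw files the same fit exposes any departure from linearity (or any hidden tone curve a converter has applied), which is exactly why a neutral developer matters for this kind of testing.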

Especially with ’chromes, but to a lesser extent with color negative films, colors weren’t very malleable in the film era. Thus it became important to be able to judge the subtle color casts introduced by lens selection. In fact, if you were going to do a slide show with images from several different lenses, it was a good idea to make sure that the cast introduced by all the lenses was similar, so that your audience wasn’t jarred by color changes from slide to slide. If you wanted to test lenses for color rendering, you were well advised to use ’chromes; the vicissitudes of color printing made judging the changes difficult, and you couldn’t tell anything by looking at orange negatives. In the digital era, I don’t think such color differences among lenses are at all important, but there are those who disagree with me. For those people, the above analogy points in the direction of plain-vanilla raw converters like DCRAW and RawTherapee for such testing.

I could go on, but I think you get the point. Looking at raw files or all-knobs-visible-and-set-to-zero raw converters is the best way to figure out what your camera is doing. Trying to divine your camera’s characteristics through the variable and unstable lens of a raw developer like Lightroom or Capture One is asking for trouble.


Comments

  1. David Braddon-Mitchell says

    September 8, 2015 at 5:16 pm

    Hi Jim

    Thanks for that; it really drives me nuts (though I tell myself that there are much more important things to go nuts over) when people attribute things to their camera that are plainly under the control of the default profiles of their RAW programs. I use RPP when I want to do comparisons; it gives a level playing field.

    But what I was wondering was whether you have any opinion on whether there are any differences between the full-featured programs that can’t in principle (or perhaps easily) be replicated? People go on about how C1 is so much better than LR, or DXO is so much better than C1, etc., where by ‘better’ they are making claims about IQ. My hunch – and that’s all it is, backed up by desultory attempts to emulate the look of one in another – is that this is all down to defaults, and that, starting with one, you could copy the look of another (a possible exception being NR).

    But I know this is not 100% true — some years ago I used to sometimes use a compact, an early LX model, which was fairly noisy, and LR gave it a lot of irreversible NR (I think). In any case RPP gave, to my eyes, much nicer and sharper, albeit noisier, results than LR, no matter how much sharpening you tried in LR and no matter how much you reduced the NR.

    So how widespread do you think this is? And do you have any view on whether the full-featured processors are mutually emulable?

    cheers
    David

    • Jack Hogan says

      September 9, 2015 at 12:55 am

      Hi David,

      My 2 cents are that most current raw converters perform two separate and distinct functions: 1) load the data from a raw file into memory and render it into a standard RGB color space (demosaicing/profile), ready to be saved as a TIFF – this is what DCRAW does; and 2) lots and lots of editing (levels, curves, distortion/CA and other corrections, local contrast, NR, sharpening, etc.) – most other ‘featureful’ converters offer some or all of such adjustments and more.

      With very few exceptions, step 2) is not performed on the raw data but on the rendered image in memory, so it might as well be performed on data from a TIFF file. Therefore one can do this second step in any editor or plug-in of one’s choice with zero loss of IQ, whether that be the GIMP, PS, Topaz, Nik, or whatever.

      As long as converters give you access to the parameters that produce the in-memory TIFF of step 1), it’s relatively easy to make all converters look the same (just use the same black and white points, demosaicing algorithm, and profiles). But many converters (LR/ACR first among them) don’t give you access to them, purportedly to make things simpler and more automated, so it becomes a little harder to make two converters look the same. Then you start adding in advanced tweaking (adaptive profiles) and editing in step 2): not all converters have the same features and/or use the same algorithms, so results diverge and become much harder to match from one to the other – and converters develop their ‘look’. Each has its own starting philosophy: most pleasing (C1?), most accurate (DxO?), easiest (LR?)…
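      A minimal sketch of that step-1 matching, with made-up black and white points: two “converters” given the same parameters (and, here, a trivial profile) start from identical data.

```python
import numpy as np

def normalize(raw, black_level, white_level):
    """Map raw integers onto [0, 1] using the camera's black and white points."""
    scaled = (raw.astype(float) - black_level) / (white_level - black_level)
    return np.clip(scaled, 0.0, 1.0)

# Hypothetical 14-bit samples and levels; any two converters applying
# the same black/white points produce the same normalized data.
samples = np.array([512, 2048, 15871])
converter_a = normalize(samples, black_level=512, white_level=15871)
converter_b = normalize(samples, black_level=512, white_level=15871)
```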

      To each their own 🙂

      Jack

  2. Jack Hogan says

    September 9, 2015 at 12:10 am

    I hear you, Jim.

  3. David Braddon-Mitchell says

    September 10, 2015 at 4:22 am

    Just another quick thought about the analogy: perhaps in the case of B&W the latent image is the RAW file, and the negative is the default output of the RAW converter…

    (of course you can just look at the default output of the converter without further editing, whereas it took a lot of skill to ‘read’ a negative….)

  4. CarVac says

    September 10, 2015 at 8:42 am

    Did people ever argue what film was better as much as they do now with sensors? Was it just the lack of the Internet?

    I’m too young to know.

    • Jim says

      September 10, 2015 at 12:02 pm

      They sure did, and the arguments were sometimes heated. In truth, there were huge differences among film emulsions, much greater than among sensors of similar size and release date. Color films especially had quite different looks, with highly saturated ones like Fuji Velvia transparency film, and films optimized for Caucasian skin tones like Ektacolor Professional Short (the “short” referred to the optimum shutter speed or flash duration).

      Jim

      • CarVac says

        September 11, 2015 at 5:10 am

        Hmm, interesting.

        Sounds to me like arguing over choice of processing styles, as an analogy for digital. Calling things “Bad HDR” and things like that.

  5. Chris Livsey says

    September 10, 2015 at 11:40 pm

    Not only the film: the developers as well generated, and still do generate, passionate debate. There was a vogue for cross processing, in which E6 film (colour reversal transparencies) was processed in C41 (colour negative) chemistry and vice versa. It’s still seen in RAW programs under “effects” or similar.
    Developer mixtures were tweaked continuously to balance sharpness, contrast, and speed. Of course, some photographers, like Cartier-Bresson and Frank, fretted endlessly over this – not. Some, like Adams, did.
    plus ça change, plus c’est la même chose

  6. David Braddon-Mitchell says

    September 11, 2015 at 9:48 pm

    Yep, I went through a phase of photographing more step wedges than actual scenes in the world, and developing them in different ways to maximise DR or increase contrast… (or photographing charts and using brews to increase acutance…)

    There was a time when I was nostalgic for all that. Somehow that nostalgia is long gone now: an A7RII, an X-Rite camera and monitor calibrator, and a nice lab that I trust and can email a file to, and I get back almost exactly what I saw. It’s a different world! And this is at least one respect in which it’s a better one.






Copyright © 2025 · Daily Dish Pro On Genesis Framework · WordPress

Unless otherwise noted, all images copyright Jim Kasson.