


The 16-Bit Fallacy: Why More Isn’t Always Better in Medium Format Cameras

May 6, 2025 JimK 20 Comments

I’ve written on this subject before, but I’ve not done a piece that deals with the common counterarguments. Here is one.

The Fujifilm GFX 100-series and Hasselblad X2D cameras support 16-bit RAW files. At first glance, this seems like an obvious win: more bits should mean more data, more dynamic range, and more flexibility in post-processing. But in practice, the benefits of 16-bit precision over 14-bit are negligible for photographic applications. Here are the arguments often made in favor of 16-bit capture and why they don’t hold up under scrutiny.

1. Myth: 16-Bit Provides More Dynamic Range

A 16-bit file can, in theory, encode 96 dB of dynamic range versus 84 dB for 14-bit. However, the real-world dynamic range of medium format sensors is limited by photon shot noise and read noise, typically capping at around 14 stops (about 84 dB). Once quantization noise is well below the sensor’s analog noise floor, increasing bit depth adds no practical dynamic range.
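The quadrature arithmetic is easy to check. Here is a minimal Python sketch; the 50,000 e- full well and 3 e- read noise are assumed, representative figures, not measurements of any particular camera:

import math

full_well = 50_000   # electrons at clipping (assumed)
read_noise = 3.0     # electrons RMS (assumed)

for bits in (14, 16):
    step = full_well / 2 ** bits            # electrons per ADC code
    q_noise = step / math.sqrt(12)          # RMS quantization noise
    total = math.hypot(read_noise, q_noise) # noises add in quadrature
    dr_stops = math.log2(full_well / total)
    print(f"{bits}-bit: quantization noise {q_noise:.2f} e-, "
          f"total floor {total:.2f} e-, DR {dr_stops:.2f} stops")

Under these assumptions the two bit depths differ by a few hundredths of a stop, far below anything visible in a print.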

2. Myth: 16-Bit Prevents Banding in Edits

It is often claimed that more bits reduce banding in gradients during aggressive post-processing. But in RAW files, the tonal resolution of a 14-bit file already exceeds the eye’s ability to detect steps, especially once converted to a working color space and edited in a 16-bit pipeline. Any banding in real workflows is usually due to limitations in the output color space or lossy compression, not insufficient bit depth in the original capture. In addition, shot noise acts as a dither, smearing the quantization steps into the noise floor.
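The dither point can be made concrete with a minimal numpy sketch: quantize a smooth two-stop ramp after adding photon shot noise, using assumed signal levels:

import numpy as np

rng = np.random.default_rng(0)

signal = np.linspace(500, 2000, 1_000_000)  # smooth ramp, electrons (assumed)
shot = rng.poisson(signal).astype(float)    # photon shot noise dithers the ramp

full_well = 50_000                          # assumed clipping level
for bits in (14, 16):
    step = full_well / 2 ** bits
    quantized = np.round(shot / step) * step
    print(f"{bits}-bit: shot noise ~{np.sqrt(signal.mean()):.0f} e-, "
          f"quantization error {np.std(quantized - shot):.2f} e- RMS")

At 14 bits the quantization error comes out roughly forty times smaller than the shot noise, so no gradient step survives to become a band.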

3. Myth: 16-Bit is Better for Color Grading

While more bits may benefit extreme color grading in video or scientific imagery, photographic sensors do not generate color information with 16-bit fidelity. The signal is already quantized, and color differences in the bottom two bits of a 16-bit file are buried in noise. Color precision is far more influenced by lens transmission, sensor design, and spectral response than by bit depth.

4. Myth: 16-Bit is Needed for Future-Proofing

Some argue that 16-bit data ensures longevity in the face of evolving editing software or display technologies. But if the source data carries no meaningful information in the bottom bits, storing them is like preserving empty decimal places. 14-bit files already provide more granularity than is practically usable for current sensors.

5. Myth: Scientific or Industrial Applications Justify 16-Bit

Higher precision can be justified in specialized imaging tasks like fluorescence microscopy or machine vision, but those use cases have little in common with handheld photography. In those domains, exposure, temperature, and electronic noise are tightly controlled. In photography, the environment is uncontrolled and analog noise dominates.

Conclusion

The 16-bit RAW format in cameras like the GFX 100 series and Hasselblad X2D is more about marketing than measurable photographic benefit. While there is no harm in storing images in 16-bit format, it offers little to no advantage over 14-bit for dynamic range, tonal smoothness, or color accuracy. Photographers should base their expectations on physics and perceptual limits, not on file-format headlines.

 

[INT. STUDIO – Nigel is showing off his computer setup with a smug grin.]

Nigel:
This one here—this is the RAW file. Not just any RAW file. This one’s 16-bit.

Marty (the director):
Right. And what’s the advantage?

Nigel:
Well, most people shoot in 14-bit, right? You got your shadows, your highlights… but 14 bits only gives you 16,384 levels. This—this gives you 65,536.

Marty:
Uh huh. But isn’t the sensor noise floor higher than the 14-bit quantization? I mean, can you really see any difference?

Nigel (nods slowly):
No. But it’s two bits more, innit?

Marty:
Why not just process the data better at 14 bits?

Nigel (pause):
But this goes to sixteen.

Marty:
I see. So… it’s not actually capturing more detail?

Nigel:
Well, no—but when you say you shoot sixteen, people listen.

Marty:
Couldn’t you just make 14-bit better, and call that louder?

Nigel:
[beat]
But… these go to sixteen.


Comments

  1. Wedding Photographer in DC says

    May 7, 2025 at 1:19 pm

    The last bit gave me a chuckle. My husband would most certainly agree and I can almost hear him say “Told you so” haha

  2. bob lozano says

    May 9, 2025 at 2:44 am

    For my part, going to 11 is enough…

Seriously, thanks for the recap of the realities. If a time comes when there are adequate approaches to inferring or extrapolating another couple of bits of “precision”, the resulting extrapolated image could always be stored in 16 bits at that point. I have my doubts, however, since the human eye will be the ultimate arbiter of images, by definition.

  3. Javier Sanchez says

    May 13, 2025 at 10:07 am

Also worth noting that switching cameras like the GFX100S to 14-bit makes them significantly more enjoyable and usable, by noticeably decreasing the viewfinder blackout time between shots.

    • Stillton says

      July 24, 2025 at 8:38 am

Could it be because Fuji designed their GFX cameras as a 14-bit system, with 16 bits shoehorned in later, creating a bottleneck/performance issue?

      • JimK says

        July 24, 2025 at 8:42 am

Sony designed the sensor in all the 33x44mm Fuji and Hassy cameras. The 50 MP versions have 14 bits as the maximum precision. Hassy performs a color calibration process and adds two guard bits to get to 16 bits. Fuji plays it straight and just uses the 14-bit values. The 100 MP versions have 14- and 16-bit modes (as well as others).

  4. John Griffin says

    May 31, 2025 at 6:43 am

I can only see it being of any use if you were using the camera to scan color negs, where the captured/input tonal and color range is very small and needs to be heavily stretched to fit the output.

  5. Stillton says

    July 22, 2025 at 11:38 pm

Earlier digital H backs (and not only H backs) did not have gain applied to the sampled images. I just checked my H4D-50 files, and some ISO 50 files show heavy use of the top portion of the histogram (in RawDigger), while ISO 200 images are crammed into the left corner, taking 1/3 or 1/4 of the range that ISO 50 would take.

If this was sampled at 14 bits, it would have produced a noticeable difference after “digitally developing” the images. Having an additional 2 bits makes it possible to increase sampling precision by a factor of 4 in this specific case. Since it is an integer-based system, not a floating-point one, I think it would make a difference.

    • JimK says

      July 23, 2025 at 6:18 am

      The key issue is what was the read noise in those backs? My H series blads had so much read noise that the bottom five bits were useless.

      • Stillton says

        July 23, 2025 at 7:36 am

        “had so much read noise”
        In which conditions? Looking at those raw histograms, it seems that it will be less of a problem in some use cases, like properly exposed images.

I take it that the read noise is fairly stable. So, if I severely underexposed the image, I might get a lot of noise compared to the image. But what if I exposed correctly or used ETTR? How many bits would the noise take?

        Also, film grain is akin to “read noise”, yet no one seriously argues that since the grain is perceived in the final scanned image, we should scan it at lower bit depth.

Perhaps Hasselblad engineers thought the same. Besides, there isn’t much difference between 14- and 16-bit data in terms of occupied space on disk – 14% if no padding is used. (In RAM they are likely identical due to padding.)

        • JimK says

          July 23, 2025 at 9:02 am

          “had so much read noise” In which conditions?

20-25 degrees C. Shutter speeds faster than 1 second. Slower than one second, the RN gets worse.

          • JimK says

            July 23, 2025 at 9:05 am

I take it that the read noise is fairly stable. So, if I severely underexposed the image, I might get a lot of noise compared to the image. But what if I exposed correctly or used ETTR? How many bits would the noise take?

Read noise is unaffected by exposure. You can measure read noise with dark frames. So your proposal that you need more than, say, 14 bits for a camera whose 12th and less-significant bits are read noise is not a solution to any photographic problem.
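A standard way to do that is to difference two dark frames shot at identical settings, which cancels fixed-pattern noise. A minimal Python sketch (loading the raw files, e.g. with rawpy, is left out):

import numpy as np

def read_noise_dn(dark1, dark2):
    # Differencing two dark frames shot at identical settings cancels
    # fixed-pattern noise; sqrt(2) accounts for the noise of two frames
    # adding in quadrature.
    diff = dark1.astype(float) - dark2.astype(float)
    return float(np.std(diff) / np.sqrt(2))

# Hypothetical usage: frame_a and frame_b are 2-D arrays decoded from
# two raw dark frames.
# rn = read_noise_dn(frame_a, frame_b)
# print(f"read noise ~ {rn:.2f} DN")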

            • Stillton says

              July 24, 2025 at 8:35 am

Jim, I don’t think you understood what I was saying. It is possible that I simply communicated it poorly.

              For a given bit depth, the increase in exposure should lead to a more precise digitization of a specific value.

“So your proposal that you need more than, say, 14 bits for a camera whose 12th and less-significant bits are read noise is not a solution”

Let’s suppose that 0EV “brightness” is 12.207% of the maximum possible to register with a pixel/sensor. So, -1EV would be half that, +1EV double that, and +2EV quadruple 0EV. Ignore read noise for a moment.

Theoretically, digitization error would be something like this for a given bit depth and exposure level for that specific signal value:

bits   |  -1EV  |   0EV  |  +1EV  |  +2EV
8-bit  | 4.000% | 0.800% | 0.800% | 0.800%
10-bit | 0.800% | 0.800% | 0.400% | 0.200%
12-bit | 0.400% | 0.200% | 0.100% | 0.050%
14-bit | 0.100% | 0.050% | 0.025% | 0.012%
15-bit | 0.050% | 0.025% | 0.012% | 0.006%
16-bit | 0.025% | 0.012% | 0.006% | 0.003%

              So, ETTR yields the same improvement over normal exposure as more bit depth would. 1 bit corresponds to 1EV.
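The arithmetic behind the table can be sketched in a few lines of Python. One full LSB is used as the error quantum here, which is an assumption, so the figures need not match the table above exactly:

base = 0.12207  # 0EV signal as a fraction of full scale (from the comment)

print("bits | " + " | ".join(f"{ev:+d}EV" for ev in (-1, 0, 1, 2)))
for bits in (8, 10, 12, 14, 15, 16):
    # one-LSB error as a percentage of the signal at each exposure level
    row = [100 * (1 / 2 ** bits) / (base * 2 ** ev) for ev in (-1, 0, 1, 2)]
    print(f"{bits:4d} | " + " | ".join(f"{e:6.3f}%" for e in row))

Doubling the exposure and adding a bit both halve the relative step size, which is the 1 bit = 1 EV equivalence.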

              • JimK says

                July 24, 2025 at 8:40 am

                That is true, but doesn’t apply to the case where the read noise greatly exceeds the LSB of the ADC.

        • JimK says

          July 23, 2025 at 9:08 am

          I don’t think you understand the concept of dither as it relates to photography. Maybe this will help.

          https://blog.kasson.com/?s=dither

          • Stillton says

            July 24, 2025 at 8:36 am

            I am familiar with it. Wouldn’t read noise act as dither?

            • JimK says

              July 24, 2025 at 8:39 am

              That is the point. Once you have sufficient dithering, more noise won’t help.
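That is easy to simulate: quantize a noisy constant signal whose true value sits between ADC codes and see whether averaging recovers it. A minimal sketch with assumed numbers:

import numpy as np

rng = np.random.default_rng(1)

true_signal = 100.37   # electrons, deliberately between code values (assumed)
read_noise = 3.0       # electrons RMS, acting as dither (assumed)
full_well = 50_000     # assumed clipping level

frames = rng.normal(true_signal, read_noise, size=100_000)
for bits in (14, 16):
    step = full_well / 2 ** bits
    mean = np.mean(np.round(frames / step) * step)
    print(f"{bits}-bit: recovered mean {mean:.3f} e- (true {true_signal})")

With read noise near one 14-bit LSB, both bit depths recover the sub-code mean equally well; the dither is already sufficient.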

  6. KM says

    October 26, 2025 at 8:35 am

    Color Precision: Where 16-bit Really Matters. Some counterarguments

    Tonal Resolution vs Chromatic Resolution

The confusion stems from Kasson measuring primarily tonal resolution (the ability to distinguish brightness levels), while the 16-bit advantage concerns chromatic resolution (the ability to distinguish similar hues and saturations).

    In CIELab space used for Delta E measurements, a color is defined by three coordinates:
    – L* (lightness): 0-100
    – a* (green-red axis): -128 to +127
    – b* (blue-yellow axis): -128 to +127

In 14-bit, each RGB channel has 16,384 values. After RGB→Lab transformation, this translates to roughly 0.4 Delta E units resolution between adjacent values in critical color space regions.

In 16-bit, with 65,536 values, this resolution improves to 0.1 Delta E units.

    The human perception threshold is generally set at Delta E ≤ 1.0 (imperceptible difference) and Delta E ≤ 2.0 (barely perceptible). 14-bit thus operates near the perception limit in certain colorimetric regions.

    Critical Zones in Color Space

    1. Skin Tones (Memory Colors)

The human visual system is extraordinarily sensitive to skin tones – a “memory color” calibrated since birth. Psychovisual tests show that variations of Delta E 0.5-1.0 in skin tones are perceptible, while the same variation on a gray wall goes unnoticed.

    The skin tone region in Lab space is narrow:
    – L*: 50-80 (medium-light brightness)
    – a*: +10 to +25 (slightly red)
    – b*: +10 to +30 (slightly yellow)

In 14-bit, this zone contains approximately 200-300 distinct colors. In 16-bit: 800-1200 distinct colors. During adjustments (white balance, hue correction), 16-bit maintains imperceptible transitions where 14-bit can create subtle chromatic “steps.”

    2. Skies and Atmospheric Gradients

    A twilight sky presents continuous gradients from:
    – Saturated blue (zenith) → desaturated blue → mauve → orange → red (horizon)
    – L* variation ~20 units across 30-40% of the image
    – Simultaneous a* and b* variation (hue AND saturation change)

In 14-bit, this gradient contains ~200 distinct colors along the path. If post-processing applies a contrast curve (highlight compression), the number of available colors in the compressed zone can drop to 50-80 discrete values, creating visible banding.

    In 16-bit, even after compression, 200-300 values remain, maintaining perceptual continuity.

    3. Near-Maximum Saturation Zones

    At gamut limits (highly saturated colors), the color space “folds.” Non-linear transformations are most aggressive here. A saturated red (a* ≈ +80, b* ≈ +70) approaching sensor clipping undergoes complex transformations during Phocus rendering.

16-bit preserves more information in these extreme non-linearity zones, allowing the rendering engine to make more nuanced decisions on how to map out-of-gamut colors to the reproducible gamut.

    Colorimetric Transformations and Error Propagation

    The HNCS workflow isn’t a simple RGB→Lab conversion. It’s a complex chain:

1. Bayer demosaicing (interpolation from ~3 neighboring pixels)
2. Optical profile correction (channel-varying spatial transformations)
3. White balance (3×3 matrix multiplication)
4. Transformation to proprietary HNCS space (3D LUT, ~17³ = 4913 points)
5. User adjustments (curves, saturation, HSL)
6. Rendering to output space (P3, ProPhoto, sRGB – another 3D LUT)

Each step introduces rounding errors. In 14-bit, the cumulative error after 6 transformations can reach 2-3 Delta E units in worst cases. In 16-bit, this error drops to 0.5-1.0 units – the difference between perceptible and imperceptible banding.
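Whether errors actually accumulate like this is at least testable in a toy model. The sketch below pushes values through six stand-in stages (the gains and gamma-like curves are invented for illustration, not Hasselblad's actual HNCS math), quantizing after each stage, and compares against an effectively unquantized run:

import numpy as np

rng = np.random.default_rng(2)
x = rng.random(100_000)  # normalized channel values

def pipeline(v, bits):
    # Six toy stages; quantize intermediates to `bits` after each one.
    levels = 2 ** bits - 1
    for gain, gamma in [(1.1, 1.0), (0.9, 0.8), (1.0, 1.25),
                        (1.05, 1.0), (1.0, 0.9), (0.95, 1.1)]:
        v = np.clip(v * gain, 0.0, 1.0) ** gamma
        v = np.round(v * levels) / levels
    return v

reference = pipeline(x, 52)  # effectively float-precision reference
for bits in (14, 16):
    err = np.abs(pipeline(x, bits) - reference).max()
    print(f"{bits}-bit intermediates: worst error {err * 16384:.2f} 14-bit LSBs")

Note that this only measures rounding accumulation in an integer-intermediate pipeline; it says nothing about visibility once read noise and photon noise are included, which is the objection raised in the replies below.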

    3D LUTs and Tetrahedral Interpolation

    3D Look-Up Tables are at HNCS’s core. A 17³ point LUT divides RGB space into cubes. For an arbitrary RGB value, the system:

    1. Locates the enclosing cube (8 vertices)
    2. Subdivides into tetrahedra
    3. Interpolates linearly inside
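In code, that lookup might be sketched as follows. The sorted-fraction walk below is the standard tetrahedral scheme (sorting the fractional coordinates implicitly selects one of the six tetrahedra), and the 17-point identity LUT in the usage lines is a hypothetical stand-in for a real HNCS table:

import numpy as np

def tetra_lookup(lut, rgb):
    # lut: (n, n, n, 3) grid of output colors; rgb: one sample in [0, 1].
    n = lut.shape[0] - 1
    pos = np.asarray(rgb) * n
    idx = np.minimum(pos.astype(int), n - 1)  # low corner of enclosing cube
    f = pos - idx                             # fractional position in cube
    val = lut[tuple(idx)].astype(float)
    c = idx.copy()
    for axis in np.argsort(-f):               # walk axes by descending fraction
        c_prev = c.copy()
        c[axis] += 1                          # step toward the far corner
        val += f[axis] * (lut[tuple(c)] - lut[tuple(c_prev)])
    return val

# Hypothetical usage with an identity 17-point LUT:
g = np.linspace(0, 1, 17)
lut = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
print(tetra_lookup(lut, [0.2, 0.5, 0.8]))  # ~ [0.2, 0.5, 0.8]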

Interpolation error depends on transformation curvature and distance to grid points. A 17-point grid has 16 intervals per axis, so with 14-bit input the average spacing between grid points is 1024 code values (16384/16); with 16-bit it rises to 4096 (65536/16).

Higher input resolution allows the interpolation to capture non-linearities more precisely. For HNCS transformations optimized for the Delta E precision demonstrated in the document, 16-bit reduces LUT interpolation error by 0.3-0.5 Delta E units.

    Why Kasson’s Tests Don’t Capture This

Kasson’s SNR and dynamic range tests use monochrome patches (neutral grays, black, white). These targets test tonal resolution but ignore:

    – Continuous chromatic gradients (skies, skin)
    – Complex colorimetric transformations (HNCS)
    – Aggressive post-processing adjustments
    – Non-linearity zones near gamut

    A revealing test would be:
    1. Capture a ColorChecker in 14-bit and 16-bit
    2. Apply extreme white balance correction (+50 tint)
    3. Push saturation +40
    4. Measure resulting Delta E vs actual patch

    Hypothesis: 16-bit would maintain Delta E 0.2-0.4 units lower, particularly in skin tones and saturated zones.

    Practical Case: Skin Tone Adjustment

    Real fashion studio scenario:
    – Tungsten lighting captured in daylight WB (deliberate error)
    – Post WB correction: temperature -1500K, tint +25
    – HSL adjustment: orange +10 hue, +5 saturation
    – Tone curve: highlight compression

In 14-bit, this sequence can reduce the number of distinct colors in the skin tone zone from 300 to ~80 values after successive quantizations. Steps of 1.2-1.5 Delta E appear.

In 16-bit, the same transformations leave 250-300 values, maintaining 0.3-0.5 Delta E steps – below the perceptual threshold.

    Conclusion: Colorimetric Quality Insurance

16-bit doesn’t improve raw capture (Kasson is right). It preserves chromatic resolution through transformations. In color space regions critical for human perception (skin, skies, high saturations), after realistic adjustments, 16-bit maintains 3-4× more distinct colors than 14-bit.

    For “correct capture + light adjustments” workflow, the difference is negligible. For typical Hasselblad commercial workflow (aggressive corrections, HNCS color science, P3/HDR outputs), 16-bit is the difference between “perfect” and “nearly perfect” – exactly what an $11,000 system must guarantee.

    • JimK says

      October 26, 2025 at 10:38 am

      You are completely ignoring read noise and photon noise. So you are missing my point. Once you have adequate dither, you don’t need more.

      There are many other points that you are making that I disagree with, but the above is the overarching issue.

    • JimK says

      October 26, 2025 at 3:56 pm

In 14-bit, each RGB channel has 16,384 values. After RGB→Lab transformation, this translates to roughly 0.4 Delta E units resolution between adjacent values in critical color space regions.

      I ran a simulation to test your assertion.

      Here is the class comment:

      classdef SRGBQuantizationLabError
      %SRGBQuantizationLabError Worst-case ΔE from 14-bit sRGB quantization → CIELAB
      %
      % This class estimates the color error introduced when linear-light RGB
      % originates from ideal, continuous sRGB values in [0,1], then each
      % sRGB channel is quantized to 14-bit code values (16384 levels), and
      % the quantized sRGB is converted to CIELAB (double precision).
      %
      % Features
      % – Grid or random sampling over the sRGB cube
      % – ΔE*ab and ΔE00
      % – D65 or D50 Lab white, with optional Bradford adaptation from sRGB’s D65
      % – Reports maxima, percentiles, and locations of worst errors
      %
      % Usage example (grid search):
% E = SRGBQuantizationLabError('LabWhite','D50','DeltaEMetric','DE00');
      % R = E.runGrid(65); % 65^3 ≈ 274k samples
      % R.maxDE00, R.maxDE00_sRGB
      %
      % Usage example (random search, then refine):
% E = SRGBQuantizationLabError('LabWhite','D50','DeltaEMetric','DE00');
      % R1 = E.runRandom(1e6);
      % R2 = E.refineAroundWorst(R1, 2e5, 0.01); % 2e5 extra samples inside a 1% cube around worst
      %
      % Notes
      % – “14-bit sRGB” here means quantizing the *nonlinear* sRGB channels to
      % 14-bit codes. Conversions are then applied with standard sRGB
      % decoding → XYZ(D65) → (optionally adapted) → Lab(white).
      % – Double precision Lab eliminates any additional rounding error beyond
      % the initial 14-bit quantization in sRGB.

      Here is the result:

      R2 =

      struct with fields:

metric: 'DE00'
LabWhite: 'D50'
      UseBradford: 1
      SRGBLevels: 16384
      count: 200000
      maxDE: 0.0174
      maxLoc_sRGB: [0.0918 0.0911 0.0899]
      maxLoc_sRGB_q: [0.0918 0.0911 0.0900]
      maxLoc_Lab: [7.8661 0.0302 0.2371]
      maxLoc_Lab_q: [7.8644 0.0406 0.2293]
      meanDE: 0.0058
      rmsDE: 0.0065
      p95DE: 0.0111
      p99DE: 0.0132
      p999DE: 0.0152

      PPRGB is worse, but still much better than what you asserted:

metric: 'DE00'
SourceRGB: 'ProPhoto'
LabWhite: 'D50'
      UseBradford: 1
      Levels: 16384
      count: 200000
      maxDE: 0.0407
      maxLoc_sRGB: [0.0863 0.0868 0.0905]
      maxLoc_sRGB_q: [0.0863 0.0867 0.0905]
      maxLoc_Lab: [10.7331 -0.1108 -1.2262]
      maxLoc_Lab_q: [10.7307 -0.0840 -1.2363]
      meanDE: 0.0132
      rmsDE: 0.0150
      p95DE: 0.0270
      p99DE: 0.0322
      p999DE: 0.0363
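For readers without MATLAB, here is a rough Python analogue of the sRGB case. It is simplified to ΔE76 with a D65 Lab white (the class above uses ΔE00 and a D50 white with Bradford adaptation), so the numbers will differ in detail, but the order of magnitude is the point:

import numpy as np

rng = np.random.default_rng(3)

M = np.array([[0.4124564, 0.3575761, 0.1804375],   # linear sRGB -> XYZ (D65)
              [0.2126729, 0.7151522, 0.0721750],
              [0.0193339, 0.1191920, 0.9503041]])
WHITE = np.array([0.95047, 1.0, 1.08883])          # D65 white point

def srgb_to_lab(rgb):
    # sRGB decode -> XYZ -> CIELAB, all in double precision
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = lin @ M.T / WHITE
    d = 6.0 / 29.0
    f = np.where(xyz > d ** 3, np.cbrt(xyz), xyz / (3 * d ** 2) + 4.0 / 29.0)
    return np.stack([116 * f[:, 1] - 16,
                     500 * (f[:, 0] - f[:, 1]),
                     200 * (f[:, 1] - f[:, 2])], axis=1)

rgb = rng.random((1_000_000, 3))        # continuous sRGB samples
rgb_q = np.round(rgb * 16383) / 16383   # quantize to 14-bit code values
dE = np.linalg.norm(srgb_to_lab(rgb) - srgb_to_lab(rgb_q), axis=1)
print(f"dE76: max {dE.max():.4f}, mean {dE.mean():.4f}, "
      f"p999 {np.quantile(dE, 0.999):.4f}")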

    • JimK says

      October 30, 2025 at 9:48 am

      In CIELab space used for Delta E measurements, a color is defined by three coordinates:
      – L* (lightness): 0-100
      – a* (green-red axis): -128 to +127
      – b* (blue-yellow axis): -128 to +127

In the CIE definition, a* and b* are unbounded.

