I’ve written on this subject before, but I’ve not done a piece that deals with the common counterarguments. Here is one.
The Fujifilm GFX 100-series and Hasselblad X2D cameras support 16-bit RAW files. At first glance, this seems like an obvious win: more bits should mean more data, more dynamic range, and more flexibility in post-processing. But in practice, the benefits of 16-bit precision over 14-bit are negligible for photographic applications. Here are the arguments often made in favor of 16-bit capture and why they don’t hold up under scrutiny.
1. Myth: 16-Bit Provides More Dynamic Range

A 16-bit file can, in theory, encode 96 dB of dynamic range versus 84 dB for 14-bit. However, the real-world dynamic range of medium format sensors is limited by photon shot noise and read noise, typically capping at around 14 stops (about 84 dB). Once quantization noise is well below the sensor’s analog noise floor, increasing bit depth adds no practical dynamic range.
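The quantization-noise argument can be made concrete. Here is a minimal sketch with assumed, illustrative numbers (roughly 51,000 e⁻ full well and 3 e⁻ read noise, in the ballpark of a modern 33x44mm sensor at base ISO) showing that the two extra bits move the engineering dynamic range by only a few hundredths of a stop:

```python
import math

def total_noise_e(read_noise_e, full_well_e, bits):
    """Dark-signal noise in electrons: analog read noise plus ADC
    quantization noise (one LSB / sqrt(12)), summed in quadrature."""
    lsb_e = full_well_e / 2 ** bits          # one ADC step, in electrons
    return math.hypot(read_noise_e, lsb_e / math.sqrt(12))

def dynamic_range_stops(read_noise_e, full_well_e, bits):
    """Engineering dynamic range: full well over total dark noise, in stops."""
    return math.log2(full_well_e / total_noise_e(read_noise_e, full_well_e, bits))

# Illustrative (assumed) sensor numbers, not measurements of any camera.
dr14 = dynamic_range_stops(3.0, 51000, 14)
dr16 = dynamic_range_stops(3.0, 51000, 16)
# dr16 - dr14 comes out to a few hundredths of a stop.
```

With any plausible read noise, the quantization term is already a minor contributor at 14 bits, so the 16-bit figure barely moves.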
2. Myth: 16-Bit Prevents Banding in Edits

It is often claimed that more bits reduce banding in gradients during aggressive post-processing. But in RAW files, the tonal resolution of a 14-bit file already exceeds the eye’s ability to detect steps, especially once converted to a working color space and edited in a 16-bit pipeline. Any banding in real workflows is usually due to limitations in output color space or lossy compression, not insufficient bit depth in the original capture. In addition, shot noise dithers the signal, smearing the quantization steps below visibility.
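To put a number on the step size: here is a one-function sketch comparing one 14-bit step at mid-gray against a conventional ~1% Weber contrast threshold (both the 18% mid-gray level and the 1% threshold are round illustrative figures):

```python
def lsb_contrast(bits, level_fraction):
    """One LSB as a fraction of the signal, for linear N-bit data
    at a given fraction of full scale."""
    return 1.0 / (2 ** bits * level_fraction)

step_14 = lsb_contrast(14, 0.18)   # one step at 18% (mid) gray
# step_14 is about 0.034% -- roughly 30x below a ~1% visibility
# threshold, before any dithering by shot noise is even considered.
```

Even several stops down from mid-gray, the per-step contrast stays well under the threshold, which is why banding in practice traces back to the output stage rather than the capture.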
3. Myth: 16-Bit is Better for Color Grading

While more bits may benefit extreme color grading in video or scientific imagery, photographic sensors do not generate color information with 16-bit fidelity. The signal is already quantized, and color differences at the bottom 2 bits of a 16-bit file are buried in noise. Color precision is far more influenced by lens transmission, sensor design, and spectral response than bit depth.
4. Myth: 16-Bit is Needed for Future-Proofing

Some argue that 16-bit data ensures longevity in the face of evolving editing software or display technologies. But if the source data carries no meaningful information in the bottom bits, storing them is like preserving empty decimal places. 14-bit files already provide more granularity than is practically usable for current sensors.
5. Myth: Scientific or Industrial Applications Justify 16-Bit

While 16-bit precision is genuinely useful for specialized imaging tasks like fluorescence microscopy or machine vision, these use cases have little in common with handheld photography. In those domains, exposure, temperature, and electronic noise are tightly controlled. In photography, the environment is uncontrolled and analog noise dominates.
Conclusion

The 16-bit RAW format in cameras like the GFX 100 series and Hasselblad X2D is more about marketing than measurable photographic benefit. While there is no harm in storing images in 16-bit format, it offers little to no advantage over 14-bit for dynamic range, tonal smoothness, or color accuracy. Photographers should base their expectations on physics and perceptual limits—not on file format headlines.
[INT. STUDIO – Nigel is showing off his computer setup with a smug grin.]
Nigel:
This one here—this is the RAW file. Not just any RAW file. This one’s 16-bit.
Marty (the director):
Right. And what’s the advantage?
Nigel:
Well, most people shoot in 14-bit, right? You got your shadows, your highlights… but 14 bits only gives you 16,384 levels. This—this gives you 65,536.
Marty:
Uh huh. But isn’t the sensor noise floor higher than the 14-bit quantization? I mean, can you really see any difference?
Nigel (nods slowly):
No. But it’s two bits more, innit?
Marty:
Why not just process the data better at 14 bits?
Nigel (pause):
But this goes to sixteen.
Marty:
I see. So… it’s not actually capturing more detail?
Nigel:
Well, no—but when you say you shoot sixteen, people listen.
Marty:
Couldn’t you just make 14-bit better, and call that louder?
Nigel:
[beat]
But… these go to sixteen.
Wedding Photographer in DC says
The last bit gave me a chuckle. My husband would most certainly agree and I can almost hear him say “Told you so” haha
bob lozano says
For my part, going to 11 is enough…
Seriously, thanks for the recap of the realities. If a time comes when there are adequate approaches to inferring/extrapolating another couple of bits of “precision,” then the resulting extrapolated image could always be stored in 16-bit at that time. I have my doubts, however, since the human eye will be the ultimate arbiter of images, by definition.
Javier Sanchez says
Also worth noting that switching cameras like the GFX100S to 14-bit makes them significantly more enjoyable and usable by noticeably decreasing the viewfinder blackout time between shots.
Stillton says
Could it be because Fuji designed their GFX cameras as 14-bit systems, and 16 bits were shoehorned in later, which created a bottleneck/performance issue?
JimK says
Sony designed the sensor in all the 33x44mm Fuji and Hassy cameras. The 50 MP versions have 14 bits as the maximum precision. Hassy performs a color calibration process, and adds two guard bits to get to 16 bits. Fuji plays it straight and just uses the 14-bit values. The 100 MP versions have 14 and 16 bit modes (as well as others).
John Griffin says
I can only see it being of any use if you were using the camera to scan color negatives, where the captured/input tonal and color range is very small and needs to be heavily stretched to fit the output.
Stillton says
Earlier digital H backs (and not only H backs) did not have gain applied to the sampled images. I just checked my H4D-50 files: some ISO 50 files show heavy use of the top portion of the histogram (in RawDigger), while ISO 200 images are crammed into the left corner, taking 1/3 or 1/4 of the range that ISO 50 would take.
If this were sampled at 14 bits, it would have produced a noticeable difference after “digitally developing” the images. Having an additional 2 bits makes it possible to increase sampling precision by a factor of 4 in this specific case. Since it is an integer-based system, and not a floating-point one, I think it would make a difference.
JimK says
The key issue is what was the read noise in those backs? My H series blads had so much read noise that the bottom five bits were useless.
Stillton says
“had so much read noise”
In which conditions? Looking at those raw histograms, it seems that it would be less of a problem in some use cases, like properly exposed images.
I take it that the read noise is fairly stable. So, if I severely underexposed the image, I might get a lot of noise relative to the image. But what if I exposed correctly, or used ETTR? How many bits would the noise take?
Also, film grain is akin to “read noise”, yet no one seriously argues that since the grain is perceived in the final scanned image, we should scan it at lower bit depth.
Perhaps Hasselblad engineers thought the same. Besides, there isn’t much difference between 14- and 16-bit data in terms of occupied space on disk – about 14% if no padding is used. (In RAM they are likely identical due to padding.)
JimK says
20-25 degrees C. Shutter speeds faster than 1 second. Slower than one second, the RN gets worse.
JimK says
Read noise is unaffected by exposure. You can measure read noise with dark frames. So your proposal that you need more than, say, 14 bits for a camera whose 12th and lesser significance bits are read noise is not a solution to any photographic problem.
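A dark-frame read-noise measurement amounts to taking per-pixel standard deviations across a stack of frames. Here is a minimal sketch with synthetic frames (the 512 DN bias and 5 DN noise are made-up illustrative numbers, not measurements of any back):

```python
import math
import random

def read_noise_dn(dark_frames):
    """Estimate read noise as the per-pixel standard deviation across a
    stack of dark frames, averaged (in variance) over all pixels."""
    n_frames = len(dark_frames)
    n_pix = len(dark_frames[0])
    total_var = 0.0
    for p in range(n_pix):
        vals = [frame[p] for frame in dark_frames]
        mean = sum(vals) / n_frames
        total_var += sum((v - mean) ** 2 for v in vals) / (n_frames - 1)
    return math.sqrt(total_var / n_pix)

# Synthetic stack: 512 DN bias plus 5 DN Gaussian read noise per pixel.
rng = random.Random(0)
frames = [[512 + rng.gauss(0, 5) for _ in range(500)] for _ in range(16)]
est = read_noise_dn(frames)   # recovers roughly 5 DN
```

Because no light reaches the sensor, exposure plays no role, which is the point being made here.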
Stillton says
Jim, I don’t think you understood what I was saying. It is possible that I simply communicated it poorly.
For a given bit depth, the increase in exposure should lead to a more precise digitization of a specific value.
” So your proposal that you need more than, say, 14 bits for a camera whose 12th and lesser significance bits are read noise is not a solution ”
Let’s suppose that 0EV “brightness” is 12.207% of the maximum possible to register with a pixel/sensor. So, -1EV would be half that, +1EV double that, and +2EV quadruple 0EV. Ignore read noise for a moment.
Theoretically, digitization error would be something like this for a given bit-depth and exposure level for that specific signal value:
| bits   | -1EV   | 0EV    | +1EV   | +2EV   |
|--------|--------|--------|--------|--------|
| 8-bit  | 6.400% | 3.200% | 1.600% | 0.800% |
| 10-bit | 1.600% | 0.800% | 0.400% | 0.200% |
| 12-bit | 0.400% | 0.200% | 0.100% | 0.050% |
| 14-bit | 0.100% | 0.050% | 0.025% | 0.012% |
| 15-bit | 0.050% | 0.025% | 0.012% | 0.006% |
| 16-bit | 0.025% | 0.012% | 0.006% | 0.003% |
So, ETTR yields the same improvement over normal exposure as more bit depth would. 1 bit corresponds to 1EV.
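The table above follows from a one-line formula: one LSB expressed as a percentage of the signal. A sketch using the 12.207% base level from the comment:

```python
def quant_error_pct(bits, ev_offset, base_fraction=0.12207):
    """One LSB as a percentage of a signal sitting `ev_offset` stops away
    from a base level expressed as a fraction of full scale."""
    signal_fraction = base_fraction * 2 ** ev_offset
    return 100.0 / (2 ** bits * signal_fraction)

# One stop of extra exposure buys exactly the same as one extra bit:
same = quant_error_pct(15, 0) == quant_error_pct(14, 1)   # True
```

Doubling the signal or halving the step size both halve the relative error, which is the 1 bit = 1 EV equivalence stated above.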
JimK says
That is true, but doesn’t apply to the case where the read noise greatly exceeds the LSB of the ADC.
JimK says
I don’t think you understand the concept of dither as it relates to photography. Maybe this will help.
https://blog.kasson.com/?s=dither
Stillton says
I am familiar with it. Wouldn’t read noise act as dither?
JimK says
That is the point. Once you have sufficient dithering, more noise won’t help.
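The dithering effect is easy to demonstrate: quantize a constant level that falls between two codes, with and without noise ahead of the quantizer. A minimal sketch (the 3 e⁻/DN gain and the noise levels are arbitrary illustrative choices):

```python
import random

def adc(signal_e, gain_e_per_dn):
    """Ideal ADC: round the electron count to the nearest digital number."""
    return round(signal_e / gain_e_per_dn)

def mean_output_dn(level_e, gain_e_per_dn, noise_e, n=20000, seed=1):
    """Average ADC output for a constant light level with Gaussian noise
    added before quantization -- the noise acts as dither."""
    rng = random.Random(seed)
    return sum(adc(level_e + rng.gauss(0.0, noise_e), gain_e_per_dn)
               for _ in range(n)) / n

level = 3.0 * 100.3                        # 100.3 DN worth of electrons
quiet = mean_output_dn(level, 3.0, 0.01)   # noise far below one LSB
noisy = mean_output_dn(level, 3.0, 6.0)    # noise ~2 LSB, as in real sensors
# Without dither the 0.3 DN fraction is lost (quiet is exactly 100.0);
# with it, averaging recovers the sub-LSB value (noisy is close to 100.3).
```

Once the noise spans a couple of LSBs the quantizer is fully dithered; adding still more noise only degrades the image, which is the point being made here.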
KM says
Color Precision: Where 16-bit Really Matters. Some counterarguments
Tonal Resolution vs Chromatic Resolution
The confusion stems from Kasson measuring primarily **tonal resolution** (ability to distinguish brightness levels), while the 16-bit advantage concerns **chromatic resolution** (ability to distinguish similar hues and saturations).
In CIELab space used for Delta E measurements, a color is defined by three coordinates:
– L* (lightness): 0-100
– a* (green-red axis): -128 to +127
– b* (blue-yellow axis): -128 to +127
In 14-bit, each RGB channel has 16,384 values. After RGB→Lab transformation, this translates to roughly **0.4 Delta E units** resolution between adjacent values in critical color space regions.
In 16-bit, with 65,536 values, this resolution improves to **0.1 Delta E units**.
The human perception threshold is generally set at Delta E ≤ 1.0 (imperceptible difference) and Delta E ≤ 2.0 (barely perceptible). 14-bit thus operates near the perception limit in certain colorimetric regions.
Critical Zones in Color Space
1. Skin Tones (Memory Colors)
The human visual system is extraordinarily sensitive to skin tones – a “memory color” calibrated since birth. Psychovisual tests show that variations of **Delta E 0.5-1.0** in skin tones are perceptible, while the same variation on a gray wall goes unnoticed.
The skin tone region in Lab space is narrow:
– L*: 50-80 (medium-light brightness)
– a*: +10 to +25 (slightly red)
– b*: +10 to +30 (slightly yellow)
In 14-bit, this zone contains approximately **200-300 distinct colors**. In 16-bit: **800-1200 distinct colors**. During adjustments (white balance, hue correction), 16-bit maintains imperceptible transitions where 14-bit can create subtle chromatic “steps.”
2. Skies and Atmospheric Gradients
A twilight sky presents continuous gradients from:
– Saturated blue (zenith) → desaturated blue → mauve → orange → red (horizon)
– L* variation ~20 units across 30-40% of the image
– Simultaneous a* and b* variation (hue AND saturation change)
In 14-bit, this gradient contains ~200 distinct colors along the path. If post-processing applies a contrast curve (highlight compression), the number of available colors in the compressed zone can drop to **50-80 discrete values**, creating visible banding.
In 16-bit, even after compression, 200-300 values remain, maintaining perceptual continuity.
3. Near-Maximum Saturation Zones
At gamut limits (highly saturated colors), the color space “folds.” Non-linear transformations are most aggressive here. A saturated red (a* ≈ +80, b* ≈ +70) approaching sensor clipping undergoes complex transformations during Phocus rendering.
16-bit preserves more information in these extreme non-linearity zones, allowing the rendering engine to **make more nuanced decisions** on how to map out-of-gamut colors to reproducible gamut.
Colorimetric Transformations and Error Propagation
The HNCS workflow isn’t a simple RGB→Lab conversion. It’s a complex chain:
1. **Bayer demosaicing** (interpolation ~3 neighboring pixels)
2. **Optical profile correction** (channel-varying spatial transformations)
3. **White balance** (3×3 matrix multiplication)
4. **Transformation to proprietary HNCS space** (3D LUT ~17³ = 4913 points)
5. **User adjustments** (curves, saturation, HSL)
6. **Rendering to output space** (P3, ProPhoto, sRGB – another 3D LUT)
Each step introduces **rounding errors**. In 14-bit, cumulative error after 6 transformations can reach 2-3 Delta E units in worst cases. In 16-bit, this error drops to 0.5-1.0 units – the difference between perceptible and imperceptible banding.
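The cumulative-rounding claim can be tested in miniature: push values through a chain of tone transforms, quantizing after each step as a fixed-point pipeline would, and compare against a high-precision reference. A sketch in which the six transforms are arbitrary stand-ins, not the actual HNCS chain:

```python
import random

def chained(x, bits, transforms):
    """Apply a chain of [0,1] -> [0,1] transforms, quantizing to N-bit
    integer codes after every step, as a fixed-point pipeline would."""
    scale = 2 ** bits - 1
    for t in transforms:
        x = round(t(x) * scale) / scale
    return x

# Six mild, arbitrary stand-in adjustments (curves, gain, gamma tweaks).
chain = [lambda v: v ** 0.9, lambda v: min(1.0, v * 1.1),
         lambda v: v ** 1.05, lambda v: v * 0.95,
         lambda v: v ** 0.98, lambda v: min(1.0, v * 1.02)]

rng = random.Random(0)
xs = [rng.uniform(0.05, 1.0) for _ in range(2000)]
# 40-bit quantization serves as the near-exact reference path.
err14 = max(abs(chained(x, 14, chain) - chained(x, 40, chain)) for x in xs)
err16 = max(abs(chained(x, 16, chain) - chained(x, 40, chain)) for x in xs)
# err16 is typically about 4x smaller than err14; whether either matters
# depends on where the errors land relative to the capture noise floor.
```

The worst-case accumulated error at 14 bits stays within a few LSBs here; whether that is visible depends on the noise already present in the data, which is the crux of the disagreement in this thread.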
3D LUTs and Tetrahedral Interpolation
3D Look-Up Tables are at HNCS’s core. A 17³ point LUT divides RGB space into cubes. For an arbitrary RGB value, the system:
1. Locates the enclosing cube (8 vertices)
2. Subdivides into tetrahedra
3. Interpolates linearly inside
Interpolation error depends on transformation curvature and **distance to grid points**. A 17-point grid spans 16 intervals, so with 14-bit input, adjacent grid points are 1024 levels apart (16384/16). With 16-bit input, this spacing rises to 4096 levels (65536/16).
Higher input resolution allows interpolation to capture non-linearities more precisely. For HNCS transformations optimized for the Delta E precision demonstrated in the document, 16-bit reduces LUT interpolation error by **0.3-0.5 Delta E units**.
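The relative sizes of these two error sources can be sketched in one dimension. With a 17-point LUT through a strongly curved transform (a gamma-2.2 curve as a stand-in, not an actual HNCS table), the grid interpolation error dwarfs the 14-bit input quantization error:

```python
def lut_interp(x, lut):
    """Piecewise-linear lookup through a uniform 1-D LUT over [0, 1]."""
    n = len(lut) - 1
    pos = min(x * n, n - 1e-9)
    i = int(pos)
    return lut[i] + (lut[i + 1] - lut[i]) * (pos - i)

def curve(v):
    """Strongly curved stand-in transform (gamma 2.2 encode)."""
    return v ** (1 / 2.2)

lut17 = [curve(i / 16) for i in range(17)]

xs = [i / 1000 for i in range(10, 1001)]            # inputs from 0.01 to 1.0
grid_err = max(abs(lut_interp(x, lut17) - curve(x)) for x in xs)
input_err = max(abs(curve(round(x * 16383) / 16383) - curve(x)) for x in xs)
# grid_err (~0.08 here) comes from the sparse 17-point grid;
# input_err (~1e-4) comes from 14-bit input quantization.
```

In this 1-D toy, refining the grid (or the interpolation scheme) matters enormously, while refining the input bit depth barely registers.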
Why Kasson’s Tests Don’t Capture This
Kasson’s SNR and dynamic range tests use **monochrome patches** (neutral grays, black, white). These targets test tonal resolution but ignore:
– Continuous chromatic gradients (skies, skin)
– Complex colorimetric transformations (HNCS)
– Aggressive post-processing adjustments
– Non-linearity zones near gamut
A revealing test would be:
1. Capture a ColorChecker in 14-bit and 16-bit
2. Apply extreme white balance correction (+50 tint)
3. Push saturation +40
4. Measure resulting Delta E vs actual patch
Hypothesis: 16-bit would maintain Delta E 0.2-0.4 units lower, particularly in skin tones and saturated zones.
Practical Case: Skin Tone Adjustment
Real fashion studio scenario:
– Tungsten lighting captured in daylight WB (deliberate error)
– Post WB correction: temperature -1500K, tint +25
– HSL adjustment: orange +10 hue, +5 saturation
– Tone curve: highlight compression
In 14-bit, this sequence can reduce distinct colors in skin tone zone from **300 to ~80 values** after successive quantizations. Steps of 1.2-1.5 Delta E appear.
In 16-bit, the same transformations leave **250-300 values**, maintaining 0.3-0.5 Delta E steps – below perceptual threshold.
Conclusion: Colorimetric Quality Insurance
16-bit doesn’t improve raw capture (Kasson is right). It preserves **chromatic resolution through transformations**. In color space regions critical for human perception (skin, skies, high saturations), after realistic adjustments, 16-bit maintains 3-4× more distinct colors than 14-bit.
For “correct capture + light adjustments” workflow, the difference is negligible. For typical Hasselblad commercial workflow (aggressive corrections, HNCS color science, P3/HDR outputs), 16-bit is the difference between “perfect” and “nearly perfect” – exactly what an $11,000 system must guarantee.
JimK says
You are completely ignoring read noise and photon noise. So you are missing my point. Once you have adequate dither, you don’t need more.
There are many other points that you are making that I disagree with, but the above is the overarching issue.
JimK says
I ran a simulation to test your assertion.
[class comment and simulation result screenshots omitted]

PPRGB is worse, but still much better than what you asserted.
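A minimal sketch of one such per-step check (an illustration of the idea, not the simulation referenced above): the CIE76 Delta E between adjacent quantization codes, assuming linear coding and looking only along the neutral axis, where Delta E reduces to an L* difference:

```python
def f_lab(t):
    """CIE L*a*b* cube-root nonlinearity, with the linear toe."""
    return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

def l_star(y):
    """CIE lightness from relative luminance (white point Y = 1)."""
    return 116 * f_lab(y) - 16

def step_delta_e(bits, level):
    """CIE76 Delta E between adjacent N-bit linear codes on the neutral
    axis (where Delta E is just the L* difference)."""
    return l_star(level + 1 / 2 ** bits) - l_star(level)

de14 = step_delta_e(14, 0.18)   # one 14-bit step at mid-gray
de16 = step_delta_e(16, 0.18)
# Both land far below the Delta E ~1 visibility threshold, and far below
# the read-plus-photon noise of any real capture at this level.
```

The per-step Delta E near mid-gray comes out around 0.007 at 14 bits, two orders of magnitude below the figures asserted above.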
JimK says
In the CIE definition, a* and b* are unbounded.