Photoshop color space conversion accuracy

In yesterday’s post, we saw that the accuracy of model-based color space conversions, as performed in Matlab with 64-bit floating point intermediate values, is dominated by the quantization error of 16-bit-per-color-plane images.

But what about the accuracy of such conversions in Photoshop? I took a look.

I loaded the same test image that I used for yesterday’s experiments into Ps (thanks to Bruce Lindbloom for the image):

DeltaELrNoSharp

 

I took the image from its native sRGB to Adobe (1998) RGB and back, using the Adobe (ACE) color engine. Then I loaded the original image and the round-trip image into Matlab, converted both to Lab, and computed the DeltaE for each pixel. Then I repeated the round trip a few more times, each time starting from the result of the previous one and always comparing the latest image to the original. I computed some stats on the DeltaE image, and here’s what I got:

rt-srgb-argb-0th

That’s odd. There’s a fair amount of error — the worst case is about 6 DeltaE — but it doesn’t get much worse after the first iteration.

I made another graph using the result of the first round trip as a reference:

rt-srgb-argb-1th

That’s more like what I expected the results to look like. What’s causing the large error on the first iteration? I looked at the DeltaE image with the original image as the reference and the first iteration as the comparison, normalized so that the worst case is full scale, and with a gamma of 2.2 added:

1stpassdiffg22

In this rendering, the lightest areas mark the worst errors. For the most part, those worst errors occur in dark parts of the picture, but not all dark areas show high errors — the top part of the picture is dark and shows low errors. Dark blue seems to be difficult; the worst Macbeth chart error is in the dark blue patch, followed by the red one. The errors seem to occur in enough different areas of the image to rule out gamut clipping, which shouldn’t happen with this pair of color spaces anyway.

I went back to the original image, converted it to Adobe RGB in Photoshop, and compared it to the original after both were converted to Lab. The errors were very close to those of the first round trip, meaning that we lost almost all the accuracy we were going to lose going from sRGB to Adobe RGB.

What gives? The red and blue primaries for sRGB are the same as those in Adobe RGB, and the Adobe RGB green primary is such that the gamut of sRGB in xy or u’v’ chromaticity space is entirely contained within the Adobe RGB gamut in those spaces. The two spaces share a white point. The nonlinearities may be different, but that shouldn’t affect the gamut.

Just to make sure, I converted the original image to Adobe RGB in Matlab, and measured the difference in Lab. Infinitesimal.
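
For the record, that check looks something like the following in Matlab. This is a simplified sketch using Image Processing Toolbox conversions (rgb2lab and lab2rgb with the 'adobe-rgb-1998' option) rather than my own routines, and the file name is hypothetical:

    srgb = im2double(imread('lindbloom_srgb.tif'));        % hypothetical file name
    lab0 = rgb2lab(srgb);                                  % original, interpreted as sRGB
    argb = lab2rgb(lab0, 'ColorSpace', 'adobe-rgb-1998');  % model-based sRGB -> Adobe RGB
    lab1 = rgb2lab(argb, 'ColorSpace', 'adobe-rgb-1998');  % back to Lab via the Adobe RGB model
    dE   = sqrt(sum((lab1 - lab0).^2, 3));                 % per-pixel CIELab DeltaE (1976)
    [mean(dE(:)) max(dE(:))]                               % mean and worst-case error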

Then I took the original image to Lab in Photoshop, then back to sRGB, and got materially the same large errors. I did that round trip several more times, comparing with the result of the first round trip, and got this:

rt-srgb-lab-1th2

Just like with the sRGB > Adobe RGB > sRGB Photoshop conversions, it’s the first conversion that causes the main errors.

 

Do color space conversions degrade image quality?

There is a persistent legend in the digital photography world that color space conversions cause color shifts and should be avoided unless absolutely necessary.  Fifteen years ago, there were strong reasons for that way of thinking, but times have changed, and I think it’s time to take another look.

First off, there are several color space conversions that are unavoidable. Your raw converter needs to convert from your camera’s “color” space to your preferred working space. I put the word “color” in quotes because your camera doesn’t actually see colors the way your eye does. Once the image is in your chosen working space, whether it be ProPhotoRGB, Adobe RGB, or — God help you — sRGB, it needs to be converted into your monitor’s color space before you can see it. It needs to be converted into your printer’s (or printer driver’s) color space before you can print it.

So the discussion about changing color spaces is really a discussion about changing the working color space of an image.

The reason why changing the working color space used to be dangerous is that images were stored with 8 bits per color plane. That was barely enough to represent colors accurately enough for quality prints, and not really enough to allow aggressive editing without creating visible problems. To make matters worse, different color spaces had different problem areas, so moving your image from one color space to another and back could cause posterization and the dreaded “histogram depopulation”.
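
Here’s a tiny Matlab illustration of the 8-bit problem, using a simple gamma adjustment out and back as a crude stand-in for moving between working spaces with different tone curves:

    x = uint8(0:255);                               % an 8-bit ramp: all 256 levels
    y = uint8(255 * (double(x) / 255) .^ (1/2.2));  % re-encode with a gamma of 2.2
    z = uint8(255 * (double(y) / 255) .^ 2.2);      % and undo it
    numel(unique(z))                                % fewer than 256 levels survive

The levels that disappear are the depopulated histogram bins.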

Many years ago, image editors started the gradual migration towards (15 or) 16 bit per color plane representation, allowing about (33,000 or) 65,000 values in each plane rather than the 256 of the 8-bit world. This changed the fit of images into editing representations from claustrophobic to wide open. Unless you’re trying to break something, there is hardly a move you can make that’s going to cause posterization.

But the fear of changing working spaces didn’t abate. Instead of precision (the computer-science word for bit depth) being the focus, the spotlight turned to the accuracy of the conversion process itself.

Before I get to that, there’s another thing I need to get out of the way. Not all working spaces can represent all the colors you can see, and the ones that can’t don’t all exclude the same set of colors. So if you’ve got an image in, say, Adobe RGB and you’d like to convert it to, say, sRGB, any colors in the original image that can’t be represented in sRGB will be mapped to colors that can. If you decide to take your newly sRGB image and convert it back to Adobe RGB, you won’t get those remapped colors back. One name for this phenomenon is gamut clipping.

There are two ways of specifying color spaces. The most accurate way is to specify a mathematical model for converting to and from some lingua franca color space such as CIE 1931 XYZ or CIE 1976 L*a*b*. If this method is used, then, assuming infinite precision for the input color space and for all intermediate computations, perfect accuracy is theoretically obtainable. Stated with the epsilon-delta formulation beloved by mathematicians the world over: given a color and an allowable error epsilon, there exists a precision, delta, which allows conversion of that color triplet between any pair of model-based spaces, assuming that the color can be represented in both spaces. Examples of model-defined color spaces are Adobe (1998) RGB, sRGB, ProPhoto RGB, CIEL*a*b*, and CIEL*u*v*.
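
To make that concrete, here’s a minimal Matlab sketch of a model-based conversion for a single color, sRGB to Adobe RGB by way of XYZ, using the published matrices and transfer curves (the sample color and the code organization are mine; any correct implementation reduces to the same arithmetic):

    M_srgb = [0.4124 0.3576 0.1805; 0.2126 0.7152 0.0722; 0.0193 0.1192 0.9505]; % linear sRGB -> XYZ (D65)
    M_argb = [0.5767 0.1856 0.1882; 0.2973 0.6274 0.0753; 0.0270 0.0707 0.9911]; % linear Adobe RGB -> XYZ (D65)
    c   = [0.2; 0.5; 0.8];                        % an arbitrary in-gamut sRGB color, [0, 1]
    lin = (c <= 0.04045) .* (c / 12.92) + ...     % undo the sRGB transfer curve
          (c >  0.04045) .* ((c + 0.055) / 1.055) .^ 2.4;
    xyz  = M_srgb * lin;                          % into the lingua franca space
    lina = M_argb \ xyz;                          % and out to linear Adobe RGB
    argb = lina .^ (1 / 2.19921875)               % Adobe RGB's encoding gamma is 563/256

Running the same model in the other direction gets you back to the original sRGB values; with 64-bit intermediates, the round-trip error is down at the level of double-precision arithmetic.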

The other way to define a color space is to take a bunch of measurements and build three-dimensional lookup tables for converting to and from a lingua franca color space. These conversions are inherently inaccurate, being limited by the accuracy of the measurement devices, the number of measurements, the number of entries in the lookup table, the precision of those entries, the interpolation algorithm, the stability of the device itself, and the phase of the moon. Fortunately, but not coincidentally, all of the working color spaces available to photographers are model-based.

I set up a test. I took an sRGB version of this image of Bruce Lindbloom’s imaginary, synthetic desk:

DeltaELrNoSharp

I brought it into Matlab, and converted it to 64-bit floating point representation, with each color plane mapped into the region [0, 1].

I converted it to Adobe RGB, then back to sRGB, and computed the distance between the original and the round-trip-converted image in CIELab DeltaE. I measured the average error, the standard deviation, and the worst-case error and recorded them.

Then I did the pair of conversions again.

And again, and again, for a total of 100 round trips. Here’s the code:

rtCode
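
Schematically, and substituting Image Processing Toolbox conversions for my own routines, the loop amounts to something like this (a simplified sketch, not a transcription of the code in the figure; the file name is hypothetical):

    img0 = im2double(imread('lindbloom_srgb.tif'));  % 64-bit doubles in [0, 1]
    lab0 = rgb2lab(img0);                            % reference Lab values
    img  = img0;
    nIter = 100;
    [meanE, sigmaE, wcE] = deal(zeros(nIter, 1));
    for i = 1:nIter
        argb = lab2rgb(rgb2lab(img), 'ColorSpace', 'adobe-rgb-1998');   % sRGB -> Adobe RGB
        img  = lab2rgb(rgb2lab(argb, 'ColorSpace', 'adobe-rgb-1998'));  % Adobe RGB -> sRGB
        % img = double(uint16(65535 * img)) / 65535;                    % optional 16-bit quantization
        dE = sqrt(sum((rgb2lab(img) - lab0).^2, 3));                    % per-pixel DeltaE vs. the original
        meanE(i) = mean(dE(:));  sigmaE(i) = std(dE(:));  wcE(i) = max(dE(:));
    end

The commented-out line is the 16-bit quantization step used for the second set of results below.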

Here’s what I got:

meanFPRT sigmaFPRT wcFPRT

The first thing to notice is how small the errors are. One DeltaE is roughly the amount of difference in color that you can just notice. We’re looking at worst-case errors after 100 round trips that are five trillionths of that just-noticeable difference.

Unfortunately, the working color spaces of our image editors don’t normally have that much precision. 16-bit integer precision is much more common. If we run the program above and tell it to convert every color in the image to 16-bit integer precision after every conversion, this is what we get:

rtsigmavsiter rtwcvsiter

It’s a lot worse, but the worst-case error is still about 5/100 of a DeltaE, and we’re not going to be able to see that.

How do the color space conversion algorithms in Photoshop compare to the ones I was using in Matlab? Stay tuned.


Noise reduction with nonlinear tools and downsampling

In the last half-dozen posts, I’ve explored the noise reduction effects and the quality of the results from six or seven different downsampling methods. From doing that work, I’ve concluded that someone with a small-pixel camera has several ways to produce images whose noise characteristics are similar to those made with a large-pixel camera, once the small-pixel images are downsized to the resolution of the large-pixel camera, as photographic equivalence says they should be.

The above depends on the full well capacity (FWC) of the cameras being proportional to the square of the pixel pitch. It further depends on an assumption about read noise. If the read noise of the cameras involved is proportional to the pixel area, then the sun and moon align and the noise part of equivalence is handled.

But the above assumption about read noise is suspect. I’ll make one that’s pretty far out in the other direction. Let’s assume the read noise is independent of pixel size. That gives a real advantage to the large pixel cameras. Is it enough for them to win the noise battle with their small-pixel brethren at the same downsampled resolution?

Not if you throw in nonlinear noise reduction techniques like those baked into Lightroom and Adobe Camera Raw.

Here’s how I know. I used my camera simulator to make two “photographs” of Bruce Lindbloom’s desk, with the sim set to ISO 3200: one with a 1.25 micrometer (um) pixel pitch, and one with a 5 um pixel pitch.

The other important parameters that I used are:

  • Simulated Otus 55mm f/1.4 lens
  • Aperture: f/5.6
  • Bayer CFA with sRGB filter characteristics
  • Fill factor = 1.0
  • 0.375 pixel phase-shift AA filter
  • Diffraction calculated at 450nm for blue plane, 550nm for the green plane, and 650nm for the red plane
  • Full well capacity = 1600 electrons per square um
  • Read noise sigma = 1.5 electrons
  •  Bilinear interpolation demosaicing
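
Before looking at pictures, here’s a back-of-the-envelope Matlab sketch of what those full well and read noise numbers imply when sixteen 1.25 um pixels are averaged down to one 5 um output pixel. The exposure fraction is arbitrary; pick a deep shadow to see the read noise matter:

    fwcPerArea = 1600;             % electrons per square micrometer
    readNoise  = 1.5;              % electrons per pixel, independent of pixel size
    f          = 0.01;             % exposure as a fraction of full well (a deep shadow)
    for pitch = [5 1.25]
        n      = (5 / pitch)^2;                    % pixels binned into one 5 um output pixel
        signal = f * fwcPerArea * pitch^2;         % mean electrons per pixel
        snr    = n * signal / sqrt(n * (signal + readNoise^2));  % shot noise variance = signal
        fprintf('%.2f um pitch: SNR = %.1f at the 5 um output resolution\n', pitch, snr);
    end

At this exposure the 5 um camera comes out a little ahead (about 19.9 versus 19.2), and the gap widens as the exposure drops further. That’s the edge the constant-read-noise assumption hands to the big pixels.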

The first two images have been enlarged 3x using nearest neighbor before being JPEG’d. The second two are enlarged by 9x using the same method.

Here’s the scene as captured by the camera above with the pixel pitch set to 5 um:

LindbloomAAISO3200Pitch5-3x

And at 1.25 um, processed in Lr with luminance noise reduction set to 100, chroma noise reduction set to 100, and exported with 25% magnification by Lightroom with no sharpening:

LindbloomAAISO3200Pitch133xLrNR

Now let’s zoom in:

With the 5 um camera:

LindbloomAAISO3200Pitch59x

And the 1.25 um camera:

LindbloomAAISO3200Pitch13k9xLrNR

 

Being able to use nonlinear noise reduction on the higher resolution camera is a game-changer. The noise in the lower image is actually lower than that in the upper one, and yet there’s much more detail in the lower image. The ugly zipper artifacts from the demosaicing are completely gone.

What if we try to do some noise reduction on the image from the big-pixel camera? Here’s one attempt:

Lindx9AAISO3200Pitch545pctmedian

We lose sharpness and still can’t get the noise down to where it is in the 1.25 um image.

And this was after making some assumptions about read noise that stacked the deck in favor of the big-pixel camera.


Noise performance of downsampling with elliptical weighted averaging

Over on the Luminous Landscape forum, Nicolas Robidoux, Bart van der Wolf, Alan Gibson, and others have been working on developing downsampling algorithms. They’re using the command-line image editor ImageMagick as the engine to do the work, so their methodology is to write scripts that call ImageMagick. Some of their algorithms are producing impressive results, some of which you’ve seen in the last few posts.

I took one of Bart’s scripts, version 1.2.2, and ran it on a Gaussian noise image. I’ll report the results in this post, but first I’d like to let Bart tell you something about the algorithm. If digital filter design isn’t your thing, feel free to skip ahead.

On my occasional perusals of the ImageMagick discourse server, I read about some of Nicolas Robidoux’s ideas about halo suppression, in which he suggested temporarily adjusting the tone curve before resampling and restoring the original tone curve after resampling. Having tested those ideas when Nicolas was still experimenting with sigmoidal contrast, I saw some benefits, but also a lack of control over shadows and highlights with a single parameter. Now that he has declared that sigmoidal-contrast round trip dead and introduced separate gamma-corrected resampling with blending, I was convinced of the usefulness of the principle.

When Nicolas suggested investigating further, and there were no takers, I volunteered to create a tool that allowed the concept to be tested a bit further and in a more structured manner.

All resampling methods used in the script are based on Elliptical Weighted Averaging, or EWA (therefore operating in two dimensions), instead of a two-pass orthogonal tensor approach. This has the benefit of producing more ‘organic looking’ resizing, because the resulting resolution is circularly symmetric rather than having the higher diagonal resolution that is possible in a square pixel grid. EWA resampling also has benefits for other types of (asymmetrical or variable) distortion.

A drawback of EWA resampling, besides the higher computational cost of having to process more samples, is that a (nearly) no-op resampling (scale close to 1.0) will be slightly blurry instead of having no effect on sharpness. Therefore, and for adaptability to different subject matter and viewing conditions, a sharpening parameter is added that allows the user to modify the behavior of the resampling and windowing filter used.

There are different filter methods available for upsampling and for down-sampling, and they are chosen because they perform better (with less artifacting) for the intended scaling. The script is set up to allow choosing either filter method regardless of whether one is upsampling or down-sampling, but that’s more for experimentation purposes.

The actual resampling results are a linear gamma space blend between two resampling operations. The resampling operations are performed in two different gamma spaces in order to cope with the different halo under-/overshoot amplitudes. The blending of those in linear gamma space is luminance driven, which makes it possible to address the different halo tendencies in darker and lighter tones, and it reduces the risk of clipping.

This is all based on a proposal by Nicolas Robidoux, after experimenting with other temporary contrast adjustment functions based on sigmoidal tone curves. The approach of resampling in separate gamma spaces makes it possible to better target the suppression of halo under/overshoots in different luminance ranges.

The upsampling operation (labeled as ‘generic’, because it also does pretty decent down-sampling) is implemented by using the EWA version of ‘Lanczos’ filtering, also known as ‘Jinc’ filtering, to make it circularly symmetric when simply scaling both dimensions by the same factor. Some ‘deblur’ is used; the amount is user adjustable. The deblur is controlled by modification of the filter support and window size.

The down-sampling operation is implemented by using the EWA version of ‘Cubic’ filtering, with an adjusted parameter choice to use a relatively soft filter version (from the family of Keys filter parameters), which is achieved with the following: ‘-define filter:c=0.1601886205085204 -filter Cubic’. That defines a somewhat blurry filter, but one that is also relatively halo- and ringing-free, which helps because down-sampling needs to actively avoid aliasing and ringing artifacts. It also blends the results of two resampling operations that were done in different gamma spaces, to allow and control halo generation.

To compensate for the softness/blur of the Cubic filter version, deconvolution sharpening is added. The deconvolution uses Gaussian-blur-based weighting, and is implemented as a unity kernel minus a Gaussian blur, using the ImageMagick DoG (difference of Gaussians) function, which produces a 2-D convolution filter kernel.

There is a different down-sampling operation that is used when the choice is made to eliminate all sharpening, e.g. because one wants to use a separate utility for that. That operation only uses the EWA version of the ‘Quadratic’ filter in linear gamma space, without blending with other gamma space results. That filter produces very clean, essentially halo free, results (thus the possibility to skip blending with non-linear gamma space resampling) with very little aliasing but it’s a bit blurry, and will therefore allow significant (e.g. local) sharpening without risk of enhancing existing halos. As a bonus, an experimental option is added to the script to add a simple separate deconvolution sharpening operation, when zero or ‘negative sharpening’ (i.e. blur) was used.

All operations are executed in the spatial domain, to circumvent potential issues with limited 16-bit precision calculations that would create artifacts in the (Fourier-converted) frequency domain. Future versions of ImageMagick will apparently also be available as precompiled binaries that allow floating point Fourier transforms, which may help with faster and more precise calculations. That may also open up some other resampling options.
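
Back to me. As a point of reference only (this is a generic illustration, not Bart’s actual kernel or parameter values), sharpening of the kind he describes can be expressed as a single Matlab convolution kernel:

    g = fspecial('gaussian', 7, 1.0);    % 7x7 Gaussian blur kernel, sigma = 1 (arbitrary)
    unity = zeros(7);  unity(4, 4) = 1;  % identity kernel: passes the image unchanged
    amount = 1.0;                        % sharpening strength (arbitrary)
    k = unity + amount * (unity - g);    % identity plus a scaled (unity minus Gaussian) high-pass
    sharpened = imfilter(img, k, 'replicate');   % img: a double image in [0, 1]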

The deblur = 50 setting produces a fairly flat lowpass curve when fed a white noise signal, and it does not vary materially with downsizing ratio:

Bartp5sharp50

That deblur setting produces somewhat greater than the ideal noise reduction upon downsizing:

rmsnoiseBartsharp50

With deblur set to 100, there is a mild peak in the transfer function:

Bartp5sharp100

And the noise reduction is precisely on the ideal line:

rmsnoiseBartsharp100wpergect


Comparing downsampling algorithms, Fuji still life

This is the fourth in a series of posts showing images that have been downsampled using several different algorithms:

  • Photoshop’s bilinear interpolation
  • Ps bicubic sharper
  • Lightroom export with no sharpening
  • Lr export with Low, Standard, and High sharpening for glossy paper
  • A complicated filter based on Elliptical Weighted Averaging (EWA), performed at two gammas and blended at two sharpening levels

The last algorithm is what I consider to be the state of the art in downsampling, although it is a work in progress. It’s implemented using a script that Bart van der Wolf wrote for ImageMagick, an image-manipulation program with resampling software written by Nicolas Robidoux and his associates.

This post uses a Fuji demonstration image. This is the first target image that is actually photographic, in that it was captured by an actual, not a simulated, camera.

Here’s the whole target:

FR4blog

Now I’ll show you a series of images downsampled to 15% of the original linear dimensions with each of the algorithms under test, blown up again by a factor of 4 using nearest neighbor, with my comments under each image.
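
The enlargement step is nothing exotic; in Matlab it would be something like the following (file names are hypothetical), and nearest neighbor is used precisely because it adds no detail of its own:

    crop   = imread('downsampled_crop.tif');          % hypothetical file name
    crop4x = imresize(crop, 4, 'nearest');            % 4x blow-up with no interpolation
    imwrite(im2uint8(crop4x), 'crop4x.jpg', 'Quality', 90);   % JPEG wants 8-bit data; quality is arbitrary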

 

Bilinear interpolation

Bilinear interpolation

Bilinear interpolation, as implemented in Photoshop, is a first-do-no-harm downsizing method. It’s not the sharpest algorithm around, but it hardly ever bites you with in-your-face artifacts. That’s what we see here.

Bicubic Sharper

Bicubic Sharper

Photoshop’s implementation of bicubic sharper, on the other hand, is a risky proposition. Look at the halos around the flower stems, the flowers themselves, the clock, and just about everywhere.

Lightroom Export, No Sharpening

Lightroom Export, No Sharpening

With the sharpening turned off, Lightroom’s export downsizing is, as usual, a credible performer. It’s a hair sharper than bilinear — though in this image the two are very close — and shows no halos, or any other artifacts that I can see.

I’ll skip over the various Lightroom sharpening options, and just include the images at the end. We’ve seen before that these don’t provide better performance than no sharpening when examined at the pixel-peeping level, although they might when printed.

EWA, deblur = 50

EWA, deblur = 50

For this crop, EWA looks a lot like Lightroom’s export processing, but with some lightening of the first third of the tone curve in high-spatial frequency areas. Look at the clock near the white flower, and the green stems near the yellow flower at the upper left corner.

 

EWA, deblur  = 100

EWA, deblur = 100

With the deblur dialed up to 100, the image crisps up nicely. The downside is mild haloing around the clock and the stems.

Lightroom Export, Low Sharpening

Lightroom Export, Low Sharpening

Lightroom Export, Standard Sharpening

Lightroom Export, Standard Sharpening

Lightroom Export, High Sharpening

Lightroom Export, High Sharpening

In general, the differences with this scene are less striking than with the artificial targets used in previous posts.

Comparing downsampling algorithm noise performance

In previous posts, we’ve seen numerical results of how various downsampling algorithms deal with Gaussian noise. Now it’s time to look at some pictures.

The algorithms are:

  • Photoshop’s bilinear interpolation
  • Ps bicubic sharper
  • Lightroom export with no sharpening
  • Lr export with Low, Standard, and High sharpening for glossy paper
  • A complicated filter based on Elliptical Weighted Averaging (EWA), performed at two gammas and blended at two sharpening levels

I haven’t posted noise graphs on the EWA algorithms, but I will do so soon.

The target image is Bruce Lindbloom’s desk, as captured by my camera simulator at ISO 3200, with a 1.25 micrometer (um) pixel pitch, producing an image that is quite noisy.

The other important parameters that I used in this run with the camera simulator are:

  • Simulated Otus 55mm f/1.4 lens
  • Aperture: f/5.6
  • Bayer CFA with sRGB filter characteristics
  • Fill factor = 1.0
  • 0.375 pixel phase-shift AA filter
  • Diffraction calculated at 450nm for blue plane, 550nm for the green plane, and 650nm for the red plane
  • Full well capacity = 1600 electrons per square um
  • Read noise sigma = 1.5 electrons
  •  Bilinear interpolation demosaicing

All images have been enlarged 3x using nearest neighbor before being JPEG’d.

Here’s the scene as captured by the camera above with the pixel pitch set to 5 um:

 

5 um camera

5 um camera

And here are images from the 1.25 um version of the camera (which are quite a bit noisier than the image from the camera with the larger pixels) downsampled to the resolution of the coarser-pixel camera. Because the read noise is assumed constant, in general there is more noise in the 1.25 um camera images even after downsizing than in the 5 um camera image.

 

Bilinear interpolation

Bilinear interpolation

Credible noise performance.

BiCubic Sharper

BiCubic Sharper

The sharpening emphasizes the noise.

EWA with deblur = 50

EWA with deblur = 50

Less noise than the bilinear interpolation image. Shadow areas are lighter.

EWA with deblur = 100

EWA with deblur = 100

About the same noise as the bilinear interpolation image, but crisper.

Lightroom Export No Sharpening

Lightroom Export No Sharpening

Very slightly more noise than EWA with deblur = 50.

Lightroom Export Low Sharpening

Lightroom Export Low Sharpening

More noise.

Lightroom Export Standard Sharpening

Lightroom Export Standard Sharpening

Still more noise.

Lightroom Export High Sharpening

Lightroom Export High Sharpening

Even more noise.

Note that at this resolution, the paper clips have turned into smoke.

Comparing downsampling algorithms — Lindbloom’s desk

This is the second in a series of posts showing images that have been downsampled using several different algorithms:

  • Photoshop’s bilinear interpolation
  • Ps bicubic sharper
  • Lightroom export with no sharpening
  • Lr export with Low, Standard, and High sharpening for glossy paper
  • A complicated filter based on Elliptical Weighted Averaging (EWA), performed at two gammas and blended at two sharpening levels

The last algorithm is what I consider to be the state of the art in downsampling. It’s implemented using a script that Bart van der Wolf wrote for ImageMagick, an image-manipulation program with resampling software written by Nicolas Robidoux and his associates.

The test target I’m using is a ray-traced image of an imaginary version of Bruce Lindbloom’s desk, resized to 15% of its original linear dimensions. This image, being synthetic, has no photon noise and is thus a good way to judge the performance of the various algorithms without regard to how they deal with noise.

Here’s the whole image.

DeltaEp2s50

Here are crops from the downsampled images after having been enlarged 400% using nearest neighbor and JPEG’d. My comments are below each image.

 

Bicubic Sharper

Bicubic Sharper

There is haloing visible on the edges of the cubes in the color solid and in some of the patches in the Munsell chart. There is a striking change in apparent brightness of the parts of the paper clips that are near the black parts of the desktop. In fact, all the downsampled images suffer from this defect to a greater or lesser degree. There are halos around the black figures on the desktop. This is not good performance.

Bilinear Interpolation

Bilinear Interpolation

There are no halos around the Munsell patches or the color solid cubes. The edges of the Munsell patches are soft. There are some brightness anomalies in the paper clips, but they’re nowhere near as bad as with bicubic sharper.

EWA deblur = 50

EWA deblur = 50

The EWA image with moderate deblur has the most realistic depiction of the paper clips. There is no haloing at all. Very good performance.

EWA deblur = 100

EWA deblur = 100

Stepping up the deblurring to 100 surprisingly doesn’t affect the paper clips much. There is a hint of aliasing in the yellow Munsell patch.

Lightroom Export High Sharpening

Lightroom Export High Sharpening

The paper clips are even worse than with bicubic sharper. There is distinct haloing. The haloing on the magenta Munsell patch is associated with a hue shift.

Lightroom Export Low Sharpening

Lightroom Export Low Sharpening

Not too bad performance on the paper clips. A little haloing, but also not bad.

Lightroom Export No Sharpening

Lightroom Export No Sharpening

More modulation of the paper clip brightness by the background than EWA with deblur = 50. The image is less punchy, too, but I think it’s in second place.

Lightroom Export Standard Sharpening

Lightroom Export Standard Sharpening

As you might expect, halfway between Lightroom Low and Lightroom High sharpening. Too much haloing for my tastes, but it might be OK if printed on a printer that rolls off the high spatial frequencies.

Comparing downsampling algorithms — ISO 12233

For the next few posts, I’ll be showing images that have been downsampled using several different algorithms:

  • Photoshop’s bilinear interpolation
  • Ps bicubic sharper
  • Lightroom export with no sharpening
  • Lr export with Low, Standard, and High sharpening for glossy paper
  • A complicated filter based on Elliptical Weighted Averaging (EWA), performed at two gammas and blended at two sharpening levels

The last algorithm appears to me to be pretty much the state of the art in downsampling. It’s implemented using a script that Bart van der Wolf wrote for ImageMagick, an image-manipulation program with resampling software written by Nicolas Robidoux and his associates. I’ll be reporting on it later; I’m still working out some of the details with Bart.

The first test chart I’m using is the ISO 12233 chart, resized to 10% of its original linear dimensions. This is useful to see how much aliasing the various algorithms allow, and also to look for edge artifacts. The chart I used is a low-contrast version so that overshoots will be visible.

12233LoCp1s50

I’ll show you crops that have been enlarged 4x using nearest neighbor and JPEG’d. If you want to see the original, uncompressed Photoshop stack, please contact me.

Bilinear interpolation

Bilinear interpolation

A lot of aliasing, extending from the slanted edges marked with 4 clear through 10. The worst of all the algorithms in this regard. No haloing, no crunchiness.

 

Bicubic Sharper

Bicubic Sharper

There is quite a bit of aliasing in the slanted lines marked with 4. The aliasing is less than with bilinear, but still visible in the lines marked 10. Haloing is visible around the slanted edge and the crop marks. Numbers are crunchy.

Lightroom, no sharpening

Lightroom, no sharpening

Almost no haloing. Aliasing low in lines marked with 4, and invisible above that.

Lightroom, low sharpening

Lightroom, low sharpening

Some haloing. The sharpening makes some high frequency aliasing visible, but not bothersome.

Lightroom standard sharpening

Lightroom standard sharpening

A little haloing. A little more aliasing visible.

Lightroom, high sharpening

Lightroom, high sharpening

Distinct haloing, if you’re looking for it, but not bad compared to bicubic sharper.

EWA, deblur = 100

EWA, deblur = 100

Areas with high spatial frequency are lighter than with the other methods. Aliasing is slightly less than Lr with no sharpening. Very slight haloing.

EWA deblur 50

EWA deblur 50

No haloing. Best control of aliasing of all. Best delineation of the slanted lines labeled 2.

Stay tuned for more images.

Lightroom downsizing: export sharpening noise effects

Yesterday I reported on the amount of noise, and the spectra, of noisy test images exported from Lightroom with sharpening turned off. Today we’ll see what happens when it’s set.

Here’s the test protocol. The target image is a 4000×4000 sRGB image, with each plane filled with a constant signal of half scale (0.5, 127.5, or 32767.5, depending on how you think of it), with Gaussian noise with a standard deviation of 1/10 of full scale added to it. The image was created in Matlab, written out as a 16-bit TIFF, imported into Lightroom, downsized by various amounts as it was exported as a 16-bit TIFF, read back into Matlab, and the green plane analyzed there. All three amounts of sharpening were tested.
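
In Matlab, generating that target looks roughly like this (a sketch; the file name is hypothetical):

    N     = 4000;
    sigma = 0.1;                                 % one tenth of full scale
    img   = 0.5 + sigma * randn(N, N, 3);        % constant half scale plus Gaussian noise
    img   = min(max(img, 0), 1);                 % clip to [0, 1]; at five sigma this is negligible
    imwrite(uint16(65535 * img), 'noise_target.tif');   % write a 16-bit TIFF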

RMS noise, aka standard deviation, of the downsampled images as measured in sRGB’s gamma of 2.2:

rmsnoiseLrsharpWperfectGraph

As expected, the sharpening increases the noise, with more sharpening increasing the noise more.

What do the spectra look like?
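
Before showing them, here’s roughly how I estimate them: read back the exported TIFF, keep the green plane, and radially average the 2-D power spectrum. This is a simplified Matlab sketch, not the exact analysis code behind the plots, and the file name is hypothetical:

    x = double(imread('lr_export.tif')) / 65535;    % the exported 16-bit TIFF
    g = x(:, :, 2);                                 % keep the green plane
    g = g - mean(g(:));                             % remove the dc component
    P = abs(fftshift(fft2(g))).^2;                  % 2-D power spectrum
    [c, r] = meshgrid(1:size(g, 2), 1:size(g, 1));  % column and row indices
    rad = round(hypot(c - size(g, 2)/2 - 1, r - size(g, 1)/2 - 1));  % radial frequency bin
    nps = accumarray(rad(:) + 1, P(:), [], @mean);  % radially averaged noise power spectrum
    plot(10 * log10(nps(2:end)))                    % in dB versus spatial frequency, skipping the dc bin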

LrGPStd

LrGPLo

LrGPHi

The filters are quite restrained, especially when compared to the meat-axe that is Photoshop’s bicubic sharper:

BiCuS50pct

Things look pretty much the same at other downsizing ratios:

Lrp9GPHi

Lrp1GPHi


Noise effects in Lightroom downsized exporting

In the last two posts, I delved into how well Photoshop (Ps) does in minimizing photon noise when downsizing images using bilinear and bicubic sharper interpolation. Today I’m turning my attention to Lightroom (Lr).

With Lr, you don’t get to choose your resampling algorithm or the gamma of the space in which Lr does the resampling. You just specify the size of the output image and let Lr do its thing. Fortunately, it does some very good things, IMHO better than either of Ps’s recommended downsizing algorithms.

I fed Lr my usual 4000×4000 test image, with a half-scale constant (dc, if you will) and tenth-scale standard deviation Gaussian noise added. The space was Gray Gamma 2.2. Lr can’t export in that space, or, if it can, I can’t find that space in the drop-down menu in the export dialog. So I had it export in sRGB, which has the same gamma, as an uncompressed TIFF with sharpening turned off, and I threw away the red and blue planes after I got the image into Matlab for analysis.

Here is the ac rms value (aka standard deviation) of exported images at various magnifications:

rmsnoiseLrGraph

I added orange dots corresponding to ideal noise behavior:

rmsnoiseLrWperfectGraph

The noise reduction is in all cases slightly greater than in the ideal case. This behavior made me suspect that Lr was attenuating some of the higher spatial frequencies. I took a look.

Lrp95 Lrp9 Lrp8

Lrp5 Lrp2 Lrp1

This is great performance. The post-sampling spectra are substantially independent of the magnification. I looked at all the other spectra and they all looked materially the same.

There’s about three dB of high-frequency attenuation. I will experiment with Lr’s sharpening settings to see what they do to the noise level and the spectra.