
Does repeated JPEG compression ruin images?

January 19, 2021 JimK 8 Comments

Note: this post has been extensively revised.

I keep reading assertions like this one that was just posted here:

When saving and opening a JPEG file many times in a row, compression will ruin your image.

I knew from hanging around with some of the IBMers working on the original JPEG standard that recompressibility with no change was one of the objectives of the standard. So what happened?

  1. Has the standard changed in that regard?
  2. Are people improperly implementing the standard?
  3. Is the above quoted statement wrong?

I ran a test.

  1. I opened a GFX 50R .psd file in Photoshop, flattened it, and saved it as a TIFF.
  2. I closed the file.
  3. I opened it in Matlab and saved it as a JPEG at the default quality setting.
  4. I opened the JPEG in Matlab.
  5. I converted the file to single precision floating point, and back to unsigned integer with 8 bits of precision.
  6. I changed a pixel in the lower right corner to a value equal to the number of iterations.
  7. I saved that file as a JPEG under a new name.
  8. I repeated steps 4 through 7 a total of 100 times, giving me 100 JPEG files created serially from the original TIFF.
  9. I analyzed differences among the files as described below.
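
For concreteness, here’s a minimal Matlab sketch of steps 3 through 8. The file names are mine, it assumes an 8-bit TIFF, and it relies on Matlab’s default JPEG quality of 75; treat it as an illustration of the procedure, not the exact script I ran.

    % Write the first JPEG from the flattened TIFF at Matlab's default quality (75).
    img = imread('gfx50r_flat.tif');                 % assumed file name
    imwrite(img, 'gen_001.jpg');

    % Open, round-trip the precision, tag one corner pixel, and re-save, repeatedly.
    for k = 2:100
        img = imread(sprintf('gen_%03d.jpg', k-1));
        img = im2uint8(im2single(img));              % single precision and back to 8-bit unsigned
        img(end, end, :) = k;                        % lower right pixel carries the iteration count
        imwrite(img, sprintf('gen_%03d.jpg', k));
    end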

With the 8×8 pixel block containing the lower right corner masked off, files 2 through 100 were identical, meaning that opening, modifying, and saving a JPEG image many times does not change anything outside the 8×8 block in which the change was made.
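
Here’s a sketch of that check, again with assumed file names, and assuming the image dimensions are multiples of 8 so the last 8×8 pixels form a complete DCT block: zero out the corner block in each generation and compare the rest byte for byte.

    % Mask the 8x8 block containing the tagged pixel, then check generations 2-100 match.
    ref = imread('gen_002.jpg');
    ref(end-7:end, end-7:end, :) = 0;
    for k = 3:100
        cur = imread(sprintf('gen_%03d.jpg', k));
        cur(end-7:end, end-7:end, :) = 0;
        assert(isequal(ref, cur), 'Generation %d differs outside the corner block.', k);
    end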

So the confident assertion of the Fstoppers writer is wrong.

It was interesting that the JPEG file created from the TIFF was slightly different from the first JPEG file created from a JPEG.

Here’s a histogram of the difference between the file created from the TIFF and the one created from the JPEG (it’s an RGB file, and I’m plotting the difference for all three planes, so there are three times as many entries as pixels in the file):

For roughly 130 million data bytes, the two JPEGs are equal. However, some difference bytes are not zero. We can see them by removing all the zero bytes from the histogram:

There are fewer than 450 bytes that differ by one, about 150 that differ by two, and a few that differ by three.
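
Here’s a sketch of how such a difference histogram can be computed, with the same assumed file names: gen_001.jpg is the JPEG made from the TIFF, and gen_002.jpg is the first JPEG made from a JPEG.

    % Per-byte differences between the JPEG made from the TIFF and the first
    % JPEG made from a JPEG, counted over all three color planes.
    a = double(imread('gen_001.jpg'));
    b = double(imread('gen_002.jpg'));
    d = abs(a(:) - b(:));                            % one entry per byte: 3 x width x height
    fprintf('%d of %d bytes differ\n', nnz(d), numel(d));
    histogram(d(d > 0));                             % drop the zeros to make the nonzero tail visible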

Conclusions:

  • In Matlab, JPEG recompressions don’t “walk”; they are the same after the first iteration.
  • Changes from the first to the second iteration are small.
  • Changing one 8×8 block does not affect the recompression accuracy of other 8×8 blocks.

I then created a chain of 4 JPEG images in Photoshop, each one derived from the previous one, and saved them at quality = 9.

The first one was different from the second in 480 pixels.

The rest were identical.

Conclusions:

  • In Photoshop, JPEG recompressions don’t “walk”; they are the same after the first iteration.
  • Changes from the first to the second iteration are small.

Of course, it is not practical for me to conduct this experiment for all possible JPEG images.


Comments

  1. Mike B says

    January 19, 2021 at 1:44 pm

    Good test, Jim.
    To be fair to the poster, the test he ran was not the same as yours: he created a new jpg every time he saved and used that for the next, so he ended up with the 99th new version showing terrible artifacts.

    I would always go back to the raw to create a new (different) jpg so his test is a bit moot for me and perhaps many others.

    • JimK says

      January 19, 2021 at 2:32 pm

      The test I ran, when continued, produces identical JPEGs ad infinitum.

    • JimK says

      January 20, 2021 at 1:17 pm

      I would always go back to the raw to create a new (different) jpg so his test is a bit moot for me and perhaps many others.

      Here’s a possible, if a bit far-fetched, real-world situation that I don’t think would be a problem but that the Fstoppers article claims would be a disaster. Let’s say I get a JPEG from somebody to print. I look at it, and I discover a dust spot. I open up the file, fix the dust spot, and save it under the original name. Then the customer calls and tells me to remove a different dust spot. I open up the file, fix the dust spot, and save it under the original name. That sequence repeats itself a few times.

      Because the DCT takes place in 8×8 pixel blocks, as long as none of the dust spot edits were in the same block, they would all have been decompressed, changed, and recompressed only once, even though the whole file had been decompressed and recompressed many times.

  2. CarVac says

    January 20, 2021 at 12:54 pm

    As long as the same quantization matrix is used, resaving it should be lossless.

    Maybe if you alternate between two different quality levels it’ll degrade more.

    • JimK says

      January 20, 2021 at 1:11 pm

      As long as the same quantization matrix is used, resaving it should be lossless.

      Exactly so.

    • David Berryrieser says

      January 28, 2021 at 6:51 pm

      My understanding is that going from a low quality jpeg to a higher quality one will just add a bunch of zeros to the matrix, which would then just be lopped back off going back to lower quality.

      • Gao Yang says

        January 31, 2021 at 4:56 pm

        It depends on the relationship between those quantization tables.
        If the coefficients of the high quality table are multiples of those in the low quality table, then the behavior is as you’ve described. However, if, for example, a pair of corresponding coefficients is 3 in the high quality table and 4 in the low quality table, re-compression using the high quality table could sometimes make things a little bit worse.

  3. Ilya Zakharevich says

    January 20, 2021 at 8:06 pm

    > “Every pixel in the file is zero.”

    All I see is that, vertically, every pixel on this plot stands for 2.6·10⁵ pixels in the original image. All that your graph shows is that there are <1.3·10⁶ pixels with difference 1, <1.3·10⁶ pixels with difference 2, <1.3·10⁵ pixels with difference 3, etc. So JUDGING BY YOUR PLOT, there may be A LOT of pixels with very large differences.

    You need a better metric than this plot to convince an observant reader!

