This is a pretty basic question, but I’ve been asked it a few times, and so I’m creating a post to which I can refer people.
My raw file is x bytes, and when I develop it, it’s 3 times that big. How come?
I’ll first answer the question assuming both files are uncompressed.
Most raw files these days have 14-bit precision. But, since computers are mostly designed around, ahem, byte-sized chunks of data, those 14-bit values generally get padded to 16-bit length before being written to the file. So a 100 MP raw file will contain the metadata, including the JPEG preview image, plus roughly 200 MB of image data. When the image is developed, the demosaicing process produces an R, a G, and a B value for each pixel in the raw file. If the developed image has 16-bit precision, that's 6 bytes per pixel, or 600 MB. Thus the developed file is three times as large as the raw file.
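To make the arithmetic concrete, here's a minimal Python sketch of the byte counting above, assuming a hypothetical 100 MP sensor; the variable names are illustrative, not from any camera SDK.

```python
megapixels = 100                      # hypothetical sensor resolution
pixels = megapixels * 1_000_000

# Raw: one 14-bit sample per pixel, padded to 16 bits (2 bytes).
raw_bytes_per_pixel = 2
raw_image_data = pixels * raw_bytes_per_pixel            # ~200 MB

# Developed: demosaicing yields R, G, and B per pixel, each 16 bits.
developed_bytes_per_pixel = 3 * 2
developed_image_data = pixels * developed_bytes_per_pixel  # ~600 MB

print(f"raw image data:       {raw_image_data / 1e6:.0f} MB")
print(f"developed image data: {developed_image_data / 1e6:.0f} MB")
print(f"ratio: {developed_image_data / raw_image_data:.0f}x")
```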
Lossless compression changes the numbers, but not the ratio. The compression ratio, defined as the compressed file size divided by the uncompressed file size, varies with image content, but generally runs a little over 50%. So the raw file gets compressed to about 100 MB and the developed file to about 300 MB. Why isn't the developed file more compressible? In a perfect world it would be, but the compression algorithms aren't that smart.
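For the curious, here's a rough sketch of measuring a lossless compression ratio, using Python's zlib as a stand-in for whatever lossless codec the camera or raw converter actually uses (this sketch replicates neither). The synthetic gradient-plus-noise image is an assumption, and the printed ratio will vary with the noise level chosen here; as noted above, real ratios depend on image content.

```python
import random
import zlib

random.seed(0)
width, height = 1000, 1000

# Build 16-bit samples: a smooth gradient plus photon-like noise,
# packed little-endian, two bytes per sample. The noisy low bits are
# what limit how far lossless compression can go.
samples = bytearray()
for y in range(height):
    for x in range(width):
        value = (x * 60 + y * 5 + random.randint(0, 511)) & 0xFFFF
        samples += value.to_bytes(2, "little")

compressed = zlib.compress(bytes(samples), 6)
ratio = len(compressed) / len(samples)
print(f"uncompressed: {len(samples) / 1e6:.1f} MB")
print(f"compressed:   {len(compressed) / 1e6:.1f} MB")
print(f"ratio (output/input): {ratio:.0%}")
```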
Den says
If I open a RAW file and immediately save it as a DNG file, it ends up smaller – what is happening in this situation?
JimK says
Compression.