Photographers talk about exposure all the time; it is one of the most familiar concepts in photography. But another important idea hides just behind the curtain: the total amount of light that lands on the sensor during an exposure. Understanding the difference between exposure and total light clears up a lot of confusion about image quality, sensor size, and noise performance. Let’s walk through the distinction and then tie the two together with a simple equation.
Exposure: Illuminance Over Time
When you hear “exposure,” think of how much light each unit area of the sensor receives. Luminous exposure is the product of the illuminance at the sensor plane (measured in lux, or lumens per square meter) and the shutter time (in seconds), giving units of lux·seconds. This quantity tells you how much luminous energy lands on each square meter of sensor surface during the exposure. It is an intensity measure: not total light, but light per unit area. That means two cameras with different sensor sizes but the same exposure settings (same f-number and shutter speed, under the same lighting) receive the same amount of light per square meter, which is why exposure controls image brightness regardless of sensor size.
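Here is a minimal Python sketch of that arithmetic. The illuminance and shutter-time values are made-up numbers chosen only to illustrate the units, and the function name is just for this example:

```python
# Luminous exposure H = E * t:
# E is the illuminance at the sensor plane in lux (lm/m^2),
# t is the shutter time in seconds, so H comes out in lux*seconds.

def luminous_exposure(illuminance_lux: float, shutter_s: float) -> float:
    """Return luminous exposure in lux*seconds."""
    return illuminance_lux * shutter_s

# Illustration with arbitrary numbers: 10 lux at the sensor plane for 1/100 s.
H = luminous_exposure(10.0, 1 / 100)
print(f"Luminous exposure: {H:.3f} lux*s")  # -> 0.100 lux*s
```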
Total Light: Bringing in Sensor Area
While exposure is about what each square meter of sensor sees, sometimes we care about the total luminous energy collected by the entire sensor. That depends not only on the exposure, but also on how big the sensor is.
Here is the key equation:
Total luminous energy (in lumen·seconds) = Illuminance (lux) × Exposure time (s) × Sensor area (m²)
This is simply luminous exposure (lux·s) multiplied by area (m²). The area units cancel:
- lux is lumens per square meter (lm/m²)
- multiplying by seconds gives lumen·seconds per square meter (lux·s)
- multiplying by square meters cancels the area
- what remains is lumen·seconds, a unit of total luminous energy
This tells us how much total light the sensor has captured during the exposure.
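A short Python sketch of the full calculation, continuing the example above. The 10 lux and 1/100 s figures are still arbitrary, and the sensor area uses the usual nominal full-frame dimensions of 36 mm × 24 mm:

```python
# Total luminous energy Q = E * t * A:
# E is illuminance at the sensor plane (lux = lm/m^2),
# t is exposure time (s), A is sensor area (m^2).
# The m^2 cancels, leaving lumen*seconds of total light.

def total_luminous_energy(illuminance_lux: float, shutter_s: float,
                          sensor_area_m2: float) -> float:
    """Return total luminous energy in lumen*seconds."""
    return illuminance_lux * shutter_s * sensor_area_m2

# Full-frame area: roughly 36 mm x 24 mm = 864 mm^2 = 8.64e-4 m^2.
Q = total_luminous_energy(10.0, 1 / 100, 8.64e-4)
print(f"Total luminous energy: {Q:.2e} lm*s")  # -> 8.64e-05 lm*s
```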
Why This Matters
Let’s compare two cameras under the same scene lighting and exposure time. One has a full-frame sensor, and the other has a Micro Four Thirds sensor. The illuminance on both sensors is the same, and so is the shutter time, so their exposure is the same. But the full-frame sensor has about four times the area, which means it collects about four times as much total luminous energy. More light means more signal. All else equal, that translates to better dynamic range and lower noise. This is why larger sensors often outperform smaller ones in challenging lighting conditions: not because the exposure is higher, but because they gather more total light.
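Here is a quick sketch of that comparison in Python, using nominal sensor dimensions (36 × 24 mm for full frame, 17.3 × 13 mm for Micro Four Thirds); exact figures vary slightly by camera model:

```python
# Same exposure on two different sensor sizes: the total light
# collected scales with sensor area alone.
# Nominal sensor dimensions (approximate):
#   full frame:        36.0 mm x 24.0 mm
#   Micro Four Thirds: 17.3 mm x 13.0 mm

ff_area_mm2 = 36.0 * 24.0    # ~864 mm^2
mft_area_mm2 = 17.3 * 13.0   # ~225 mm^2

ratio = ff_area_mm2 / mft_area_mm2
print(f"Full frame collects about {ratio:.1f}x the total light")  # -> ~3.8x
```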
Some photographers say that larger sensors collect more light because they are exposed differently. That is not correct. If two sensors have the same f-number and shutter speed, they receive the same exposure. What differs is the total light collected across their surfaces.

Another confusion arises with ISO. People often think smaller sensors require higher ISO to achieve the same exposure. But ISO does not affect the amount of light hitting the sensor. It controls how the camera amplifies the signal after the exposure has already happened.

The key distinction is this: exposure describes how much light each square meter of sensor receives. Total luminous energy describes how much light the entire sensor gathers. One is an intensity; the other is an integral over area.
Summary
Total luminous energy = Exposure × Sensor area
Exposure controls brightness. Sensor area controls how much total light is collected. More area means more signal, which means less noise, greater dynamic range, and better image quality when the exposure time and f-number are held constant. Understanding this distinction helps make sense of “equivalence” arguments in photography. When people debate full-frame versus APS-C or medium format, they are often talking past each other because they conflate exposure with total light. Keeping those two separate—one in lux·seconds, the other in lumen·seconds—goes a long way toward clearing up the confusion.
Jeffrey Horton says
I’m asking here because it is your newest post. I apologize if this doesn’t necessarily relate to this post.
My question is regarding the differences in image quality and color between FSI and BSI sensors. In the last few years I’ve shot with a variety of primarily BSI sensors such as the Nikon Z7, Z7II, Z8, Hasselblad X1D II, X2D, and Leica M11.
I find the color coming straight out of the camera varies quite a bit from camera to camera, with the Hasselblad having what I perceive as the best color, Nikon probably second, and the Leica M11 showing some magenta color cast. With some work in Lightroom I’m able to bring all the files to similar results, but the Hasselblad definitely has the best results with no color correction.
That being said, I previously used a Nikon D810, and that camera had the most amazing color straight out of camera.
I think Nikon had the advantage of using that same sensor in the D800 and D800E before releasing the D810, so maybe they just had a lot of time to perfect the color. I wanted to ask: have you written any articles on the differences in image quality and/or color between FSI and BSI sensors?
Thank you!
JimK says
Back-side illuminated (BSI) sensors improve quantum efficiency by allowing light to reach the photodiodes without passing through metal interconnect layers. In a BSI structure, the wafer is flipped and thinned so that light enters from the back of the sensor, striking the photodiodes directly. This leads to higher sensitivity, especially in small-pixel sensors where front-side obstructions are proportionally more significant.
Another benefit of BSI geometry is that it allows for a thinner optical stack, including the microlenses and color filter array (CFA). Thinner color filters reduce the path length through which photons can scatter laterally, which in turn reduces color channel crosstalk. In FSI sensors, the CFA must sit atop routing layers, often requiring thicker filters to maintain spectral performance and physical separation, which increases the chance that a photon meant for one pixel will be absorbed by an adjacent one, especially after scattering.
The reduced crosstalk in BSI sensors improves color separation and reduces color contamination in shadows and fine detail. This is particularly helpful in high dynamic range scenes or when recovering shadow detail, where subtle color shifts can otherwise become noticeable. While BSI sensors do introduce new challenges, such as more complex manufacturing and the need for precise microlens alignment, their color performance is generally better than that of comparable FSI designs.
OTOH, you should read this:
https://blog.kasson.com/the-last-word/roles-of-camera-and-raw-developer-in-determining-color/