Photographers often notice that objects near the edges of wide-angle images look stretched or distorted. A round object can look like an ellipse, and people standing near the sides of the frame can appear unnaturally wide. This effect is usually blamed on the lens, as if wide-angle optics inherently distort the world.
What’s actually happening is more interesting.
Rectilinear lenses don’t distort shapes on a flat frontoparallel plane (one parallel to the camera’s sensor and perpendicular to the optical axis)
A rectilinear lens (the standard perspective projection used by most photographic lenses) has one key geometric property:
Straight lines in the world project to straight lines in the image.
The projection maps a 3D world point (X, Y, Z) to image coordinates (x, y), where f is the focal length:
x = f * X / Z
y = f * Y / Z
If the scene consists of a flat plane perpendicular to the optical axis, then every point on that plane has the same depth Z = Z0. The projection becomes a uniform scaling of the coordinates:
x = (f / Z0) * X
y = (f / Z0) * Y
A uniform scale preserves shapes. A circle stays a circle. A square stays a square. Angles stay angles. Even if the field of view is extremely wide, an ideal rectilinear lens will not turn circles into ellipses on a frontoparallel plane.
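This is easy to check numerically. A minimal sketch using the pinhole equations above (the focal length and depth are made-up numbers):

```python
import math

def project(X, Y, Z, f=1.0):
    """Ideal rectilinear (pinhole) projection: x = f*X/Z, y = f*Y/Z."""
    return f * X / Z, f * Y / Z

# A circle of radius 1 on a frontoparallel plane at depth Z0 = 5,
# centered well off-axis at (X, Y) = (10, 0).
Z0 = 5.0
pts = [project(10 + math.cos(t), math.sin(t), Z0)
       for t in (2 * math.pi * k / 360 for k in range(360))]

xs = [p[0] for p in pts]
ys = [p[1] for p in pts]
width = max(xs) - min(xs)
height = max(ys) - min(ys)
print(width, height)  # equal extents: the circle projects to a circle
```

Even though the circle sits far from the optical axis, its projected width and height match: the frontoparallel plane is scaled uniformly by f / Z0.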
Where does the wide-angle look come from?
The stretching comes from the scene, not the lens. Wide-angle lenses capture a larger portion of the world, and the parts near the edges of the frame are usually not on a frontoparallel plane. In everyday scenes, surfaces toward the edges are viewed from a more oblique angle, and from a different distance, than surfaces at the center.
These two effects — tilt and distance variation — are what produce the characteristic stretching.
- Surfaces at the edges are typically tilted relative to the camera. If you look straight at a circle on a wall, you see a circle. If you look at the same circle from the side, it becomes an ellipse. The same thing happens in wide-angle photos. The lens is still facing straight ahead, but the scene near the corners is not. A person standing near the edge of the frame is turned partly sideways relative to the camera’s viewpoint. A circle painted on a floor or ceiling is seen from an angle. Those tilted surfaces naturally produce elliptical projections.
- Objects near the edges are often physically closer to the camera. Perspective magnification is proportional to 1 / Z. If an object is closer to the camera, it is magnified more strongly. Wide-angle lenses often place you physically close to your subjects, so the parts of the scene at the edges of the frame are not only tilted — they are also closer than the central region. That combination makes them appear stretched.
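Both effects fall out of the same pinhole model. A sketch of the tilt effect, with made-up numbers: a unit circle rotated 60° about the vertical axis, placed far enough away that the depth variation across it is mild, so the tilt dominates:

```python
import math

def project(X, Y, Z, f=1.0):
    """Ideal rectilinear (pinhole) projection."""
    return f * X / Z, f * Y / Z

tilt = math.radians(60)   # circle tilted 60 degrees about the Y axis
Z0 = 50.0                 # far enough that perspective across it is mild
pts = []
for k in range(360):
    t = 2 * math.pi * k / 360
    # Circle in its own plane, then rotated about the vertical axis.
    X = math.cos(t) * math.cos(tilt)
    Z = Z0 + math.cos(t) * math.sin(tilt)
    Y = math.sin(t)
    pts.append(project(X, Y, Z))

xs = [p[0] for p in pts]
ys = [p[1] for p in pts]
width, height = max(xs) - min(xs), max(ys) - min(ys)
print(width / height)  # ≈ cos(60°) = 0.5: the circle projects to an ellipse
```

The projected aspect ratio is approximately the cosine of the tilt angle, which is exactly what you see when you view a circle obliquely with your own eyes. The distance effect is even simpler: magnification is f / Z, so halving Z doubles the projected size.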
The stretching is the world, faithfully projected
A rectilinear wide-angle lens is not distorting the world. It is revealing a wider slice of it, including surfaces that face the camera at oblique angles, are at different distances, and occupy a larger angular extent in the field of view. The edges look stretched because the scene itself looks stretched from that vantage point. The lens is simply obeying the geometry of perspective.
The wide-angle look is not a flaw of the lens. It is a geometric consequence of perspective when you capture a large field of view from a fixed position. Objects at the edges of the frame are usually closer to the camera and oriented at an angle, and an ideal rectilinear projection faithfully records those facts. That is why real scenes photographed with wide-angle lenses show stretching at the edges, even though a perfectly frontoparallel test chart would not.
I wrote a lens simulator to illustrate this effect, using a spherical target with black circles on it and the camera at the center of the sphere. Here are results for a 32mm×24mm sensor and a 33mm×44mm one, at equivalent focal lengths.
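The simulator itself isn't reproduced here, but the idea can be sketched. Circles on a sphere centered at the camera all subtend the same angle, so any change in their projected shape comes purely from the rectilinear projection. The focal length and circle size below are assumed illustration values, not the ones used for the figures above:

```python
import math

def stretch_at(field_angle_deg, f=16.0, half_angle_deg=2.0):
    """Project a small circle on a camera-centered sphere, placed at the
    given field angle off-axis, and return its projected aspect ratio
    (radial extent / tangential extent)."""
    a = math.radians(field_angle_deg)
    r = math.radians(half_angle_deg)
    xs, ys = [], []
    for k in range(360):
        t = 2 * math.pi * k / 360
        # Small circle of angular radius r around the +Z direction...
        dx = math.sin(r) * math.cos(t)
        dy = math.sin(r) * math.sin(t)
        dz = math.cos(r)
        # ...rotated about the Y axis to the field angle a.
        X = dx * math.cos(a) + dz * math.sin(a)
        Y = dy
        Z = -dx * math.sin(a) + dz * math.cos(a)
        xs.append(f * X / Z)
        ys.append(f * Y / Z)
    radial = max(xs) - min(xs)
    tangential = max(ys) - min(ys)
    return radial / tangential

for angle in (0, 20, 40, 50):
    print(angle, round(stretch_at(angle), 3))
```

On axis the circle projects as a circle; off axis it stretches radially by roughly 1 / cos(θ), which is why the corners of a wide-angle frame show the strongest elongation.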




Tim Wilson says
If one moves close enough to the print that the viewing angle-of-view matches the very wide capture angle-of-view, doesn’t all the apparent stretching go away?
JimK says
It does indeed.
Craig Stocks says
Viewing distance can also play into it. If you view a print from the “ideal” viewing distance, the appearance of wide-angle distortion disappears. The ideal viewing distance becomes very close for wide-angle lenses, but if you can predict the viewing distance you can match the lens perspective and print size to create an immersive experience. The reason a 50mm lens is considered normal is that the ideal viewing distance for a 5×7 or 8×10 print is about arm’s length; the image looks normal when a typical print is viewed at a typical distance.
JimK says
That’s true.
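The arithmetic behind Craig’s point can be sketched: the perspective-correct viewing distance for a print is the taking focal length times the enlargement factor. The print and sensor sizes below are assumed examples:

```python
# Perspective-correct viewing distance = focal length * enlargement.
# Assumed example: full-frame capture (24x36 mm) enlarged to an
# 8x10-inch (roughly 203x254 mm) print.
focal_mm = 50.0
enlargement = 203.0 / 24.0            # ~8.5x, using the short side
viewing_distance_mm = focal_mm * enlargement
print(round(viewing_distance_mm))     # ~423 mm: roughly arm's length

# The same print from a 20 mm wide-angle capture:
print(round(20.0 * enlargement))      # ~169 mm: uncomfortably close
```

For the 50mm lens the result lands near arm’s length, matching the comment; for a wide-angle lens the correct distance is so close that prints are almost never viewed from it, which is when the stretching becomes apparent.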
Pieter Kers says
A rectilinear lens sees things more or less as our eyes and brain see the world.
But is our brain not fooling us?
One: we should see things upside down, due to the projection inside the eye.
Two: I stand straight in front of a building. I stretch out my arm with a ruler in my hand and measure the height of the center of the building; the height is A. Now I turn to look toward the side of the building and measure again; the height is B. Of course B is much smaller than A, but my eyes see the building the way a rectilinear lens does, with the sides as high as the center.
Is another projection not truer to reality? Cylindrical, perhaps?
JimK says
The eye delivers only small, precise samples of the world to the brain: only the fovea has the resolution we associate with clear sight, and its field is limited to one or two degrees. Yet we experience a wide, stable, richly detailed scene. The mind achieves this by combining a constant stream of foveal glimpses with memory, attention, and a great deal of prediction.
Every second, the eyes make several rapid saccades. Each one is a jump that places the fovea on a new region of interest. During the brief pause that follows, the visual system extracts the information it needs. It does not store a little photograph. It extracts edges, orientations, color statistics, local textures, and hints about what objects might be present. While the eyes seem still during each fixation, they are actually drifting slightly and making tiny microsaccades. These miniature movements prevent the image from fading and give the brain several slightly shifted looks at the same patch. The shifts act a little like a dither pattern in digital imaging, letting the system gather more information than a single static view would provide.
The brain does not stitch these samples together into a panoramic bitmap. Instead it builds and updates an internal model of the world. The model relies heavily on prior knowledge. The mind expects objects to maintain their shape, to persist even when partially occluded, and to obey the geometry of ordinary rooms, streets, trees, and faces. Incoming foveal evidence is compared to these expectations, and mismatches cause the model to update. Much of what we feel we are seeing at any moment is actually prediction, not current data. The predictions are constrained by incoming samples, but they fill in large gaps.
Right before and during each saccade, neurons in parts of the brain shift their receptive fields to anticipate where objects will land on the retina after the eye moves. This predictive remapping allows the world to feel stable even though the retinal image jumps several times each second. The brain stitches the sequence of fixations together in time rather than in space. It does so by assigning each glimpse a place in a stable three dimensional scene that is remembered across hundreds of milliseconds. Older information fades unless the scene seems static and reliable.
Only the parts of the scene that fall under attention receive detailed representation. Peripheral vision supplies color, motion, and layout cues, but not shape with fine contours. The brain extrapolates the rest. It feels as if the entire field of view is sharp, but this is an illusion supported by memory and prediction. As soon as the fovea lands somewhere else, detailed information from that spot replaces the earlier detail, and the brain acts as if the richness had been there all along.
Short term visual memory carries the load of keeping the scene coherent. It holds fragments from the last several fixations and merges them with long term knowledge about object identity, lighting, geometry, and typical environments. What emerges in consciousness is not a literal picture. It is a stable, continuously updated interpretation of the world, informed by sparse but high quality samples from the fovea and by a model that expects the world to behave in consistent ways.
The result is a kind of ongoing inference. The mind constructs the world from limited data, fills in the gaps with expectations, corrects the model when new evidence demands it, and suppresses awareness of the constant disruptions caused by eye movements. Our visual experience is therefore not something we passively receive. It is something we actively assemble from moment to moment, guided by brief, precise samples and a lifetime of practice interpreting them.