
Why wide-angle lenses stretch the edges of the frame

December 2, 2025 · JimK

Photographers often notice that objects near the edges of wide-angle images look stretched or distorted. A round object can look like an ellipse, and people standing near the sides of the frame can appear unnaturally wide. This effect is usually blamed on the lens, as if wide-angle optics inherently distort the world.

What’s actually happening is more interesting.

Rectilinear lenses don’t distort shapes on a flat frontoparallel plane (one parallel to the camera’s sensor plane and perpendicular to the optical axis)

A rectilinear lens (the standard perspective projection used by most photographic lenses) has one key geometric property:

Straight lines in the world project to straight lines in the image.

The projection maps a 3D world point (X, Y, Z) to image coordinates

x = f * X / Z

y = f * Y / Z

If the scene consists of a flat plane perpendicular to the optical axis, then every point on that plane has the same depth Z = Z0. The projection becomes a uniform scaling of the coordinates:

x = (f / Z0) * X

y = (f / Z0) * Y

A uniform scale preserves shapes. A circle stays a circle. A square stays a square. Angles stay angles. Even if the field of view is extremely wide, an ideal rectilinear lens will not turn circles into ellipses on a frontoparallel plane.
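
Here is a quick numerical check of that claim, as a minimal Python sketch (the focal length, subject distance, and circle size are arbitrary illustrative values):

```python
import numpy as np

# Ideal rectilinear (pinhole) projection of world points onto the image plane.
def project(X, Y, Z, f):
    return f * X / Z, f * Y / Z

f = 20.0      # focal length, mm (illustrative)
Z0 = 1000.0   # depth of the frontoparallel plane, mm (illustrative)

# A 100 mm radius circle on that plane, centered well off-axis.
t = np.linspace(0.0, 2.0 * np.pi, 360)
X = 800.0 + 100.0 * np.cos(t)
Y = 100.0 * np.sin(t)

x, y = project(X, Y, Z0, f)

# Every projected point lies the same distance from the projected center,
# so the image of the circle is still a circle, even far off-axis.
cx, cy = project(800.0, 0.0, Z0, f)
r = np.hypot(x - cx, y - cy)
print(r.min(), r.max())   # both are (f / Z0) * 100 = 2.0 mm, up to rounding
```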

Where does the wide-angle look come from?

The stretching comes from the scene, not the lens. Wide-angle lenses capture a larger portion of the world, and the parts near the edges of the frame are usually not on a frontoparallel plane. In everyday scenes, surfaces toward the edges are viewed from a more oblique angle and from a different distance than surfaces at the center.

These two effects — tilt and distance variation — are what produce the characteristic stretching.

  1. Surfaces at the edges are typically tilted relative to the camera. If you look straight at a circle on a wall, you see a circle. If you look at the same circle from the side, it becomes an ellipse. The same thing happens in wide-angle photos. The lens is still facing straight ahead, but the scene near the corners is not. A person standing near the edge of the frame is turned partly sideways relative to the camera’s viewpoint. A circle painted on a floor or ceiling is seen from an angle. Those tilted surfaces naturally produce elliptical projections.
  2. Objects near the edges are often physically closer to the camera. Perspective magnification is proportional to 1 / Z. If an object is closer to the camera, it is magnified more strongly. Wide-angle lenses often place you physically close to your subjects, so the parts of the scene at the edges of the frame are not only tilted — they are also closer than the central region. That combination makes them appear stretched, as the sketch after this list shows.
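
Here is a Python sketch of both effects at once, with an illustrative geometry: a 100 mm circle on a wall turned 60 degrees away from frontoparallel, centered off-axis, so one side of it sits nearer the camera than the other.

```python
import numpy as np

def project(X, Y, Z, f):
    # Same ideal rectilinear projection as above.
    return f * X / Z, f * Y / Z

f = 20.0                        # mm, illustrative
theta = np.radians(60.0)        # tilt of the surface away from frontoparallel

# Parametrize a 100 mm circle in the plane of the tilted wall,
# centered 800 mm off-axis at a nominal depth of 1000 mm.
t = np.linspace(0.0, 2.0 * np.pi, 360)
u = 100.0 * np.cos(t)           # in-plane coordinate along the tilt
v = 100.0 * np.sin(t)           # in-plane coordinate perpendicular to it
X = 800.0 + u * np.cos(theta)   # tilt foreshortens the world-space X span...
Z = 1000.0 - u * np.sin(theta)  # ...and brings one side closer (smaller Z)
Y = v

x, y = project(X, Y, Z, f)

# The near side is magnified more than the far side, and the net result
# is a horizontally stretched ellipse in the image.
print(x.max() - x.min(), y.max() - y.min())   # about 4.8 mm vs 4.0 mm
```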

The stretching is the world, faithfully projected. A rectilinear wide-angle lens is not distorting the world. It is revealing a wider slice of it, including surfaces that face the camera at oblique angles, are at different distances, and occupy a larger angular extent in the field of view. The edges look stretched because the scene itself looks stretched from that vantage point. The lens is simply obeying the geometry of perspective.

The wide-angle look is not a flaw of the lens. It is a geometric consequence of perspective when you capture a large field of view from a fixed position. Objects at the edges of the frame are usually closer to the camera and oriented at an angle, and an ideal rectilinear projection faithfully records those facts. That is why real scenes photographed with wide-angle lenses show stretching at the edges, even though a perfectly frontoparallel test chart would not.

I wrote a lens simulator to illustrate this effect, using a spherical target with the camera at the center of the sphere and black circles on the target. Here are results with a 32mm×24mm sensor and a 44mm×33mm one, at equivalent focal lengths.

[Simulator renderings for the two sensor formats]
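
The simulator itself is not reproduced here, but a stripped-down sketch of the idea, drawing circles on a camera-centered sphere and projecting them through an ideal rectilinear lens, could look like this (the focal length, angular positions, and circle size are illustrative, not the values behind the renderings above):

```python
import numpy as np

def project(X, Y, Z, f):
    # Ideal rectilinear projection, as in the sketches above.
    return f * X / Z, f * Y / Z

def circle_on_sphere(az_deg, el_deg, radius_deg, n=360):
    """Sample a circle of angular radius radius_deg drawn on a unit sphere
    centered on the camera, rotated to azimuth/elevation (degrees)."""
    r = np.radians(radius_deg)
    t = np.linspace(0.0, 2.0 * np.pi, n)
    # Circle around the optical (+Z) axis...
    p = np.stack([np.sin(r) * np.cos(t),
                  np.sin(r) * np.sin(t),
                  np.full_like(t, np.cos(r))])
    # ...rotated off-axis about the vertical and horizontal axes.
    az, el = np.radians(az_deg), np.radians(el_deg)
    Ry = np.array([[np.cos(az), 0.0, np.sin(az)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(az), 0.0, np.cos(az)]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(el), -np.sin(el)],
                   [0.0, np.sin(el), np.cos(el)]])
    return Ry @ Rx @ p

f = 24.0   # mm, illustrative
for az in (0, 20, 40):   # march the same circle toward the edge of the field
    X, Y, Z = circle_on_sphere(az, 0.0, 5.0)
    x, y = project(X, Y, Z, f)
    # On axis the extents match; off axis the circle renders wider than tall.
    print(az, round(x.max() - x.min(), 2), round(y.max() - y.min(), 2))
```

Every circle on the sphere is the same angular size as seen from the camera, but the off-axis ones are tilted relative to the sensor plane, so the rectilinear projection renders them as ellipses that grow more stretched toward the edge of the field.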


Comments

  1. Tim Wilson says

    December 3, 2025 at 5:37 am

    If one moves close enough to the print that the viewing angle-of-view matches the very wide capture angle-of-view, doesn’t all the apparent stretching go away?

    • JimK says

      December 3, 2025 at 9:51 am

      It does indeed.

  2. Craig Stocks says

    December 3, 2025 at 8:11 am

    Viewing distance can also play into it. If you view a print from the “ideal” viewing distance, the appearance of wide-angle distortion disappears. The ideal viewing distance becomes very close for wide-angle lenses, but if you can predict the viewing distance you can match the lens perspective and print size to create an immersive experience. The reason a 50mm lens is considered normal is that the ideal viewing distance for a 5×7 or 8×10 print is about arm’s length; the image looks normal when a typical print is viewed at a typical distance.
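
    The arithmetic behind that rule of thumb, as a quick sketch (it assumes perspective looks natural when viewing distance equals focal length times enlargement, with a full-frame 36 mm capture width behind a 10 inch / 254 mm print width):

```python
# Rule of thumb (assumed here): natural perspective requires
#   viewing distance = focal length * enlargement
frame_width_mm = 36.0    # full-frame capture width (assumption)
print_width_mm = 254.0   # a 10 inch wide print
focal_length_mm = 50.0

enlargement = print_width_mm / frame_width_mm          # about 7.1x
viewing_distance_mm = focal_length_mm * enlargement    # about 353 mm
print(viewing_distance_mm)   # roughly 35 cm, about arm's length
```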

    • JimK says

      December 3, 2025 at 9:51 am

      That’s true.

  3. Pieter Kers says

    December 9, 2025 at 3:14 am

    A rectilinear lens sees things more or less as our eyes and brain see the world.
    But is our brain not fooling us?

    One:
    We should see things upside down, due to the projection inside the eye.

    Two:
    I stand straight in front of a building.
    I stretch out my arm with a ruler in my hand and measure the height of the centre of the building: the height is A.
    Now I change the direction I look, toward the side of the building, and measure again: the height is B.
    Of course B is much smaller than A.

    But my eye sees the building, like a rectilinear lens, with the sides as high as in the centre.
    Is another projection not more true to reality? Cylindrical?

    • JimK says

      December 9, 2025 at 11:24 am

      The eye delivers only small, precise samples of the world to the brain, because only the fovea has the resolution we associate with clear sight, and its field is limited to one or two degrees. Yet we experience a wide, stable, richly detailed scene. The mind achieves this by combining a constant stream of foveal glimpses with memory, attention, and a great deal of prediction.

      Every second, the eyes make several rapid saccades. Each one is a jump that places the fovea on a new region of interest. During the brief pause that follows, the visual system extracts the information it needs. It does not store a little photograph. It extracts edges, orientations, color statistics, local textures, and hints about what objects might be present. While the eyes seem still during each fixation, they are actually drifting slightly and making tiny microsaccades. These miniature movements prevent the image from fading and give the brain several slightly shifted looks at the same patch. The shifts act a little like a dither pattern in digital imaging, letting the system gather more information than a single static view would provide.

      The brain does not stitch these samples together into a panoramic bitmap. Instead it builds and updates an internal model of the world. The model relies heavily on prior knowledge. The mind expects objects to maintain their shape, to persist even when partially occluded, and to obey the geometry of ordinary rooms, streets, trees, and faces. Incoming foveal evidence is compared to these expectations, and mismatches cause the model to update. Much of what we feel we are seeing at any moment is actually prediction, not current data. The predictions are constrained by incoming samples, but they fill in large gaps.

      Right before and during each saccade, neurons in parts of the brain shift their receptive fields to anticipate where objects will land on the retina after the eye moves. This predictive remapping allows the world to feel stable even though the retinal image jumps several times each second. The brain stitches the sequence of fixations together in time rather than in space. It does so by assigning each glimpse a place in a stable three dimensional scene that is remembered across hundreds of milliseconds. Older information fades unless the scene seems static and reliable.

      Only the parts of the scene that fall under attention receive detailed representation. Peripheral vision supplies color, motion, and layout cues, but not shape with fine contours. The brain extrapolates the rest. It feels as if the entire field of view is sharp, but this is an illusion supported by memory and prediction. As soon as the fovea lands somewhere else, detailed information from that spot replaces the earlier detail, and the brain acts as if the richness had been there all along.

      Short term visual memory carries the load of keeping the scene coherent. It holds fragments from the last several fixations and merges them with long term knowledge about object identity, lighting, geometry, and typical environments. What emerges in consciousness is not a literal picture. It is a stable, continuously updated interpretation of the world, informed by sparse but high quality samples from the fovea and by a model that expects the world to behave in consistent ways.

      The result is a kind of ongoing inference. The mind constructs the world from limited data, fills in the gaps with expectations, corrects the model when new evidence demands it, and suppresses awareness of the constant disruptions caused by eye movements. Our visual experience is therefore not something we passively receive. It is something we actively assemble from moment to moment, guided by brief, precise samples and a lifetime of practice interpreting them.

