the last word

Photography meets digital computer technology. Photography wins -- most of the time.


The new compute: after the transition

March 18, 2009 JimK

Now that it’s set up, I am happy with the new computer, and I’m even starting to believe it was worth all the trouble. Although air-cooled, it’s as quiet as any water-cooled machine I’ve owned. I’ve thrown a lot of photographic processing at it, and I’ve never seen memory usage above 60% (that’s still a lot of memory -- almost 10 GB).

The specific problem that caused me to upgrade was that intensive simultaneous usage of Photoshop and Lightroom caused swapping and slow performance. That problem is completely solved. In addition, Lightroom itself runs much more crisply; it must be generating more previews in the background now that it has more memory.

The machine is fast. Having more processing power also means faster network transfers between the workstation and the server: I’m now seeing sustained rates of over 400 Mb/s over Gigabit Ethernet, almost double what the old system managed. You’d think the network interface designers would put enough processing power in their hardware to essentially fill the pipe, but that’s never been the case in more than twenty years of local networking on PCs.
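If you want to check numbers like these on your own machine, a minimal sketch is to time a large sequential read from a network share and convert bytes per second to megabits per second. The file path below is hypothetical; point it at any multi-gigabyte file on the server:

import time

# Hypothetical path to a large file on the network share.
SRC = r"\\server\share\big_test_file.bin"
CHUNK = 4 * 1024 * 1024  # read in 4 MB chunks

total = 0
start = time.time()
with open(SRC, "rb") as f:
    while True:
        buf = f.read(CHUNK)
        if not buf:
            break
        total += len(buf)
elapsed = time.time() - start

print(f"{total} bytes in {elapsed:.1f} s = {total * 8 / elapsed / 1e6:.0f} Mb/s")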

Recently used programs load with alarming speed, as there is now a lot of room for disk caching. That only emphasizes the sluggish startup of programs that have to be loaded from the disk, and points to the fact that the disk system is now the main bottleneck. The ways to get faster disk performance for long reads are a) spin the platter faster, b) increase the data density, c) use more individually-actuated heads (which, these days, means more drives), and d) increase the speed of the disk interface.

Faster-spinning disks run hotter and are less reliable; both effects are highly non-linear. They also consume far more power. All in all, ramping up disk revs isn’t the way to go.

Increased data density is a clear winner; not only does it make for larger drives, it makes them faster. There is still room for big improvement here.
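A back-of-envelope model shows why both spin rate and density help: sustained sequential throughput is roughly the bytes stored on one track times the revolutions per second. A minimal sketch, with assumed (not measured) drive parameters:

def sustained_mb_per_s(bytes_per_track, rpm):
    # Sequential reads stream roughly one track per revolution,
    # ignoring seeks and head switches.
    return bytes_per_track * (rpm / 60.0) / 1e6

bytes_per_track = 1_000_000  # assume roughly 1 MB per track

for rpm in (5400, 7200, 10000, 15000):
    print(f"{rpm:>5} rpm: ~{sustained_mb_per_s(bytes_per_track, rpm):.0f} MB/s")

# Doubling areal density raises the linear density along the track by about
# sqrt(2), so capacity and sequential speed improve together -- without the
# heat and power penalties of spinning the platter faster.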

Striping data across two disks is a reasonable thing to do, and can double performance. Going beyond that to five or six disks gets to be pretty silly: reliability suffers, recovery is difficult, and most people don’t need all that much storage. I used striping and mirroring in the same array (aka RAID 10) many years ago, but these days it just seems like a waste of power and space, and it’s noisy to boot.
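The tradeoff is easy to put numbers on. A minimal sketch, assuming ideal striping and independent drive failures (both assumptions are generous to large arrays); the per-drive rate and failure probability are illustrative, not measured:

single_drive_mb_s = 100     # assumed sequential rate for one drive
p_fail_year = 0.03          # assumed chance one drive dies within a year

for n in (1, 2, 4, 6):
    # Striped (RAID 0) reads scale with drive count, but any single
    # failure loses the whole array.
    throughput = n * single_drive_mb_s
    p_survive = (1 - p_fail_year) ** n
    print(f"{n} drives: ~{throughput} MB/s ideal, "
          f"{p_survive:.1%} chance of surviving the year")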

I think the way to go is flash storage, but not configured as a magnetic-disk emulator; using an interface designed for spinning media is an anachronism for solid-state memory. Rather, there should be both hardware and OS changes to support a nonvolatile flash disk cache on the motherboard, interfaced to the dynamic memory subsystem with a fast, wide bus. Let’s imagine a machine with a 1 TB OS disk and 100 GB of flash cache. Over a week or so of usage, an intelligently designed cache-control program would have all of the OS that you need to boot up and all the commonly-used apps in the cache. The machine could boot and load the most important apps without touching the hard disk. Writes could be cached as well, because of the nonvolatility of the flash memory. Background flushing could keep the spinning disk up to date for backups. I haven’t seen this type of system proposed, but I’m not paying much attention to the computer design literature these days.
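The cache-control logic itself is easy to sketch. Here is a toy model of the idea, assuming an LRU replacement policy and a dirty-block list for the background flusher; all names are hypothetical and a plain dict stands in for the spinning disk:

from collections import OrderedDict

class FlashCache:
    """Toy write-back cache: reads fill the flash, writes are absorbed by
    nonvolatile flash and flushed to the spinning disk in the background."""

    def __init__(self, capacity_blocks, disk):
        self.capacity = capacity_blocks
        self.disk = disk                  # stands in for the spinning disk
        self.cache = OrderedDict()        # block -> data, kept in LRU order
        self.dirty = set()                # blocks not yet flushed to disk

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)     # fast path: flash hit
            return self.cache[block]
        data = self.disk.get(block)           # slow path: spinning disk
        self._install(block, data)
        return data

    def write(self, block, data):
        self._install(block, data)
        self.dirty.add(block)                 # safe: the flash is nonvolatile

    def background_flush(self):
        """Run when the disk is idle; keeps it up to date for backups."""
        for block in list(self.dirty):
            self.disk[block] = self.cache[block]
            self.dirty.discard(block)

    def _install(self, block, data):
        self.cache[block] = data
        self.cache.move_to_end(block)
        while len(self.cache) > self.capacity:
            old, old_data = self.cache.popitem(last=False)  # evict LRU block
            if old in self.dirty:            # dirty data must reach the disk
                self.disk[old] = old_data
                self.dirty.discard(old)

disk = {}                       # pretend spinning disk
cache = FlashCache(1024, disk)
cache.write(7, b"boot block")   # absorbed by flash; disk is briefly stale
cache.background_flush()        # disk now current, ready for backup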

Technical, The Bleeding Edge
