Now that it’s set up, I am happy with the new computer, and I’m even starting to believe it was worth all the trouble. Although air-cooled, it’s as quiet as any water-cooled machine I’ve owned. I’ve thrown a lot of photographic processing at it, and I’ve never seen memory usage above 60% (that’s still a lot of memory: almost 10 GB).
The specific problem that caused me to upgrade was that intensive simultaneous usage of Photoshop and Lightroom caused swapping and slow performance. That problem is completely solved. In addition, Lightroom itself runs much more crisply; it must be generating more previews in the background now that it has more memory.
The machine is fast. More processing power also means faster network transfers between the workstation and the server: I’m now seeing sustained rates of over 400 Mb/s over Gigabit Ethernet, almost double what the old system managed. You’d think the network interface designers would put enough processing power in their hardware to essentially fill the pipe, but that’s never been the case in more than twenty years of local networking on PCs.
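For scale, here’s a quick back-of-the-envelope check (a Python sketch; the 10 GB file size and 200-second transfer time are made-up numbers, not my measurements) showing that even 400 Mb/s fills well under half of a Gigabit pipe:

    # Effective throughput of a transfer, and how much of a Gigabit link it fills.
    def throughput_mbps(bytes_transferred, seconds):
        """Effective throughput in megabits per second."""
        return bytes_transferred * 8 / seconds / 1e6

    # Hypothetical example: a 10 GB file copied to the server in 200 seconds.
    rate = throughput_mbps(10 * 10**9, 200)                # 400 Mb/s
    print(f"{rate:.0f} Mb/s, {rate / 1000:.0%} of the Gigabit line rate")  # 40%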
Recently-used programs load with alarming speed, as there is now a lot of space for disk caching. That only emphasizes the sluggish startup of programs that have to be loaded from the disk, and points to the fact that the disk system is now the main bottleneck. The ways to get faster disk performance for long reads are a) spin the platter faster, b) increase the data density, c) use more individually-actuated heads (which, these days, means more drives), and d) increase the speed of the disk interface.
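To put rough numbers on options a) and b): sustained sequential throughput is approximately the data on one track times the revolutions per second. A sketch (the one-megabyte-per-track figure is an assumption for illustration, not a real drive spec):

    # Rough model: sequential read rate = megabytes per track * revolutions per second.
    def seq_read_mb_per_s(rpm, mb_per_track):
        return mb_per_track * (rpm / 60.0)

    print(seq_read_mb_per_s(7200, 1.0))    # 120 MB/s at 7,200 rpm
    print(seq_read_mb_per_s(15000, 1.0))   # 250 MB/s at 15,000 rpm, with far more heat
    print(seq_read_mb_per_s(7200, 2.0))    # 240 MB/s: doubling density, same rpm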
Faster-spinning disks run hotter and are less reliable; both effects are highly non-linear. They also consume far more power. All in all, ramping up disk revs isn’t the way to go.
Increased data density is a clear winner; not only does it make for larger drives, it makes them faster. There is still room for big improvement here.
Striping data across two disks is a reasonable thing to do, and can double performance. Going beyond that to five or six disks gets to be pretty silly: reliability suffers, recovery is difficult, and most people don’t need all that much storage. I used striping and mirroring (aka RAID 10) in the same array many years ago, but these days it just seems like a waste of power and space, and it’s noisy to boot.
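For the curious, this is the mapping striping (RAID 0) performs: consecutive logical blocks alternate round-robin between the drives, so a long read keeps both spindles busy at once. A minimal sketch (a chunk size of one block, not any particular controller’s layout):

    def stripe_location(logical_block, num_disks=2):
        """Map a logical block to (disk index, block offset on that disk)."""
        return logical_block % num_disks, logical_block // num_disks

    for blk in range(6):
        disk, offset = stripe_location(blk)
        print(f"logical block {blk} -> disk {disk}, offset {offset}")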
I think the way to go is flash storage, but not configured as magnetic-disk emulators; an interface designed for spinning media is an anachronism for solid-state memory. Rather, there should be both hardware and OS changes to support a non-volatile flash disk cache on the motherboard, interfaced to the dynamic memory subsystem with a fast, wide bus. Let’s imagine a machine with a 1 TB OS disk and 100 GB of flash cache. Over a week or so of usage, an intelligently designed cache-control program would have everything the OS needs to boot and all the commonly-used apps in the cache. The machine could boot and load the most important apps without touching the hard disk. Writes could be cached as well, because the flash memory is non-volatile. Background flushing could keep the spinning disk up to date for backups. I haven’t seen this type of system proposed, but I’m not paying much attention to the computer design literature these days.
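To make the idea concrete, here’s a toy model of that write-back behavior (the class and names are invented for illustration, not a real driver): reads and writes go to the flash cache, writes simply mark blocks dirty, and a background flush lazily brings the spinning disk up to date.

    from collections import OrderedDict

    class FlashWriteBackCache:
        """Toy model of a non-volatile flash cache in front of a hard disk.
        Because the flash is non-volatile, dirty (unflushed) writes survive
        a power loss, which a DRAM cache can't promise."""

        def __init__(self, disk, capacity_blocks):
            self.disk = disk                   # dict-like: block number -> data
            self.capacity = capacity_blocks
            self.cache = OrderedDict()         # block -> (data, dirty), in LRU order

        def read(self, block):
            if block in self.cache:
                self.cache.move_to_end(block)  # refresh LRU position
                return self.cache[block][0]
            data = self.disk.get(block)        # miss: touch the spinning disk
            self._install(block, data, dirty=False)
            return data

        def write(self, block, data):
            self._install(block, data, dirty=True)  # no disk I/O on the write path

        def flush(self):
            """Background task: keep the spinning disk up to date for backups."""
            for block, (data, dirty) in self.cache.items():
                if dirty:
                    self.disk[block] = data
                    self.cache[block] = (data, False)

        def _install(self, block, data, dirty):
            if block not in self.cache and len(self.cache) >= self.capacity:
                old, (old_data, was_dirty) = self.cache.popitem(last=False)
                if was_dirty:
                    self.disk[old] = old_data  # evicting dirty data forces a writeback
            self.cache[block] = (data, dirty)
            self.cache.move_to_end(block)

Booting from such a cache is then just a string of read() hits that never reach the hard disk.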