This month HP started delivery of a very large supercomputer consisting of over 2000 of our next generation SL6500 servers. The customer just happens to be a very large oil company, so when I was discussing the system with my teenage son, he of course had to point out how ironic it was that the system would not only generate carbon emissions from the energy used to power the supercomputer, but that the advanced seismic processing it enabled would ultimately result in more oil drilling and thus even more carbon emissions. Ah, teenagers.
Now, my son drives a Prius, so I asked him if he wanted to stop driving his car so he wouldn’t burn any oil. But dad, he retorted, my Prius burns less gas than nearly any other car. Aha, you are getting the point, I said. Our next generation SL6500 uses less energy than many competing systems. For instance, the Purdue system using the same next generation SL6500 ranks 38th on the Green500 list, and earlier SL6500 systems at Tokyo Tech and Georgia Tech, each over a year old, still rank 10th and 12th on the latest list. The Purdue system is among the highest-ranking general-purpose x86 supercomputers that don’t use some sort of accelerator technology like the Nvidia GPUs in the Tokyo Tech and Georgia Tech systems. While we would love to sell GPUs with every server, and GPU usage in supercomputers is growing quickly, not all applications can yet take advantage of GPUs, so some customers, like Purdue and the oil company mentioned above, look for other areas to gain their energy efficiency.
The original design of the new oil company system used sixteen 8 GB memory DIMMs per server. As you might expect with a system called “next generation SL6500”, the system uses new high-speed 1600 MHz DIMMs. At the customer’s request, HP worked closely with our suppliers to source larger 16 GB DIMMs instead of the more traditional 8 GB DIMMs, while keeping the cost premium for the larger DIMMs to a minimum. A memory DIMM uses about 5 watts of power. That may not seem like a lot, but it adds up quickly when you are saving 8 DIMMs * 2000 servers * 5 watts: about 700,000 kWh a year, or $210,000 over the 3-year life of the system assuming an energy cost of 10 cents/kWh.
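If you want to check that back-of-the-envelope math yourself, here is a quick sketch in Python. All of the figures are the ones quoted above (8 DIMMs saved per server, 2000 servers, about 5 watts per DIMM, a 3-year life, and 10 cents per kWh); the small rounding in the prose accounts for the difference from the exact totals.

```python
# Back-of-the-envelope check of the DIMM power savings described above.
# Figures are taken directly from the text; real-world savings would also
# depend on cooling overhead and actual utilization, which are not modeled.
DIMMS_SAVED_PER_SERVER = 8
SERVERS = 2000
WATTS_PER_DIMM = 5
HOURS_PER_YEAR = 24 * 365     # 8760
COST_PER_KWH = 0.10           # dollars
LIFETIME_YEARS = 3

watts_saved = DIMMS_SAVED_PER_SERVER * SERVERS * WATTS_PER_DIMM   # 80,000 W
kwh_per_year = watts_saved * HOURS_PER_YEAR / 1000                # ~700,800 kWh
cost_savings = kwh_per_year * LIFETIME_YEARS * COST_PER_KWH       # ~$210,000

print(f"{kwh_per_year:,.0f} kWh/year, ${cost_savings:,.0f} over {LIFETIME_YEARS} years")
```

Running it confirms roughly 700,000 kWh per year and about $210,000 over the life of the system.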
Everyone in my son’s high school could drive a Prius to school and pay for the gas with that much power savings!
Postscript. Purdue will be adding Nvidia M2090 GPUs to some of the next generation SL6500 compute nodes in their cluster this month. It will be interesting to see how that changes their standings in the next Green500 list.