Notable New Top500 Supercomputers

The semi-annual Top500 list of the world's fastest supercomputers was published over the weekend and, unsurprisingly, many of the new entries on the list were powered by Intel Xeon Phi co-processors or Nvidia GPUs. These days there continues to be a lot of debate about the value of the Top500 list, with some of the largest supercomputer sites no longer submitting their systems to it. Nevertheless, the Top500 list can still be a useful tool, especially for comparing relative facts, figures, and trends in high performance computing. Let's take a look at the list, including some of the HP systems that give HP its leading 37% share of vendor systems.

The most talked-about system will no doubt be the new top-ranked system, Tianhe-2 in China, debuting at almost twice the Top500 performance of the previous record holder, ORNL's Titan system, which now moves to number two. Component-wise, other than the handful of BlueGene systems on the list, Tianhe-2 and Titan couldn't be farther apart. There is a good comparison of Tianhe-2 and Titan in this HPCWire article. Tianhe-2 uses Xeon processors, Xeon Phi co-processors, and a homegrown interconnect said to be similar to QDR InfiniBand. Titan uses AMD processors, Nvidia GPUs, and Cray's previous-generation custom interconnect technology, sold off last year to Intel, which will likely roll out its own HPC fabric in the future. This pattern of leapfrogging and acceleration (Xeon Phi or Nvidia GPU) plays out as you march down the list.

USC debuts a new HP ProLiant SL250s supercomputer with Nvidia K20 GPUs and an FDR InfiniBand interconnect at number 53 on the list. Achieving 76.99% efficiency, the system uses only 237 kW of power, or 2.24 TF/kW. This is one of the most efficient Nvidia-powered supercomputers in the Top500. By comparison, USC's previous supercomputer, which includes a mix of HP and other un-accelerated nodes, is now ranked 243 on the Top500 list at 149.9 TF. USC doesn't list the power used by that cluster, but I can say from firsthand experience that it uses a large percentage of the available power in their data center when running Linpack, and certainly more than their new HP supercomputer.
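For readers who want to reproduce these figures for other systems, the two metrics used throughout this post are straightforward: Linpack efficiency is Rmax divided by Rpeak, and power efficiency is Rmax divided by the measured power draw. A minimal sketch (the USC Rmax below is derived from the 237 kW and 2.24 TF/kW figures above, not taken from the list itself):

```python
def tf_per_kw(rmax_tf, power_kw):
    # Linpack TFLOPS delivered per kilowatt of power
    return rmax_tf / power_kw

def linpack_efficiency(rmax_tf, rpeak_tf):
    # fraction of theoretical peak achieved on the Linpack benchmark
    return rmax_tf / rpeak_tf

# The USC numbers above (237 kW at 2.24 TF/kW) imply an Rmax of about
# 237 * 2.24 = 531 TF
implied_rmax = 237 * 2.24
print(round(implied_rmax, 1))  # 530.9
```

The same two ratios can be applied to any Top500 entry, since Rmax, Rpeak, and power are the figures published with each listing.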

Another HP system interesting for its efficiency is the 28th-ranked HP ProLiant SL250s Conte supercomputer at Purdue. Achieving 978.6 TF with fewer than 580 HP ProLiant SL250s systems, each outfitted with two Xeon Phi 5110P co-processors, this system delivers 70% efficiency, which at 510 kW equates to 1.84 TF/kW. No doubt cross-state rival Indiana University, which recently claimed the fastest academically funded supercomputer with a competitor's system that placed number 46 on the list, must be asking some hard questions. Congratulations to Purdue on their new ranking as the fastest academically funded supercomputer.

Conte joins Purdue's Carter supercomputer, now ranked 176, in leading the Top500 in use of Mellanox InfiniBand technology. Carter was one of the first large Top500 supercomputers to use Mellanox FDR ConnectX-3 technology, and Conte now becomes one of the first large Top500 supercomputers to use Mellanox FDR Connect-IB technology, which no doubt contributed to the leading efficiency of this system.

Customers considering the purchase of either Nvidia GPU- or Xeon Phi co-processor-powered systems should study up on the USC and Purdue systems, especially their power efficiency, and ask the vendors they are considering how they can provide similar results.


About Marc Hamilton

Marc Hamilton – Vice President, Solutions Architecture and Engineering, NVIDIA. At NVIDIA, the Visual Computing Company, Marc leads the worldwide Solutions Architecture and Engineering team, responsible for working with NVIDIA's customers and partners to deliver the world's best end-to-end solutions for professional visualization and design, high performance computing, and big data analytics. Prior to NVIDIA, Marc worked in the Hyperscale Business Unit within HP's Enterprise Group, where he led the HPC team for the Americas region. Marc spent 16 years at Sun Microsystems in HPC and other sales and marketing executive management roles. Marc also worked at TRW developing HPC applications for the US aerospace and defense industry. He has published a number of technical articles and is the author of the book "Software Development: Building Reliable Systems". Marc holds a BS degree in Math and Computer Science from UCLA, an MS degree in Electrical Engineering from USC, and is a graduate of the UCLA Executive Management program.