HP ProLiant Performance Boost with IvyBridge

A quick trip to shopping.hp.com confirms that you can now purchase your favorite HP ProLiant server with the new Intel IvyBridge CPUs announced yesterday. The short summary on performance is: yes, IvyBridge gives you a good performance boost as well as better price/performance and performance/watt. All the new IvyBridge server processors can be identified by the “v2” tacked onto the product name, e.g. “Intel E5-2670 v2” is IvyBridge, while “E5-2670” is the older SandyBridge chip.

You will need a whole new decoder ring, however, to compare IvyBridge and SandyBridge processors. Don’t assume that a “v2” chip has the same core count, max TDP (power), or price as the same-numbered SandyBridge chip. Intel has a complete listing of the new IvyBridge parts; if someone has put together a chart mapping SandyBridge processors to the closest “v2” IVB part, I’d love to see it. While the subtle differences in product features make an exact SandyBridge-to-IvyBridge comparison difficult, across a broad range of real-world application benchmarks run by HP, we have measured twenty percent or better performance on IvyBridge processors compared to similar-cost SandyBridge processors. Of course, your mileage may vary. The best performance boost I have seen was 38% on a 16-server Linpack run, and the worst was 8% on a CFD application (I won’t mention the particular code, as it would be unfair to single out the CFD vendor over a single benchmark).

Here are a couple of quick things to consider when upgrading to IvyBridge. The new IvyBridge CPUs have up to 12 cores, compared to a maximum of 8 for SandyBridge. However, not all users will automatically want to move from 8 to 12 cores when upgrading, and I expect the 10-core IvyBridge parts will be quite popular for HPC workloads. One reason is that both IvyBridge and SandyBridge, in their E5-26xx flavors, are limited to 4 memory channels per processor. So while the fastest IvyBridge memory is now 1866 MHz, up from 1600 MHz with SandyBridge, that represents only a 16.6% memory bandwidth increase compared to a 25-50% core count increase. Memory-hungry applications beware.
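The bandwidth-per-core squeeze above is easy to put in numbers. Here is a back-of-the-envelope sketch (my own arithmetic, not an HP benchmark) using the standard DDR3 figures of 8 bytes per transfer and 4 channels per socket:

```python
# Back-of-the-envelope peak memory bandwidth per core (DDR3: 8 bytes
# per transfer, 4 channels per socket). MT/s and core counts are the
# figures discussed in the post.
def peak_gbs_per_core(mem_mts, channels, cores):
    """Peak theoretical memory bandwidth per core, in GB/s."""
    socket_gbs = mem_mts * 8 * channels / 1000  # 8 bytes per transfer
    return socket_gbs / cores

snb = peak_gbs_per_core(1600, 4, 8)     # SandyBridge: 8 cores @ 1600 MT/s
ivb12 = peak_gbs_per_core(1866, 4, 12)  # IvyBridge: 12 cores @ 1866 MT/s
ivb10 = peak_gbs_per_core(1866, 4, 10)  # IvyBridge: 10 cores @ 1866 MT/s

print(f"SNB  8-core: {snb:.1f} GB/s per core")    # 6.4
print(f"IVB 12-core: {ivb12:.1f} GB/s per core")  # 5.0
print(f"IVB 10-core: {ivb10:.1f} GB/s per core")  # 6.0
```

Note that the 12-core IvyBridge part actually has *less* peak bandwidth per core than the 8-core SandyBridge, while the 10-core part roughly holds the line, which is why I expect it to be popular for memory-bound HPC codes.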

Speaking of memory, you also need to be careful when configuring multiple DIMMs per channel (DPC). Intel’s reference architecture only requires vendors to support the fastest 1866 MHz memory at 1 DPC; HP’s Smart Memory supports 2 DPC at 1866 MHz on most ProLiant models. Apart from speed, many customers are used to specifying memory capacity in GB per core. As a rule of thumb, for highest performance you always want to populate all memory channels with the same number of DIMMs. But especially with the 10-core IvyBridge chips and 4 memory channels, you will probably have to adjust your GB-per-core numbers to keep every memory channel equally populated.
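To see how the balanced-channel rule constrains GB per core, here is a small sketch that enumerates the balanced configurations for one socket. The DIMM sizes and the 2 DPC ceiling are illustrative assumptions, not a ProLiant configuration guide:

```python
# Hypothetical helper: enumerate balanced memory configs for one socket,
# i.e. every channel populated with the same number of DIMMs (same DPC).
# DIMM sizes and the 2-DPC limit are illustrative assumptions.
def balanced_configs(channels=4, max_dpc=2, dimm_sizes_gb=(8, 16, 32), cores=10):
    """Return (dpc, dimm_size, total_gb, gb_per_core) tuples."""
    configs = []
    for dpc in range(1, max_dpc + 1):
        for size in dimm_sizes_gb:
            total_gb = channels * dpc * size
            configs.append((dpc, size, total_gb, total_gb / cores))
    return configs

for dpc, size, total, per_core in balanced_configs():
    print(f"{dpc} DPC x {size} GB DIMMs -> {total} GB/socket, "
          f"{per_core:.1f} GB/core")
```

With 10 cores and 4 channels, the achievable GB-per-core values come in coarse steps (3.2, 6.4, 12.8, …), so a target like “4 GB per core” forces you to round up or down rather than hit it exactly.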

InfiniBand vendor Mellanox should be happy with the IvyBridge launch, as the new processors make a good argument for users still on QDR InfiniBand to upgrade to the FDR parts that have now been shipping for nearly two years (many of you will remember the Purdue Carter cluster was one of the first large FDR systems on the Top500 list in November of 2011).

While other benchmarks have compared Xeon vs Xeon Phi and Nvidia K20x performance, the faster memory available with IvyBridge will benefit Nvidia GPU users as well. If you have been waiting to purchase your next Nvidia GPU system, the IvyBridge launch gives you a great reason to move ahead.

And in the spirit of equal time for all our processor partners, AMD fans will be happy to note that at least some press reports have HP executives talking about upcoming AMD parts for HP’s new energy efficient Moonshot platform.

About Marc Hamilton

Marc Hamilton – Vice President, Solutions Architecture and Engineering, NVIDIA. At NVIDIA, the Visual Computing Company, Marc leads the worldwide Solutions Architecture and Engineering team, responsible for working with NVIDIA’s customers and partners to deliver the world’s best end-to-end solutions for professional visualization and design, high performance computing, and big data analytics. Prior to NVIDIA, Marc worked in the Hyperscale Business Unit within HP’s Enterprise Group, where he led the HPC team for the Americas region. Marc spent 16 years at Sun Microsystems in HPC and other sales and marketing executive management roles. Marc also worked at TRW developing HPC applications for the US aerospace and defense industry. He has published a number of technical articles and is the author of the book “Software Development: Building Reliable Systems”. Marc holds a BS degree in Math and Computer Science from UCLA, an MS degree in Electrical Engineering from USC, and is a graduate of the UCLA Executive Management program.