NREL Selects HP For New HPC System

Some of us are in the HPC business because it matters. Matters in big things like how we understand our global climate or how we harness renewable energy. In the US, many of the smartest folks who work on renewable energy research are at the DOE’s National Renewable Energy Lab (NREL). Key to NREL’s research are powerful supercomputers like the Red Mesa system. Although, as my teenage son commented several years ago, it is ironic that supercomputers used to study global climate change and renewable energy themselves use so much energy. And that is certainly top of mind for folks like NREL’s Computational Science Center Director Steve Hammond. So while Red Mesa was very state of the art in energy efficiency when it was deployed in 2009, Steve and his team have been busy working on an even more efficient supercomputer, co-designed with a new data center that will showcase a host of world-class energy-efficient technologies.

Here is NREL’s press release on their new system as well as a press release from Intel.

The checkerboard floor pattern in my last post is actually a bit deceiving. If you look at the closeup below,

you notice that every other floor tile is actually stacked up on its neighbor, exposing the computer room sub-floor below. Many traditional computer rooms use the sub-floor as a plenum for cold air, but that is not the case at NREL’s new state-of-the-art Energy Systems Integration Facility (ESIF). The data center sits on the third floor of the new ESIF building, and only a thick wire mesh separates the floor tiles from a cavernous second story housing the ESIF power and cooling infrastructure.

Two years ago, when HP installed its first PetaFLOPS supercomputer, the TSUBAME2.0 system, Tokyo Tech referred to it as a tiny PetaFLOPS system, requiring just over 50 racks of computers. With a combination of improved Intel Xeon processors, new Intel Xeon Phi co-processors, and new HP water-cooled rack technology, NREL’s supercomputer will cross the PetaFLOPS boundary next summer with only a fraction of the racks, leaving plenty of space in the ESIF data center for future expansion.

By relying on water cooling within the rack rather than chilled air for most of its heat removal, the NREL facility is expected to reach a Power Usage Effectiveness (PUE) of 1.06, meaning about 94% of the energy entering the facility is delivered directly to the supercomputer, versus as little as 50% in the average data center. That requires massive water pipes, pictured below during a recent construction site tour (notice the hardhats). Above the piping, the underside of the wire mesh and floor tiles is visible.

The physics motivating the move from air cooling to water cooling is fairly simple: liquid moves heat far more efficiently than air. Consider a typical 7 kW rack of servers, which generates about 24,000 BTU of heat per hour. Removing those 24,000 BTU can be done either with a 0.5 bhp fan blowing 1,000 CFM of air through a 20″ duct or with a 0.05 bhp pump feeding 4 gpm of water through 1″ tubing.
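The fan-vs-pump numbers above can be reproduced with the standard HVAC rules of thumb for sensible heat carried by air and water. A minimal Python sketch follows; note that the temperature rises (20°F across the rack for air, 12°F for water) are my assumptions chosen to match the article's flow figures, not values stated in the article.

```python
# Rule-of-thumb heat-transport formulas (imperial units):
#   air:   BTU/hr ~= 1.08 * CFM * delta_T_F
#   water: BTU/hr ~= 500  * GPM * delta_T_F
# 1 watt = 3.412 BTU/hr.

RACK_KW = 7.0
BTU_PER_HR = RACK_KW * 1000 * 3.412   # ~23,900 BTU/hr, i.e. "about 24,000"

def air_cfm(btu_hr, delta_t_f=20.0):
    """Airflow (CFM) needed to remove btu_hr at an assumed temperature rise."""
    return btu_hr / (1.08 * delta_t_f)

def water_gpm(btu_hr, delta_t_f=12.0):
    """Water flow (GPM) needed to remove btu_hr at an assumed temperature rise."""
    return btu_hr / (500.0 * delta_t_f)

print(round(air_cfm(BTU_PER_HR)))       # ~1100 CFM of air through the rack
print(round(water_gpm(BTU_PER_HR), 1))  # ~4.0 GPM of water through 1" tubing
```

Roughly 1,100 CFM of air or 4 gpm of water carries the same 24,000 BTU/hr, which is why the pump in the example needs only a tenth of the fan's horsepower.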

NREL’s ESIF data center delivers a lot more than just an industry-leading PUE. Defined as (Total Facility Power / IT Equipment Power), PUE is only part of the energy savings equation. HP, working with NREL, wants to move toward Net Zero Energy (NZE) data center design. To that end, HP designed our new water-cooled racks, in part inspired by the ESIF data center design, to capture “waste heat”; instead of spending additional energy to reject that heat as many data centers do, the ESIF facility will use it as the primary heat source for the ESIF offices and lab space. As the PetaFLOPS system expands further, the waste heat can also be exported to adjacent buildings across NREL’s campus.
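The PUE definition above makes the earlier efficiency figures easy to check: since PUE = total facility power / IT power, the fraction of facility energy reaching the IT equipment is just 1/PUE. A quick sketch (the PUE of 2.0 for the "average" data center is my assumption, back-calculated from the article's 50% figure):

```python
def it_fraction(pue):
    """Fraction of total facility energy delivered to IT equipment.

    PUE = total facility power / IT equipment power,
    so the IT share of the total is simply 1 / PUE.
    """
    return 1.0 / pue

print(f"{it_fraction(1.06):.0%}")  # ESIF's target PUE of 1.06 -> ~94% to the machine
print(f"{it_fraction(2.00):.0%}")  # an assumed typical PUE of 2.0 -> 50%
```

The remaining ~6% at ESIF goes to pumps, power conversion, and other overhead; and since the rack-level water cooling then recycles the IT heat into building heating, the effective efficiency is better than PUE alone suggests.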

NREL’s new PetaFLOPS system will include existing HP ProLiant SL servers as well as new products and technology planned for announcement in the coming year. I expect I will get a lot of comments asking for additional detail, so I’ll apologize in advance that unless you work for NREL or a select few other pre-launch customers, you will have to wait to hear the details. If you are really curious, you can come to our HP-CAST user group meeting just prior to SC12 or sign up for an HP NDA session at SC12 where we will privately share additional details on our future HPC products.


About Marc Hamilton

Marc Hamilton – Vice President, Solutions Architecture and Engineering, NVIDIA. At NVIDIA, the Visual Computing Company, Marc leads the worldwide Solutions Architecture and Engineering team, responsible for working with NVIDIA’s customers and partners to deliver the world’s best end-to-end solutions for professional visualization and design, high performance computing, and big data analytics. Prior to NVIDIA, Marc worked in the Hyperscale Business Unit within HP’s Enterprise Group, where he led the HPC team for the Americas region. Marc spent 16 years at Sun Microsystems in HPC and other sales and marketing executive management roles. Marc also worked at TRW developing HPC applications for the US aerospace and defense industry. He has published a number of technical articles and is the author of the book “Software Development: Building Reliable Systems”. Marc holds a BS degree in Math and Computer Science from UCLA, an MS degree in Electrical Engineering from USC, and is a graduate of the UCLA Executive Management program.
