This week, several institutions submitted proposals to the US National Science Foundation (NSF) for the $30M NSF program based on HP supercomputer designs. According to the NSF web site, the NSF requests proposals from organizations willing to serve as HPC Resource Providers within Extreme Digital (XD), the successor to TeraGrid, who propose to acquire and deploy new, innovative petascale HPC systems and services.
Participating in these large, forward-looking procurements is important to HP for many reasons. For starters, putting down on paper how we will build a supercomputer that goes into production in 2013 forces HP to look ahead and bring together all aspects of our Converged Infrastructure for HPC approach, examining not only the servers, storage, and networking, but also the software, power and cooling, and services required. Even a few short years ago, building such large supercomputers required specialized proprietary systems that didn’t scale down to entry-level or even mid-size HPC systems. Today, however, with HP’s approach to building supercomputers out of industry standard servers, networking, and storage, we can leverage the same technology that we used to build TSUBAME 2.0, the world’s 4th largest supercomputer, down to a single HP ProLiant SL6500 system delivering over 4 TFlops of compute power in only 4 rack units of space.
Perhaps even more important than forcing HP to crystallize our own designs for 2013, participating in these large procurements gives HP valuable insight into what some of the best HPC practitioners in the world think is needed to build a competitive system. As the NSF is just starting the evaluation stage of this solicitation, I’m not ready to share which HPC centers partnered with HP, or to say too much about our designs, but I will elaborate on a few trends that came up again and again as I completed a final review of all the proposals.
Each team definitely took to heart the “data intensive” nature of the solicitation called out in the summary description, and not just by bidding massive amounts of storage in one of today’s popular HPC parallel file systems. Several teams took cues from the web 2.0 space, ranging from configuring disk-heavy compute nodes perfect for running things like Hadoop to bringing in leading web 2.0 companies to partner with them on the solicitation.
As one might expect, each team partnering with HP based their proposed solution on industry standard processors and/or industry standard co-processors. With our partners at least, there was no across-the-board bet on a single processor/co-processor vendor, with the three vendors that you might expect well represented across the set of proposals.
Each proposal also spent time highlighting its data center efficiency. The teams took a number of different data center design approaches, but common across all of them was HP Thermal Logic technology, such as Dynamic Power Capping, as well as other HP data center innovations.
I’m sure everyone in the HPC industry will watch closely as the acquisition plays out over the coming months, to see which HPC centers, HPC vendors, and HPC technologies are selected by the NSF. No matter who is ultimately selected to deliver this system, the exercise of bidding, in itself, was valuable to HP. It deepened our relationship with a number of the leading HPC centers in the US, and it advanced our own thinking on what we need to deliver in 2013, not just on $30M supercomputers but on entry-level $3000 HPC systems. I want to thank everyone on the HP team (there were countless HP employees who supported the proposal efforts), as well as all of our partners, for the hard work put in over the last weeks and months to bring together multiple high-quality proposals.