HP and the NSF Extreme Digital (XD) Program

This week, several institutions submitted proposals based on HP supercomputer designs to the US National Science Foundation (NSF) for the $30M NSF XD program. According to the NSF web site, the NSF requests proposals from organizations willing to serve as HPC Resource Providers within Extreme Digital (XD), the successor to TeraGrid, and who propose to acquire and deploy new, innovative petascale HPC systems and services that will:

  • Expand the range of data-intensive, computationally challenging science and engineering applications that can be tackled with XD HPC services;
  • Introduce a major new innovative capability component to science and engineering research communities;
  • Provide an effective migration path to researchers scaling data and code beyond the campus level;
  • Incorporate reliable, robust system software and services essential to optimal sustained performance;
  • Efficiently provide a high degree of stability and usability by January 2013; and
  • Complement and leverage existing XD capabilities and services.

Participating in these large, forward-looking procurements is important to HP for many reasons. For starters, putting down on paper how we will build a supercomputer that goes into production in 2013 forces HP to look ahead and bring together all aspects of our Converged Infrastructure for HPC approach, examining not only the servers, storage, and networking, but also the software, power and cooling, and services required. Even a few short years ago, building such large supercomputers required specialized proprietary systems that didn't scale down to entry-level or even mid-size HPC systems. Today, however, with HP's approach to building supercomputers out of industry standard servers, networking, and storage, we can leverage the same technology that we used to build TSUBAME 2.0, the world's 4th largest supercomputer, all the way down to a single HP ProLiant SL6500 system delivering over 4 TFlops of compute power in only 4 rack units of space.
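To put that density figure in perspective, here is a rough back-of-the-envelope calculation. The 42U rack height, the assumption that a rack is filled entirely with SL6500 chassis (no space reserved for switches or PDUs), and the 4 TFlops per 4U figure taken from the paragraph above are illustrative assumptions, not a configuration from any of the proposals.

```python
# Back-of-the-envelope rack density estimate (illustrative assumptions only).
CHASSIS_TFLOPS = 4.0   # >4 TFlops per SL6500 chassis, per the post
CHASSIS_HEIGHT_U = 4   # 4 rack units per chassis
RACK_HEIGHT_U = 42     # assumed standard 42U rack, fully populated with chassis

chassis_per_rack = RACK_HEIGHT_U // CHASSIS_HEIGHT_U
rack_tflops = chassis_per_rack * CHASSIS_TFLOPS
racks_per_petaflop = 1000.0 / rack_tflops

print(f"~{rack_tflops:.0f} TFlops per rack ({chassis_per_rack} chassis), "
      f"~{racks_per_petaflop:.0f} racks per petaflop")
```

Under those assumptions, a fully populated rack lands around 40 TFlops, or roughly 25 racks per petaflop of peak compute.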

Perhaps even more important than forcing HP to crystallize our own designs for 2013, participating in these large procurements gives HP valuable insight into what some of the best HPC practitioners in the world think is needed to build a competitive system. As the NSF is just starting the evaluation stage of this solicitation, I'm not ready to share which HPC centers partnered with HP, or to say too much about our designs, but I will elaborate on a few trends that came up again and again as I completed a final review of all the proposals.

Each team definitely took to heart the "data intensive" nature of the solicitation called out in the summary description, and not just by bidding massive amounts of storage in one of today's popular HPC parallel file systems. Several teams took cues from the web 2.0 space, ranging from configuring disk-heavy compute nodes perfect for running things like Hadoop to bringing in leading web 2.0 companies to partner with them on the solicitation.
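For readers less familiar with the Hadoop side of that trend, the sketch below shows the kind of workload a disk-heavy compute node would host: a minimal word count written for Hadoop Streaming, which lets MapReduce jobs be expressed as plain scripts reading stdin and writing stdout. The script and the cluster paths in the comments are generic illustrations, not taken from any of the proposals.

```python
#!/usr/bin/env python
"""Minimal word-count job for Hadoop Streaming (illustrative sketch).

Test locally:
    cat input.txt | python wordcount.py map | sort | python wordcount.py reduce

Run on a cluster (jar location and HDFS paths are assumptions, adjust to your install):
    hadoop jar hadoop-streaming.jar \
        -input /data/text -output /data/counts \
        -mapper "python wordcount.py map" \
        -reducer "python wordcount.py reduce" \
        -file wordcount.py
"""
import sys

def run_map():
    # Emit one "<word>\t1" record per token read from stdin.
    for line in sys.stdin:
        for word in line.split():
            print("%s\t%d" % (word.lower(), 1))

def run_reduce():
    # Input arrives sorted by key, so all counts for a word are contiguous.
    current, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t", 1)
        if word != current:
            if current is not None:
                print("%s\t%d" % (current, count))
            current, count = word, 0
        count += int(value)
    if current is not None:
        print("%s\t%d" % (current, count))

if __name__ == "__main__":
    run_map() if sys.argv[1:] == ["map"] else run_reduce()
```

The appeal of disk-heavy nodes for this style of job is data locality: the map tasks read blocks from the drives physically attached to the node running them, rather than pulling everything across the interconnect from a central parallel file system.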

As one might expect, each team partnering with HP based their proposed solution on industry standard processors and/or industry standard co-processors. With our partners at least, there was no across-the-board bet on a single processor/co-processor vendor, with the three vendors that you might expect well represented across the set of proposals.

Each proposal also spent time highlighting its data center efficiency. The teams took a number of different approaches to data center design, but common across all of them was HP Thermal Logic technology such as Dynamic Power Capping, along with other HP data center innovations.
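As a reminder of how data center efficiency is commonly quantified, the snippet below computes power usage effectiveness (PUE), the ratio of total facility power to IT equipment power. The wattage figures are made-up illustrative numbers, not measurements from any of the proposed designs.

```python
# Power usage effectiveness (PUE) = total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt reaches the IT gear; real facilities run higher,
# and cooling and power-distribution overhead is where most of the gap comes from.

def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

# Illustrative numbers only: 1.3 MW drawn by the facility, 1.0 MW of IT load.
print("PUE = %.2f" % pue(1300.0, 1000.0))   # -> PUE = 1.30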

I'm sure everyone in the HPC industry will watch closely as the XD acquisition plays out over the coming months to see which HPC centers, HPC vendors, and HPC technologies are selected by the NSF. No matter who is ultimately selected to deliver this system, the exercise of bidding was, in itself, valuable to HP. It deepened our relationships with a number of the leading HPC centers in the US and advanced our own thinking on what we need to deliver in 2013, not just on $30M supercomputers but on entry-level $3000 HPC systems. I want to thank everyone on the HP team, the countless HP employees who supported the proposal efforts, and all of our partners for the hard work put in over the last weeks and months to bring together multiple high-quality proposals.

