The Open Compute Project & HPC

I’ve spent a few free minutes browsing the first specifications released today by the Open Compute Project. One certainly has to applaud Facebook and the other members of the Open Compute Project for sharing their work in building energy efficient data centers. While the concept of “open source” hardware may seem strange at first, it certainly isn’t new. Sun’s OpenSPARC project, started in 2005, open sourced the UltraSPARC T1 and later the UltraSPARC T2 designs. SPARC’s new owner doesn’t seem too interested in continuing the trend, at least judging by the OpenSPARC web site. While most of the press articles I’ve read on today’s Open Compute announcement focus on the project’s server designs, what really caught my attention was its data center designs.

Some of the Open Compute server design concepts, such as highly efficient power supplies, translate directly into, and in fact are already used by, purpose-built HPC servers like the HP ProLiant SL390 G7. While I’m sure the Open Compute servers are fine for the web workloads required by the likes of Facebook, they don’t include more advanced features like integrated GPU options or QDR InfiniBand, which the SL390 G7 offers and which more and more HPC customers require. Properly programmed, GPUs can provide energy efficiency gains not of 20 or 40% but of 200 or 400% or more.

On the other hand, nearly every HPC data center could benefit from the design concepts in Open Compute’s data center specifications for electrical, mechanical, and battery backup systems. Unfortunately, building your own Open Compute server from the specs is probably a lot easier than replicating the data center electrical, mechanical, and battery systems needed to reach the 1.07 PUE achieved by Facebook. You can, however, get many of the same data center efficiencies from an HP POD (Performance Optimized Datacenter).

The entire IT community needs to continue to press forward with its green initiatives on multiple fronts. The PUE metric is just the start. HP is already working with customers to maximize other aspects of eco-friendly computing, including:

ERE: Energy Reuse Effectiveness = (Total Energy – Reuse Energy) / IT Energy

CUE: Carbon Usage Effectiveness = Total CO2 Emissions / IT Energy

WUE: Water Usage Effectiveness = Annual Site Water Usage / IT Energy

all of which we hope will one day be as commonplace in data center design as PUE is today. Note that for PUE, ERE, CUE, and WUE, lower is better, i.e. the maximum benefit comes from minimizing the metric.
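
To make the arithmetic behind these metrics concrete, here is a minimal Python sketch. Every input value below is a hypothetical placeholder chosen for illustration, not a measurement from Facebook’s facility or any other real data center.

```python
# Hypothetical annual figures for a small facility (illustration only).
total_energy_kwh = 10_000_000   # total facility energy drawn from the grid
it_energy_kwh    =  8_500_000   # energy delivered to the IT equipment
reuse_energy_kwh =    500_000   # waste heat exported for reuse (e.g. office heating)
total_co2_tons   =      4_500   # CO2 attributed to the facility's energy mix
water_usage_l    = 20_000_000   # annual site water usage in liters

pue = total_energy_kwh / it_energy_kwh                       # lower is better, 1.0 is ideal
ere = (total_energy_kwh - reuse_energy_kwh) / it_energy_kwh  # credits energy reused elsewhere
cue = total_co2_tons / it_energy_kwh                         # tons of CO2 per kWh of IT energy
wue = water_usage_l / it_energy_kwh                          # liters of water per kWh of IT energy

print(f"PUE={pue:.2f}  ERE={ere:.2f}  CUE={cue:.6f} t/kWh  WUE={wue:.2f} L/kWh")
```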

Consider just the CUE. A 40MW data center powered by coal emits 350,400 tons of CO2 into the atmosphere annually, the equivalent carbon footprint of 87,600 people. The same 40MW data center, powered by hydroelectric or wind power, would emit just 1,752 tons of CO2 annually (considering the full lifecycle of the electricity sources), the equivalent carbon footprint of 438 people.
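
For what it’s worth, those figures are reproducible if you assume roughly one ton of CO2 per MWh for coal generation, about 5 kg of lifecycle CO2 per MWh for hydro or wind, and an annual carbon footprint of four tons of CO2 per person. Those factors are my own assumptions chosen to match the numbers above, not official values; a quick sketch:

```python
# Reproducing the coal vs. hydro/wind comparison above with assumed emission factors.
facility_mw    = 40
hours_per_year = 24 * 365                    # 8,760 hours
annual_mwh     = facility_mw * hours_per_year  # 350,400 MWh of IT + facility load

coal_t_per_mwh    = 1.0     # assumed: ~1 ton CO2 per MWh for coal generation
green_t_per_mwh   = 0.005   # assumed: ~5 kg CO2 per MWh lifecycle for hydro/wind
person_t_per_year = 4.0     # assumed annual carbon footprint per person

for name, factor in [("coal", coal_t_per_mwh), ("hydro/wind", green_t_per_mwh)]:
    tons = annual_mwh * factor
    print(f"{name}: {tons:,.0f} t CO2/year ~ {tons / person_t_per_year:,.0f} people")
```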

I will definitely be following the Open Compute Project. Even if it remains focused on web workloads, there are benefits, as discussed above, for the HPC world. Who knows, maybe we will even see an Open Compute HPC server spec in the future. Amazon might even want to use that for their HPC Compute Instances!



5 Responses to The Open Compute Project & HPC

  1. Pingback: Open Compute Project is Open Source for Greener Datacenters | insideHPC.com

  2. Michael Dinsmore says:

    “While I’m sure the Open Compute servers are fine for the web workloads required by the likes of Facebook, they don’t include more advanced features like integrated GPU options or QDR InfiniBand.”

    That’s exactly the point. If you’re buying servers by the thousands, it doesn’t make sense to include “advanced” features, or any features or functionality, that you don’t need for the kind of workload you’re going to put on those machines. That’s just wasted silicon, which increases cost and power draw. I’m sure that’s bad news for salesmen who want to push those features, but single-function machines are being driven towards simplicity, not complexity.

    • Michael,
      I think you missed my point: there are customers that need GPUs by the thousands; just take a look at the http://www.top500.org list. All I’m saying is that the first Open Compute servers are a good start, but they are, by design, targeted at Facebook’s workload. There are many workloads, like scientific computing, that use thousands of servers and whose users could benefit from a similar approach.

      • Martin Scholl says:

        OK, but what if some HPC users give EE students some money to extend FB’s designs with, say, InfiniBand or GPUs?
        Given the combined buying power such a trust could have, it would make sense from a budget perspective.

  3. Martin,
    That is a great idea, and not just because of the buying power if many schools used the design. Think how much time students, researchers, and professors waste today trying to add GPUs to servers that may have a PCIe slot to connect a GPU but weren’t designed from the start for GPUs. A lot of time goes into finding the right Linux drivers, getting the BIOS to work with the GPU, and other configuration work, and even then, just because a GPU can be connected to a server doesn’t mean you are getting the best performance; the GPU may introduce bottlenecks in other parts of the system. The HP ProLiant SL390 G7, by contrast, was designed from the ground up for GPU computing, and that is one reason it is quickly becoming one of the most widely used GPU systems on campuses across the world.
