I’ve spent a few free minutes today browsing the first specifications released by the Open Compute Project. One certainly has to applaud Facebook and the other members of the Open Compute Project for sharing their work in building energy-efficient data centers. While the concept of “open source” hardware may seem strange at first, it certainly isn’t new: Sun’s OpenSPARC project, started in 2005, open sourced the UltraSPARC T1 and later the UltraSPARC T2 designs. SPARC’s new owner doesn’t seem too interested in continuing the trend, at least judging from the OpenSPARC web site. While most of the press articles I’ve read on today’s Open Compute announcement focus on the project’s server designs, what really caught my attention was the data center designs.
Some of the Open Compute server design concepts, such as highly efficient power supplies, translate directly into, and in fact are already used by, purpose-built HPC servers like the HP ProLiant SL390 G7. While I’m sure the Open Compute servers are fine for the web workloads required by the likes of Facebook, they don’t include more advanced features like integrated GPU options or QDR InfiniBand, which the SL390 G7 offers and more and more HPC customers require. Properly programmed, GPUs can provide not 20 or 40% energy efficiency gains but 200 or 400% or more.
On the other hand, nearly every HPC data center could benefit from design concepts in Open Compute’s data center specifications for electrical, mechanical, and battery backup systems. Unfortunately, building your own Open Compute server from the specs is probably a lot easier than replicating the data center electrical, mechanical, and battery systems needed to reach the 1.07 PUE that Facebook achieved. You can, however, get many of the same data center efficiencies from an HP POD (Performance Optimized Datacenter).
The entire IT community needs to continue to press forward its Green initiatives on multiple fronts. The PUE metric is just the start. HP is already working with customers to maximize other aspects of eco-friendly computing, including:
ERE: Energy Reuse Effectiveness = (Total Energy – Reuse Energy) / IT Energy
CUE: Carbon Usage Effectiveness = Total CO2 Emissions / IT Energy
WUE: Water Usage Effectiveness = Annual Site Water Usage / IT Energy
all of which we hope will one day be as commonplace in data center design as PUE is today. Note that for PUE, ERE, CUE, and WUE, lower is better; the maximum benefit comes from minimizing the metric.
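To make the definitions concrete, here is a minimal sketch of the four metrics as simple ratios. The function names and the facility numbers in the example are illustrative, not from any real data center:

```python
def pue(total_energy, it_energy):
    """Power Usage Effectiveness: total facility energy / IT energy."""
    return total_energy / it_energy

def ere(total_energy, reuse_energy, it_energy):
    """Energy Reuse Effectiveness: (total - reused) energy / IT energy."""
    return (total_energy - reuse_energy) / it_energy

def cue(total_co2_tons, it_energy_mwh):
    """Carbon Usage Effectiveness: total CO2 emissions / IT energy."""
    return total_co2_tons / it_energy_mwh

def wue(annual_water_liters, it_energy_mwh):
    """Water Usage Effectiveness: annual site water usage / IT energy."""
    return annual_water_liters / it_energy_mwh

# Illustrative facility: 10,000 MWh total, 8,000 MWh to IT, 1,000 MWh reused
print(pue(10_000, 8_000))         # 1.25
print(ere(10_000, 1_000, 8_000))  # 1.125
```

Note how reusing waste heat lets ERE drop below the facility’s PUE, which is exactly why it is worth tracking separately.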
Consider just CUE. A 40 MW data center powered by coal emits 350,400 tons of CO2 into the atmosphere annually, the equivalent carbon footprint of 87,600 people. The same 40 MW data center powered by hydroelectric or wind power would emit just 1,752 tons of CO2 annually (considering the full lifecycle of the electricity sources), the equivalent carbon footprint of 438 people.
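The arithmetic behind those figures can be reproduced in a few lines. The emission factors below are my own assumptions chosen to be consistent with the numbers above (roughly 1 metric ton of CO2 per MWh for coal, roughly 5 kg per MWh lifecycle for hydro/wind, and a 4 ton-per-year per-person footprint):

```python
# Back-of-the-envelope check of the CUE example.
HOURS_PER_YEAR = 24 * 365            # 8,760 hours
annual_mwh = 40 * HOURS_PER_YEAR     # 40 MW facility -> 350,400 MWh/yr

coal_tons  = annual_mwh * 1.0        # assumed ~1 t CO2 per MWh from coal
clean_tons = annual_mwh * 0.005      # assumed ~5 kg CO2 per MWh lifecycle

print(coal_tons)        # 350400.0 tons CO2/yr
print(clean_tons)       # 1752.0 tons CO2/yr
print(coal_tons / 4)    # 87600.0 people-equivalent at 4 t/person/yr
print(clean_tons / 4)   # 438.0 people-equivalent
```

The two-hundred-fold gap between the scenarios is entirely a function of the power source, not the data center design, which is why CUE complements rather than replaces PUE.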
I will definitely be following the Open Compute Project. Even if it remains focused on web workloads, there are benefits, as discussed above, for the HPC world. Who knows, maybe we will even see an Open Compute HPC server spec in the future. Amazon might even want to use that for their HPC Compute Instances!