The Impact of Merchant Silicon in HPC

Andy Bechtolsheim doesn’t blog often, but when he does, his posts are definitely worth reading. In this week’s post, “The March to Merchant Silicon in 10Gbe Cloud Networking,” Andy talks about Intel’s plan to acquire Fulcrum, which today builds 10G networking chips. The benefits of 10GbE and merchant silicon apply just as much to HPC networking as they do to cloud networking, and in fact they echo a trend first popularized by HPC sites over a decade ago, when customers adopted so-called merchant silicon for the CPUs in HPC servers.

With few exceptions, when building an HPC server today the server vendor goes to Intel, AMD, and NVIDIA and picks the right mix of CPUs and GPUs for the performance targets they are after. Anyone familiar with processor design would acknowledge that this standardization has accelerated innovation at a faster pace than when IBM, Sun, DEC, HP, and others all competed with their own proprietary processor designs. But as Andy points out in his blog, the same level of innovation has been slow to come to the networking world; witness Andy’s Cisco Catalyst 6509 vs. Arista comparison. Some of the recent, well-publicized news around Cisco comes as no surprise when you consider that they are being challenged in their traditional enterprise networking space by the likes of HP Networking as well as a host of newer companies like Arista, all of which benefit from merchant silicon.

Of course, InfiniBand remains a popular choice for HPC clusters, isn’t going away, and benefits from much the same merchant silicon approach. To me it isn’t a matter of IB vs. 10GbE for HPC; it is IB and 10GbE, with 1GbE rapidly fading as a cluster interconnect over the near term. Much of the HPC code in existence is designed around FLOPS-to-network-bandwidth and FLOPS-to-I/O-bandwidth ratios. Not only are today’s multi-core CPUs causing the FLOPS side of that equation to outgrow 1GbE, but the now-common inclusion of GPUs and SSD/flash in HPC compute nodes accelerates the need for higher-speed networking even more. HP identified this trend early on and was in fact one of the first to make a combination of 10GbE and IB standard on our ProLiant SL390s server. If you are still buying an HPC server that comes with only 1GbE as standard, you should really be asking yourself why.
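To put some rough numbers behind that ratio argument, here is a back-of-envelope sketch in Python. The node and link figures are illustrative assumptions of my own (a dual-socket CPU node, a node with a GPU added, and nominal usable bandwidth for 1GbE, 10GbE, and QDR IB), not measurements from any particular system.

```python
# Back-of-envelope bytes-per-FLOP comparison for an HPC compute node.
# All node and link numbers below are illustrative assumptions.

node_gflops = {
    "dual-socket CPU node": 150,   # assumed peak double-precision GFLOPS
    "CPU node + one GPU":   650,   # assume a GPU adds ~500 GFLOPS
}

# Approximate usable link bandwidth in gigabytes per second
# (roughly line rate / 8, ignoring protocol overhead).
link_gbytes_per_s = {
    "1GbE":   0.125,
    "10GbE":  1.25,
    "QDR IB": 4.0,
}

for node, gflops in node_gflops.items():
    for link, gbs in link_gbytes_per_s.items():
        # Bytes of network bandwidth available per floating-point operation.
        bytes_per_flop = gbs / gflops
        print(f"{node:22s} over {link:6s}: {bytes_per_flop:.4f} bytes/FLOP")
```

The point of the sketch is simply that adding a GPU multiplies the numerator of the ratio while 1GbE leaves the denominator fixed, so the bytes-per-FLOP available to the application collapses unless the interconnect gets faster too.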


About Marc Hamilton

Marc Hamilton – Vice President, Solutions Architecture and Engineering, NVIDIA. At NVIDIA, the Visual Computing Company, Marc leads the worldwide Solutions Architecture and Engineering team, responsible for working with NVIDIA’s customers and partners to deliver the world’s best end-to-end solutions for professional visualization and design, high performance computing, and big data analytics. Prior to NVIDIA, Marc worked in the Hyperscale Business Unit within HP’s Enterprise Group, where he led the HPC team for the Americas region. Marc spent 16 years at Sun Microsystems in HPC and other sales and marketing executive management roles. Marc also worked at TRW developing HPC applications for the US aerospace and defense industry. He has published a number of technical articles and is the author of the book “Software Development: Building Reliable Systems”. Marc holds a BS degree in Math and Computer Science from UCLA, an MS degree in Electrical Engineering from USC, and is a graduate of the UCLA Executive Management program.