InfiniBand networking, typically found in high-end supercomputers, including nearly half of the TOP500 list of the world's fastest systems, continues to address new markets outside the traditional supercomputing space.
Just today, NYSE Technologies announced availability of a significant performance upgrade to its middleware platform, Data Fabric, and demonstrated a message rate of over a million 200-byte messages per second over QDR (40 Gb/s) InfiniBand. And as Data Center Knowledge reported earlier in the week, Microsoft's Bing Maps site is now running on an InfiniBand network.
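A quick back-of-the-envelope check (ignoring protocol and header overhead, so a rough sketch rather than an exact accounting) shows why this benchmark is really about message rate and latency rather than raw bandwidth: a million 200-byte messages per second is only a small fraction of what a QDR link can carry.

```python
# Rough payload-bandwidth check for the NYSE Technologies benchmark figures.
# Message rate and size come from the announcement; QDR link-rate and
# 8b/10b encoding are standard published InfiniBand figures.

MESSAGES_PER_SEC = 1_000_000   # "over a million" messages per second
MESSAGE_BYTES = 200            # 200-byte messages

payload_gbps = MESSAGES_PER_SEC * MESSAGE_BYTES * 8 / 1e9
print(f"Payload throughput: {payload_gbps:.1f} Gb/s")   # 1.6 Gb/s

# QDR signals at 40 Gb/s, but 8b/10b encoding leaves 32 Gb/s of data rate.
QDR_DATA_GBPS = 40 * 8 / 10
print(f"Link utilization: {payload_gbps / QDR_DATA_GBPS:.0%}")  # 5%
```

In other words, the payload stream uses only about five percent of the link's data rate; the hard part of sustaining a million messages per second is per-message latency and software overhead, which is exactly where InfiniBand shines.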
It is easy to argue that 10Gb and new 40Gb Ethernet technologies have broader market reach than InfiniBand, and in fact the Mellanox CX2 cards used in the NYSE Technologies benchmark support both InfiniBand and 10Gb Ethernet. But InfiniBand still holds a clear performance advantage today when low latency is a key requirement. Meanwhile, Mellanox and InfiniBand vendor QLogic aren't standing still. Mellanox is already selling CX3 kit supporting 40Gb Ethernet and FDR (56 Gb/s) InfiniBand, though to take full advantage of FDR you will need to wait for next-generation PCIe Gen3-compatible CPUs and servers to become available.
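The PCIe Gen3 requirement follows from simple arithmetic. Comparing the effective data rates of the buses involved (using the standard published lane rates and encodings, and ignoring packet overhead on both sides, so treat this as an approximation) shows that a PCIe Gen2 x8 slot simply cannot feed an FDR link:

```python
# Why FDR InfiniBand needs PCIe Gen3: compare effective data rates in Gb/s.

def pcie_gbps(gt_per_s, enc_data, enc_total, lanes=8):
    """Effective one-direction data rate of a PCIe slot."""
    return gt_per_s * enc_data / enc_total * lanes

gen2_x8 = pcie_gbps(5, 8, 10)     # Gen2: 5 GT/s per lane, 8b/10b encoding
gen3_x8 = pcie_gbps(8, 128, 130)  # Gen3: 8 GT/s per lane, 128b/130b encoding

fdr_4x = 14.0625 * 64 / 66 * 4    # FDR 4x: 14.0625 Gb/s/lane, 64b/66b

print(f"PCIe Gen2 x8: {gen2_x8:.1f} Gb/s")  # 32.0 Gb/s
print(f"FDR 4x link:  {fdr_4x:.1f} Gb/s")   # ~54.5 Gb/s
print(f"PCIe Gen3 x8: {gen3_x8:.1f} Gb/s")  # ~63.0 Gb/s
```

A Gen2 x8 slot tops out around 32 Gb/s of data, well short of FDR's roughly 54.5 Gb/s effective rate, while a Gen3 x8 slot clears it comfortably, which is why FDR adapters are waiting on Gen3-capable servers.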
HP was an early adopter of InfiniBand technology and in fact designed InfiniBand onto the motherboards of both our ProLiant SL390s G7 server and our ProLiant BL2x220c G7 server blade. In these designs we used the Mellanox CX2 chipset, so both products support 40Gb QDR InfiniBand or 10Gb Ethernet without expensive add-on cards.
So whether your interest is building one of the fastest supercomputers in the world, serving geolocation data, or simply trading stocks blindingly fast, HP ProLiant SL390s and BL2x220c servers are ready for the task. In fact, the world's fifth-fastest supercomputer, Tokyo Tech's TSUBAME 2.0, connects each compute node to not one but two QDR InfiniBand networks. Now that's high performance!