Today I continue my China HPC Perspectives series by talking about what some of the Chinese IT companies are up to in the HPC space. Even before Lenovo's recent announcement that it will acquire IBM's x86 server business, local Chinese vendors were capturing an increasingly large share of the domestic server market. According to this China Economic Net article, local manufacturers now account for 40% of a server market that IDC reports is growing revenue at 30% a year.
Let's take a look at the Chinese IT giant Huawei. While Huawei isn't a big name in the HPC space, they have been steadily adding HPC capabilities for the new style of commercial HPC applications, such as machine learning, that I discussed in Part 1 of this series. If you still think Huawei only makes telco-class routers, you clearly haven't been reading The Register. Back in 2012, Huawei took a big step toward the HPC server market with the introduction of InfiniBand options for their E9000 blade servers. Then last year Huawei followed up by announcing plans to work with NVIDIA on GPU virtualization. While Huawei is expanding into many markets with HPC requirements, their original core customer base, the large telcos, is also looking to HPC and big data applications running on GPUs. I expect Huawei took notice of this press release on how the European telco Orange is using NVIDIA GPUs to power new big data apps. I wonder if Orange is a Huawei customer?
As in other parts of the world, all of the Chinese server vendors I met with last week were working on ARM server designs. A number of Chinese firms are ARM licensees, and I expect we will see multiple server-class ARMv8 64-bit processors come out of China in the next two years. Last month's NVIDIA NVLink announcement is especially interesting to ARM processor and server vendors. With NVLink, server vendors can connect their own ARM processor via a high-speed NVLink channel to one or more NVIDIA GPUs and offer innovative designs for specific HPC and machine learning workloads, without being tied to the traditional 2-socket, PCIe-bus server design. In addition to ARM, there is also Chinese interest in building OpenPOWER-based servers with NVIDIA GPUs. Over the next few years, Baidu is likely to have quite a few compelling new local sources of GPU-powered systems on which to run their machine learning algorithms.
A large part of China's success in the IT market has been supported by an ever-growing base of open and industry standards. While only a relatively small number of users have access to the 7000+ GPUs in Tianhe-1A, computer science students at virtually every Chinese university can write CUDA code on PCs and laptops with NVIDIA GeForce GPUs, or by accessing GPUs in the cloud. Many of those Chinese clouds run on OpenStack. That is no doubt one of the reasons Huawei is a Gold Member of the OpenStack Foundation, supporting the foundation at the same financial level as Cisco.
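To give a sense of how low that barrier to entry is, here is the kind of first CUDA program a student might compile on a GeForce-equipped laptop with the free CUDA toolkit: the classic vector-add kernel. This is purely an illustrative sketch (it assumes a CUDA-capable GPU and a recent toolkit with unified memory support), not a program from the original post.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified (managed) memory keeps the example short:
    // the same pointers are valid on both CPU and GPU.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %.1f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The same code runs unchanged on a $100 consumer GeForce card or a Tesla-class HPC accelerator, which is exactly why a student's laptop is a viable on-ramp to supercomputing.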
It will be interesting to watch the global HPC market continue to evolve over the next few years. There is no shortage of demand for ever more powerful traditional scientific processing to forecast the weather more accurately or develop new life-saving medications. But increasingly, new commercial HPC workloads such as machine learning promise to require equal if not greater HPC capabilities. Of course, key to supporting this increased demand is improved energy efficiency, as tracked by the Green500 list. With all ten of the top 10 systems on the current Green500 powered by NVIDIA GPUs, we know a little about energy efficiency. And of course China is not alone in seeking to increase the locally developed content of new energy-efficient HPC systems. Europe, Japan, Korea, and others, including of course the US, continue to develop their own HPC hardware and software technologies, helped along by new open and industry standards such as ARM and OpenStack.
As the world leader in visual computing, NVIDIA looks forward to working with every nation in the world to continue to drive forward innovation in HPC, in an open, collaborative environment. When a student in any country can learn to program in CUDA, be competitively admitted to one of the country's best universities, and get competing job offers from NVIDIA and Baidu, innovation happens. When server vendors can connect an NVIDIA GPU to their processor of choice without speed limits, innovation happens. When cloud computing brings the power of the world's largest supercomputers to everyday users, innovation happens.
So with my last sunset for this trip to China, I'm more excited than ever about the future of HPC.