After several dozen customer, press, and analyst interviews, and lots of show-floor walking over the last few days, I’ve come up with my list of favorite comments overheard at SC10.
“A petaflop supercomputer costs about $100,000,000.” Hum. Not sure what that person’s reference point was. There are now seven systems on the Top500 list with > 1 PF sustained performance, and I can assure you some of those have an acquisition cost much less than $100M. There are, of course, many factors that influence the acquisition cost of a supercomputer system, including networking topology, type and size of storage, and software. In addition, operating costs, including power, operations staff, datacenter space, and maintenance, easily exceed the acquisition cost over the life of the system. But one thing is for sure: supercomputing at any level continues to become more and more affordable.
“GPU systems are just a fad, the system I’m building will be able to run any code.” Double hum. The reference point here was someone building a non-x86 based supercomputer. Last I checked, Top500 processor statistics showed over 90% of systems on the Top500 list were based on some type of AMD or Intel x86 architecture. I’m not sure anyone has ever tried to count how many HPC programs are out there, but clearly your chances of being able to run any arbitrary code are higher if you are running on an x86 platform. As far as GPUs and similar accelerator technology go, a quick look at Nvidia’s CUDA Zone, AMD’s Fusion technology, the growing use of OpenCL, and Intel’s upcoming MIC co-processor family makes it easy to see that we have moved way beyond the fad stage. There are surely many new programming challenges to be addressed with GPUs, but the bus has left the station.
“Rather than race the US & China to Exascale, Europe should focus on building a half dozen quarter-exaflop or half-exaflop systems in the same timeframe.” Since I agree with this one, I’ll even tell you who said it: none other than Earl Joseph of IDC in the European Supercomputing Study. HP is working on many exascale technologies, but we are not necessarily racing to build the first exascale system at all costs. What HP is focused on is getting to repeatable, sustainable exascale systems by the end of the decade, and that means we will probably build several quarter- and half-exascale systems before then. IDC estimates, and again I agree, that building a half dozen quarter- or half-exascale systems would cost significantly less than building an exascale system, while providing significantly more total cycles for scientific research.
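The cycles side of that argument can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes an illustrative fleet mix of three quarter-exaflop and three half-exaflop systems (the half-dozen count comes from the IDC suggestion above; the exact mix is my assumption, not an IDC figure):

```python
# Back-of-the-envelope: aggregate sustained capacity of a half dozen
# sub-exascale systems vs. a single 1-exaflop machine.
EXAFLOP = 1.0  # sustained exaflops of the hypothetical single system

# Illustrative assumption: three quarter-EF and three half-EF systems
fleet = [0.25] * 3 + [0.5] * 3

aggregate = sum(fleet)
print(f"Aggregate sustained capacity: {aggregate:.2f} EF")       # 2.25 EF
print(f"Ratio vs. one exaflop system: {aggregate / EXAFLOP:.2f}x")  # 2.25x
```

Even the all-quarter-exaflop case (6 × 0.25 = 1.5 EF) delivers half again the cycles of a single exaflop machine, which is the "more total cycles" point IDC is making.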
“1G Ethernet will remain the volume Top500 interconnect until faster technologies become free on the motherboard.” Hum & yes. A quick check of the Top500 interconnect statistics shows that 1G Ethernet has for the first time dropped below 50% adoption and now leads InfiniBand by barely 3 percentage points. But HP does recognize the need to integrate faster networking technologies onto the motherboard, and that is exactly why several HP servers, including the SL390s, incorporate both 10G Ethernet and 40G QDR InfiniBand using Mellanox’s ConnectX2 chip. By integrating ConnectX2 onto the motherboard, HP not only lowers the cost for customers but also lowers power usage versus traditional add-on NICs.
What’s your favorite thing overheard at SC10? Comments welcome.