Architecture Does Matter

“Architecture Does Matter” was my favorite quote of the day, from Scott Friedman, Chief Technologist at UCLA’s Institute for Digital Research and Education. With students returning to campus for the first day of Winter quarter classes, it was one of those picture-perfect Westwood mornings, blue sky and temperatures in the 70s. Scott was talking about the new HP SL390s GPU-enabled servers he is bringing online for a GPU Programming Bootcamp being held on campus. His team already manages a cluster of SL390s servers with 288 Nvidia GPUs, recently upgraded to the latest Nvidia M2090s. For the bootcamp, Scott is commissioning several additional SL390s servers with three M2090s each, as well as one SL390s with eight M2090 GPUs. Because, as Scott said, “Architecture Does Matter.”

Of course, a lot of computer scientists beyond UCLA think the same thing. The phrase “co-design” is rapidly being adopted in many circles, including the DOE’s Exascale program, to express the belief that high performance software needs to be increasingly aware of the underlying hardware architecture. For many reasons, ranging from the proliferation of scripting languages to the steady performance gains of standard x86 processors, programmers over the last decade or two have often focused on productivity and not worried much about how the underlying architecture impacts performance. Today that is no longer the case. As more and more FLOPS become available to the programmer, thanks to Nvidia GPUs and other acceleration technologies like Intel’s upcoming MIC processors, other aspects of system architecture become increasingly important. For instance, as the power and cost of a FLOP approach zero, data movement becomes the driving cost and power factor, one that will shape future server design decisions ranging from memory architecture to networking.
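To put rough numbers on the data-movement point, consider single-precision SAXPY (y = a*x + y): each element costs two flops but moves twelve bytes. Here is a back-of-envelope sketch in C (my own illustration, using approximate published M2090 figures, not anything from UCLA) showing why memory bandwidth, not peak FLOPS, sets the ceiling for code like this:

/*
 * Back-of-envelope: arithmetic intensity of single-precision SAXPY
 * versus an M2090-class machine balance. The GPU figures below are
 * approximate published specs, used only for illustration.
 */
#include <stdio.h>

int main(void) {
    /* Per element of y[i] = a * x[i] + y[i]: */
    const double flops_per_elem = 2.0;        /* one multiply + one add  */
    const double bytes_per_elem = 3.0 * 4.0;  /* read x, read y, write y */

    const double peak_gflops = 1331.0;  /* ~M2090 single-precision peak */
    const double peak_gb_s   = 177.0;   /* ~M2090 memory bandwidth      */

    double intensity = flops_per_elem / bytes_per_elem;  /* flops/byte */
    double ceiling   = intensity * peak_gb_s;            /* GFLOPS     */

    printf("arithmetic intensity: %.3f flops/byte\n", intensity);
    printf("bandwidth-bound ceiling: %.1f GFLOPS of %.0f GFLOPS peak\n",
           ceiling, peak_gflops);
    return 0;
}

The ceiling works out to roughly 30 GFLOPS against a peak of over 1,300, which is the whole point: for codes like this, moving the data is what you pay for, not the arithmetic.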

As Scott described this morning, “I want to make sure students understand the differences in programming with different numbers of GPUs.” The students will be programming in CUDA, in OpenCL, and also using the new OpenACC directives, which promise to be one of the simplest ways yet* to program GPUs.
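For readers who have not yet seen OpenACC, here is a minimal sketch of the directive style in C (my own illustration, not bootcamp material): the same SAXPY loop, offloaded to the GPU with a single pragma.

/*
 * Minimal OpenACC sketch: the pragma asks a directive-enabled compiler
 * (e.g. "pgcc -acc saxpy.c") to generate a GPU kernel for the loop and
 * to manage the host-device copies of x and y. Remove the pragma and
 * the same source builds as ordinary serial C.
 */
#include <stdlib.h>

void saxpy(int n, float a, float *restrict x, float *restrict y) {
    #pragma acc parallel loop
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    int n = 1 << 20;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy(n, 3.0f, x, y);  /* every y[i] is now 5.0f */
    free(x);
    free(y);
    return 0;
}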

*The back page of this month’s IEEE Spectrum magazine had an advertisement for MathWorks’ MATLAB that said something to the effect of “if you can write a for loop you can write parallel code … that runs on GPUs”. MATLAB is a great tool and, in fact, a fine way to harness the power of GPUs. OpenACC, on the other hand, is intended for developers writing in traditional compiled languages such as Fortran, C, or C++. But who knows, I suppose there is no reason that MATLAB could not adopt OpenACC directives in the future.

If you can’t get to UCLA to enjoy the beautiful weather today, you can at least check out the UCLA Visualization Portal and learn a bit more about what UCLA is doing with HP servers and Nvidia GPUs.


About Marc Hamilton

Marc Hamilton – Vice President, Solutions Architecture and Engineering, NVIDIA. At NVIDIA, the Visual Computing Company, Marc leads the worldwide Solutions Architecture and Engineering team, responsible for working with NVIDIA’s customers and partners to deliver the world’s best end-to-end solutions for professional visualization and design, high performance computing, and big data analytics. Prior to NVIDIA, Marc worked in the Hyperscale Business Unit within HP’s Enterprise Group, where he led the HPC team for the Americas region. Marc spent 16 years at Sun Microsystems in HPC and other sales and marketing executive management roles. He also worked at TRW developing HPC applications for the US aerospace and defense industry. He has published a number of technical articles and is the author of the book “Software Development: Building Reliable Systems”. Marc holds a BS in Math and Computer Science from UCLA, an MS in Electrical Engineering from USC, and is a graduate of the UCLA Executive Management program.