GPU Technology Conference Wrap-Up

It has been a busy week at Nvidia’s GPU Technology Conference, mostly filled with customer meetings for me. In this wrap-up post I’ll focus on some personal observations and try to extract a few trends, rather than document the week’s product announcements, which have been well covered elsewhere.

I’ve been coming to this conference for many years, and each year there is notably greater attendance by a broader audience. An Nvidia employee shared pre-registration numbers with me showing increases over last year, and the show floor seemed considerably more crowded than even those numbers hinted at. Despite the growth in attendance, the conference continues to stay true to its technical roots, with many excellent technical sessions. Walking the show floor to peruse my competitors’ wares, I found booth conversations dominated by technical discussions rather than lookie-loos. There was also a marked increase in displays and presentations outside the core workstation and server markets that dominated the first and second phases of GPU adoption. In automotive technology specifically, Audi, BMW, Tesla, and several exotic manufacturers had vehicles with Nvidia-powered infotainment systems spread around the conference center, with several offering test drives to attendees (sorry, the McLaren and Lamborghini were not available for test drives).

As expected, Nvidia shared a few high-level details on future high-end Tesla GPUs used in servers. Stacked memory in the generation-after-next Volta processor seemed to draw the most press coverage and interest, although stacked memory is really an industry trend that many companies are pursuing; it was interesting to see Nvidia throw its weight behind it, but it isn’t in itself a novel idea. Personally, I was much more excited about some of the announcements Nvidia made on the mobile front with its Tegra line, specifically full CUDA and OpenGL support in the Logan, or Tegra 5, chip. Promising a Logan chip “the size of a dime,” Nvidia is one of several ARM vendors coupling interesting computational add-ons to their ARM cores. While not aimed at CUDA workloads, TI’s KeyStone II ARM + DSP chip takes a similar approach. While traditional server-focused processors like Xeon or Tesla are not going away, it is hard to argue against the sheer numbers presented by the mobile market, which will require billions of processors a year versus millions a year for the server space. With the right packaging, some of these mobile chips are likely to find their way into server platforms for specific workloads.

The market share growth of any hardware platform is really a three-step cycle, both in the consumer/mobile market and the enterprise/server market. Step 1: end users tend to pick their hardware platform based more on the applications available for it than on specific hardware features. Step 2: software developers, in turn, pick the hardware platforms they are going to write to and support based on platform volume, which represents their potential market. Step 3: the more apps are developed for a hardware platform, the more users the platform attracts. Now go to step 1 and repeat.
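The compounding nature of that cycle can be illustrated with a toy simulation. This is purely a sketch: the function name and the growth coefficients are made up for illustration, not a real adoption model.

```python
def simulate_adoption(users, apps, cycles, user_pull=0.01, dev_pull=0.001):
    """Toy model of the three-step cycle: apps attract users (steps 1 and 3),
    and users attract developers, who write more apps (step 2).
    Coefficients are arbitrary illustrative values."""
    for _ in range(cycles):
        users += int(apps * user_pull)   # more apps on the platform draw more users
        apps += int(users * dev_pull)    # a bigger user base draws more developers
    return users, apps

# Starting from a hypothetical base, both populations grow each cycle,
# and each feeds the other's growth.
users, apps = simulate_adoption(users=1_000_000, apps=500, cycles=10)
```

Even with tiny per-cycle pull factors, the loop compounds: each pass through the cycle leaves both the app catalog and the user base larger than before, which is exactly why maximizing the addressable platform matters so much to developers.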

Nvidia’s vision of a CUDA platform stretching from dime-sized Logan chips to super-high-end, Volta stacked-memory-powered supercomputers maximizes the potential market size for CUDA application developers. And while no server vendor has yet announced a Logan-based server, as more and more CUDA applications are developed, there are likely to be interesting server use cases for Logan and other mobile processors. Already, HP with Project Moonshot has discussed servers using ARM and Atom mobile processors. As the three-step process I outlined above continues to play out for CUDA, I expect future GPU Technology Conferences will have many vendors displaying not only Tesla-based servers but interesting Tegra-based servers as well.

As much as I enjoy thinking through the business model impacts of technology announcements, perhaps the most rewarding and refreshing part of attending a show like the GPU Technology Conference is talking to the end users and developers using the technology. Stanford University’s use of GPU technology to advance Alzheimer’s research, which was highlighted at the show, is one great example. Yesterday evening I spent at least an hour (I lost track of the time) with an Nvidia researcher and one of the lead developers of Gromacs, discussing how the Gromacs team has steadily increased performance by enabling key parts of the code on GPUs, first on single GPUs and most recently across multiple GPUs. The developer, based in Sweden, is a scientist by training, and today his work is completely supported by government and private grants. “I participated in a few start-ups,” he told me, “but I decided I much more enjoyed being a scientist and working on Gromacs than starting businesses, not that there is anything wrong with the latter.” He added, “Living in Sweden, we have a social system that lets me do my job and worry about being a scientist rather than worrying about how to pay for healthcare or sending my kids to college.” Maybe there are some non-technology lessons there for other countries.

So once again, a great week and a great conference. While some observers of the information technology market may complain that there isn’t a lot of innovation these days in how enterprises process their payroll and accounts receivable, and one report after another expounds on the death of the traditional PC, Nvidia seems to be pointing its research and development straight at the heart of the mobile and high-performance computing markets, which continue to be a hotbed of innovation and growth.


About Marc Hamilton

Marc Hamilton – Vice President, Solutions Architecture and Engineering, NVIDIA. At NVIDIA, the Visual Computing Company, Marc leads the worldwide Solutions Architecture and Engineering team, responsible for working with NVIDIA’s customers and partners to deliver the world’s best end-to-end solutions for professional visualization and design, high performance computing, and big data analytics. Prior to NVIDIA, Marc worked in the Hyperscale Business Unit within HP’s Enterprise Group, where he led the HPC team for the Americas region. Marc spent 16 years at Sun Microsystems in HPC and other sales and marketing executive management roles. Marc also worked at TRW developing HPC applications for the US aerospace and defense industry. He has published a number of technical articles and is the author of the book “Software Development: Building Reliable Systems.” Marc holds a BS degree in Math and Computer Science from UCLA, an MS degree in Electrical Engineering from USC, and is a graduate of the UCLA Executive Management program.