Technology-Driven Business Models (aka unlimited storage, FLOPS, & bandwidth)

Recently, Amazon announced unlimited storage on their Cloud Drive service for about $5 a month. The Microsoft Cloud and Google also offer similar (although not unlimited) personal cloud storage services. Cloud Drive is great for individuals, and all the large public clouds have plenty of commercial, industrial-strength cloud storage offerings too. Some large organizations are even banding together to create their own cloud-like storage, which can be tailored more closely to their performance and other specific requirements. James Cuff, Harvard University's Assistant Dean for Research Computing, points out on Twitter how the university, as part of the Massachusetts Green High Performance Computing Center (MGHPCC), makes it easy to use similar cloud storage.

Of course, as MGHPCC executive director John Goodhue points out, making it fast and easy isn't just about having large amounts of storage; it also requires fast network connections and the right networking protocols and software.

A similar story is taking shape in computation. Sun Microsystems launched one of the first public clouds in 2005 with a basic offering of $1 per CPU-hour and $1 per GB-month, a year before Amazon officially launched AWS in 2006. Today, Amazon and SoftLayer already offer GPU instances in their public clouds. While not quite free, the latest NVIDIA Titan X GPU offers a whopping 7 TFLOPS of single-precision compute for $999. One might ask how much Amazon would have to charge for their Prime service to offer unlimited FLOPS alongside the unlimited storage of their Cloud Drive service.
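To put those numbers side by side, here is a back-of-the-envelope sketch. The $999 price and 7 TFLOPS rating come from the paragraph above; the three-year service life is an assumption, and power, cooling, and utilization are ignored, so this is only a rough comparison with Sun's 2005 $1 per CPU-hour price.

```python
# Back-of-the-envelope arithmetic: a $999 Titan X rated at 7 TFLOPS
# (single precision), amortized over an ASSUMED three-year service life.
titan_price_usd = 999.0
titan_tflops = 7.0                    # single-precision peak, from the text
lifetime_hours = 3 * 365 * 24         # assumption: 3-year service life

usd_per_tflop = titan_price_usd / titan_tflops
usd_per_tflops_hour = titan_price_usd / (titan_tflops * lifetime_hours)

print(f"Capital cost per peak TFLOPS:   ${usd_per_tflop:.0f}")
print(f"Amortized cost per TFLOPS-hour: ${usd_per_tflops_hour:.4f}")
# Compare with Sun's 2005 grid at $1.00 per CPU-hour (power, cooling,
# and utilization are ignored here, so treat this as a rough yardstick).
```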

Of course, just as Goodhue points out that storage requires networking and software to be useful, the same is true for FLOPS. You need fast connections (between the GPU and the rest of your server and data center) and you need the right software. Since last week's announcement of the Titan X, several customers have written to me praising the card's performance, especially when combined with NVIDIA's cuDNN deep neural network library. And new NVIDIA interconnects like NVLink will let next-generation GPUs accelerate applications even more.
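As a rough illustration of why those connections matter, here is a minimal sketch (assuming PyCUDA and NumPy are installed on a machine with an NVIDIA GPU) that times host-to-device copies over PCIe. The buffer size and repetition count are arbitrary choices, not a rigorous benchmark; the point is simply that the link feeding the GPU has a measurable ceiling long before any FLOPS are spent.

```python
# Rough sketch: measure host-to-device copy bandwidth with PyCUDA.
import time
import numpy as np
import pycuda.autoinit          # creates a CUDA context on the default GPU
import pycuda.driver as drv

nbytes = 256 * 1024 * 1024                 # 256 MB test buffer (arbitrary)
host = np.random.rand(nbytes // 8)         # float64 array in pageable host memory
dev = drv.mem_alloc(host.nbytes)           # matching device allocation

drv.memcpy_htod(dev, host)                 # warm-up copy

reps = 10
start = time.perf_counter()
for _ in range(reps):
    drv.memcpy_htod(dev, host)             # synchronous host-to-device copy
elapsed = time.perf_counter() - start

gb_moved = reps * host.nbytes / 1e9
print(f"Host-to-device bandwidth: {gb_moved / elapsed:.1f} GB/s")
```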

Storage, FLOPS, and network switches all have one thing in common: they require power to move data. On a modern processor, the power used by the floating point unit itself is already less than the power required to move the operands from memory to the floating point unit and then to move the result back again. If Amazon charged the right amount for moving data around on a processor, it wouldn't be too hard to offer the FLOPS themselves for free. Don't worry, Amazon doesn't charge consumers extra to move data into and out of their Cloud Drive. At a larger scale, though, Amazon and all the major public clouds do charge for bandwidth into and out of their data centers for most commercial services.
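A quick worked example makes the point. The energy figures below are assumed, order-of-magnitude values for illustration only, not measurements of any particular processor; what matters is the ratio, not the exact numbers.

```python
# Illustrative, ASSUMED order-of-magnitude energy figures (not measured data)
# showing why moving operands can cost more than the arithmetic itself.
pj_per_dp_flop   = 20     # assumption: one 64-bit floating point operation
pj_per_dram_word = 1000   # assumption: fetching one 64-bit word from off-chip DRAM

words_moved = 3           # two operands in, one result written back
movement_pj = words_moved * pj_per_dram_word

print(f"Arithmetic:    {pj_per_dp_flop} pJ")
print(f"Data movement: {movement_pj} pJ "
      f"(~{movement_pj / pj_per_dp_flop:.0f}x the arithmetic)")
```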

Technology advances have allowed both the providers and the users of public clouds to develop innovative business models, but we are still only at the start of cloud adoption. New technologies like NVIDIA's shared virtual GPU (vGPU) allow not just storage and computation to be moved into the cloud, but a user's entire desktop. Anyone who has used a Chromebook has had a taste of the productivity locked up in the hundreds of millions of desktop users who have been turned into unwilling system administrators. Being able to deliver a full designer or power-user desktop, with rich workstation-class 3D graphics, to any laptop, tablet, or smartphone will drive a new wave of enterprise and public cloud adoption.

About Marc Hamilton

Marc Hamilton – Vice President, Solutions Architecture and Engineering, NVIDIA. At NVIDIA, the Visual Computing Company, Marc leads the worldwide Solutions Architecture and Engineering team, responsible for working with NVIDIA's customers and partners to deliver the world's best end-to-end solutions for professional visualization and design, high performance computing, and big data analytics. Prior to NVIDIA, Marc worked in the Hyperscale Business Unit within HP's Enterprise Group, where he led the HPC team for the Americas region. Marc spent 16 years at Sun Microsystems in HPC and other sales and marketing executive management roles. Marc also worked at TRW developing HPC applications for the US aerospace and defense industry. He has published a number of technical articles and is the author of the book "Software Development: Building Reliable Systems". Marc holds a BS degree in Math and Computer Science from UCLA, an MS degree in Electrical Engineering from USC, and is a graduate of the UCLA Executive Management program.