
Nvidia Doubles Down on AI at Developer Conference


CEO Huang also announced hardware and partnerships, including with AWS and Azure, designed to democratize machine learning

By Kurt Marko for SDxE

SANTA CLARA -- The highlight of every technology vendor event is the CEO's keynote — part pep rally, part spotlight on shifts in company strategy, part rolling stream of product announcements.

No one does this better than Nvidia co-founder and CEO Jensen Huang. For almost two and a half hours this week, he kept attendees at the company's GPU Developer Conference (GTC) glued to their seats in a solo performance that ranged from evangelizing AI's potential to change the world to technical details about Nvidia's latest accelerator chip.

The focus on AI and machine learning (ML) may be a surprise to those who haven't been paying attention to the company since Nvidia's debut as the premier builder of PC graphics cards. Indeed, the company's latest quarterly earnings, released the day before Huang's keynote, showed that Nvidia still derives over half its revenue from gaming. However, it's in the data center where Nvidia is growing the fastest, a result of an explosion in the use of AI algorithms to solve business problems and provide previously unknowable insights from business data. With Nvidia providing the GPU engine that powers most new ML software, these applications — not gaming — now drive the company's strategy.

The star of Huang's keynote was the expected announcement of Nvidia's next-generation GPU platform, Volta, the successor to the Pascal chips released last year as the most powerful processors for AI and high-performance computing (HPC). Like its predecessor, the Volta Tesla V100 is a huge chip — roughly the size of an Apple Watch — sporting 21 billion transistors and 5,120 CUDA cores (the computational building blocks of all Nvidia-based applications), and capable of 7.5 TFLOPS of double-precision floating point performance.

It has a development price tag to match: $3 billion.

The platform delivers new features designed to boost the performance of ML workloads. They include a new microarchitecture with a set of so-called Tensor Cores, designed to optimize the performance of operations common to deep-learning neural nets; a faster implementation of on-package high-bandwidth memory (HBM2); and Nvidia's NVLink 2 high-speed interconnect for moving data among multiple GPUs or between GPU and CPU.

Volta's impressive design specifications translate into substantial performance improvements on AI benchmarks — up to 5x that of the prior-generation P100. Combined with the ability to better cluster multiple GPUs, this allows V100 systems to finish in minutes jobs that once took hours or days.
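The Tensor Cores mentioned above perform a fused multiply-accumulate on small matrix tiles, multiplying half-precision (FP16) inputs while accumulating the result in single precision (FP32). The NumPy sketch below illustrates only that mixed-precision pattern — the function name and data are illustrative assumptions, and real Tensor Core code would go through CUDA libraries rather than NumPy:

```python
import numpy as np

def tensor_core_tile(a_fp16, b_fp16, c_fp32):
    """Multiply two FP16 tiles and accumulate the result into an FP32 tile."""
    # Promote the half-precision inputs to FP32 for the multiply, then add
    # the FP32 accumulator -- the precision flow Volta's Tensor Cores
    # implement in hardware for deep-learning matrix operations.
    return a_fp16.astype(np.float32) @ b_fp16.astype(np.float32) + c_fp32

# Illustrative 4x4 tiles (4x4 is the tile size Nvidia described for Volta).
a = np.ones((4, 4), dtype=np.float16)
b = np.ones((4, 4), dtype=np.float16)
c = np.zeros((4, 4), dtype=np.float32)

d = tensor_core_tile(a, b, c)
print(d.dtype, d[0, 0])  # float32 4.0
```

Keeping the accumulator in FP32 is what lets deep-learning frameworks use fast FP16 arithmetic without the rounding error that pure half precision would introduce across many accumulations.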

Calls To Action

Since exploiting GPUs requires new algorithms and applications, Nvidia always focuses on how to get the latest technology into the hands of researchers and developers. Thus, in conjunction with the V100, the company announced several new systems and services, including:

> Two DGX developer systems: the rack-mount DGX-1, sporting eight V100s, and the DGX Station, a liquid-cooled deskside tower with four V100s. Both will be available in Q3, priced at $149,000 and $69,000, respectively.

> Nvidia also updated its previously announced (at the Open Compute Summit) HGX-1 system, which is based on Microsoft's Project Olympus and designed for large cloud providers, to incorporate eight V100s.

> A fully containerized development stack designed to simplify deep-learning development by encapsulating all the necessary elements of a very complex AI programming platform into pre-built and supported Docker images. The development stack can be deployed on-premises using a GPU workstation, like the DGX, or in the cloud on a new Nvidia Cloud Platform.

Aside from building its own cloud service, Nvidia also announced that the containerized platform would be supported by AWS and Azure to create what Huang calls the "first hybrid computing platform for deep learning."

> And, in a sign that AI is fast moving from academia and the lab to business applications, Nvidia announced that it has worked with SAP to develop a new application, Brand Impact, that can sift through reams of social media and other data to assess the effectiveness of a company's advertising campaigns. According to SAP, the software is the first of many ML applications, such as a new customer support ticketing system, designed to add AI across SAP's product line.

Aside from the GPU hardware and services, Nvidia also announced new products for the automotive and robotics industries, including:

> A partnership with Toyota in which the automaker will use Nvidia's next-generation DRIVE PX platform in its future autonomous vehicles.

> A new robot simulator, Isaac, built on Nvidia's Jetson embedded GPU platform and designed to speed the development of robots and other intelligent edge devices.

> Project Holodeck for the development of immersive, VR collaboration environments.

Huang closed with a call to action, saying the industry is entering "a new era of AI computing." To enable that era, Nvidia's Volta platform provides the next leap in performance to catalyze the development of previously infeasible ML applications. Furthermore, by releasing relatively affordable workstations, pre-integrated software development platforms and, in conjunction with major cloud providers, GPU-accelerated services, Nvidia hopes to democratize the technology and enable a broader audience to incorporate AI into their own products and services.

Kurt Marko is an IT analyst, consultant and regular contributor to a number of technology publications.