Nvidia releases Blackwell platform to go back to the future, extends partnership with AWS for scale

This syndicated post originally appeared at Zeus Kerravala – SiliconANGLE.

As Nvidia Corp.’s annual developer conference GTC kicked off this week in San Jose, the company made its usual flurry of product announcements, the highlight of which was the long-awaited Blackwell platform. One might look at Blackwell as a graphics processing unit, but in reality, it’s more than that, hence the “platform” descriptor.

Blackwell is interesting because it blurs the lines between chips and systems. A single Blackwell GPU is actually two dies connected so they act as one chip. The GB200 Grace Blackwell Superchip goes further, putting two Blackwell GPUs and a Grace central processing unit on one board, creating one massive compute element. That can then be used to build a single system, a cluster or even a rack. The “magic” is that there is no performance loss in sending information between the chips, regardless of configuration.

The company touts Blackwell’s ability to help companies “build and run real-time generative AI on trillion-parameter large language models,” and to do so at a small fraction of the cost and energy consumption of the previous platform. In some ways, Blackwell is a return to the past for Nvidia, a point Chief Executive Jensen Huang addressed during a Q&A with analysts.

“We are going back to where we started,” he said. “Blackwell generates content, just like our GPUs. They were used to create graphics. This is different in that it creates content of all types for everyone. Blackwell was built for the generative AI era.”

In addition, the company announced that Amazon Web Services Inc. will offer Nvidia Grace Blackwell GPU-based Amazon EC2 instances and Nvidia DGX Cloud.

Specifics of each announcement include the following:

The Blackwell platform

Nvidia stated that Blackwell builds on six revolutionary technologies that together support AI models scaling up to 10 trillion parameters:

  • A GPU with 208 billion transistors (the world’s most powerful chip), manufactured using a custom-built TSMC 4NP process, with two reticle-limit GPU dies connected by a 10-terabyte-per-second chip-to-chip link into a single, unified GPU.
  • Support for double the compute and model sizes with new four-bit floating-point (FP4) AI inference capabilities (a rough memory-footprint sketch follows this list).
  • Accelerated performance for multitrillion-parameter and mixture-of-experts AI models.
  • A dedicated engine for reliability, availability and serviceability.
  • Protection for AI models and customer data without performance compromise.
  • A dedicated decompression engine that supports the latest formats for high-performing data analytics and data science.
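To make the four-bit point concrete, here is a minimal back-of-the-envelope sketch, in plain Python, of how moving model weights from eight-bit to four-bit precision roughly halves their memory footprint, which is where the “double the model sizes” headroom comes from. The parameter counts are illustrative, chosen to match the trillion-parameter scale Nvidia quotes; real deployments also carry activation and key-value-cache overhead that this ignores:

```python
# Back-of-the-envelope memory footprint for LLM weights at different
# precisions. Parameter counts are illustrative; serving overheads
# (KV cache, activations) are deliberately ignored.

BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "fp4": 0.5}

def weight_footprint_tb(params: float, precision: str) -> float:
    """Approximate weight storage in terabytes at the given precision."""
    return params * BYTES_PER_PARAM[precision] / 1e12

for params in (1e12, 10e12):  # 1 trillion and 10 trillion parameters
    for prec in ("fp16", "fp8", "fp4"):
        print(f"{params / 1e12:>4.0f}T params @ {prec}: "
              f"{weight_footprint_tb(params, prec):6.1f} TB")
```

At FP4, a 1-trillion-parameter model’s weights fit in roughly 0.5 terabytes, half of what FP8 needs, so the same memory can hold a model twice the size.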

Expanding cooperation with AWS

Looking to democratize AI, Nvidia will work closely with AWS to offer the Blackwell platform. Blackwell, combined with AWS’ Elastic Fabric Adapter networking, Nitro virtualization and EC2 UltraClusters, will give customers the ability to scale up to thousands of GB200 Superchips, speeding inference for resource-intensive, multitrillion-parameter language models. And because it’s offered as an AWS service, customers can start with a relatively modest environment instead of plunking down the money for a DGX server.
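As a rough illustration of that “start small” point, a customer consuming Blackwell through EC2 would provision capacity with ordinary AWS tooling rather than buying hardware up front. The sketch below uses boto3, AWS’ standard Python SDK; the instance type, AMI ID and key-pair name are all placeholders, since AWS had not published the Grace Blackwell instance names at announcement time:

```python
import boto3

# Minimal sketch: requesting a single GPU instance through the standard
# EC2 API. NOTE: "gb200.hypothetical" is a placeholder -- substitute the
# real Grace Blackwell instance type once AWS publishes it. The AMI ID
# and key-pair name are likewise illustrative.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # placeholder deep-learning AMI
    InstanceType="gb200.hypothetical",  # placeholder instance type
    MinCount=1,
    MaxCount=1,                         # start with one node, scale later
    KeyName="my-key-pair",              # placeholder key pair
)
print(response["Instances"][0]["InstanceId"])
```

The same API call scales from one node to an UltraCluster-sized fleet by adjusting the counts, which is the practical difference between a cloud service and a capital purchase.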

Also, Project Ceiba, which Nvidia and AWS announced in 2023 as a play to build one of the world’s fastest supercomputers, will use Blackwell, hosted on AWS. Ceiba will comprise 20,736 B200 GPUs, built on the new Nvidia GB200 NVL72 system and connected to 10,368 Nvidia Grace CPUs via fifth-generation NVLink.
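Those figures are internally consistent with the GB200 design, as a quick check shows. This is plain Python arithmetic over the announced numbers; the 72-GPUs-per-system figure is taken from the NVL72 name:

```python
# Sanity-check the Project Ceiba topology from the announced figures.
B200_GPUS = 20_736
GRACE_CPUS = 10_368
GPUS_PER_NVL72 = 72  # an NVL72 system links 72 Blackwell GPUs

# Two GPUs per Grace CPU matches the GB200 Superchip layout
# (two Blackwell GPUs paired with one Grace CPU).
print(B200_GPUS / GRACE_CPUS)      # -> 2.0 GPUs per CPU
print(B200_GPUS / GPUS_PER_NVL72)  # -> 288.0 NVL72 systems
```

In other words, Ceiba works out to exactly two Blackwell GPUs per Grace CPU, the GB200 Superchip ratio, spread across 288 NVL72 systems.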

The company says its R&D teams will use Ceiba to advance AI for large language models, graphics (image, video and 3D generation), simulation, digital biology, robotics, self-driving cars and Nvidia Earth-2 climate prediction, helping propel future generative AI innovation.

AWS and Nvidia are also developing new applications, including Nvidia BioNeMo foundation models for generative chemistry, protein structure prediction and understanding how drug molecules interact with targets. Nvidia will make these models available on AWS HealthOmics, which “helps healthcare and life sciences organizations store, query, and analyze genomic, transcriptomic and other omics data.”

The teams are collaborating to launch generative AI microservices for drug discovery, medtech and digital health.

Thoughts on Nvidia and AWS

Arguably, the competitive moat around Nvidia is so large that it would take another company years to disrupt it. The company gets criticized for its vertical integration, but that integration is precisely what guarantees the performance AI requires. Nvidia has almost singlehandedly enabled a new era in computing, and AWS’ alignment with the company can help deliver the technology to the masses.

Sure, AI is a lot of hype right now, but I believe, to use a baseball analogy, that the pitchers are still warming up and there is much more to come.

Author: Zeus Kerravala

Zeus Kerravala is the founder and principal analyst with ZK Research. Kerravala provides a mix of tactical advice to help his clients in the current business climate and long-term strategic advice.