NVIDIA Eases The Complexity Of Robot Training

This syndicated post originally appeared at eWEEK.

At GTC, NVIDIA announced a number of product updates designed to broaden the use of robots.

This week, GPU market leader NVIDIA is holding its virtual GPU Technology Conference (GTC). Over the years, GTC has evolved from a graphics and gaming show to an industry event dedicated to all things artificial intelligence (AI).

One of the major use cases for AI and NVIDIA technology is robotics. This spans the entire robot development lifecycle: training, simulation, building, deployment, and management.

NVIDIA’s Isaac robotics platform is accelerating the way intelligent robots are developed, deployed, and managed. The end-to-end solution aims to optimize the efficiency of robot fleets across different environments.

At GTC, NVIDIA made several major announcements related to the Isaac robotics platform, which are set to enhance the fields of robotic simulation, synthetic data generation, and more. Here is a recap:

Isaac Sim on Omniverse Cloud

NVIDIA made its Isaac Sim robotic simulation platform available on Omniverse Cloud for enterprises.

Omniverse Cloud is NVIDIA’s platform-as-a-service (PaaS) offering for compute-intensive workloads, such as synthetic data generation and software validation. By bringing Isaac Sim to Omniverse Cloud, NVIDIA now gives enterprises cloud-based access to its robotics software development kits (SDKs) and tools. Other NVIDIA products like Drive and Universal Scene Description (USD) Composer will also be available on Omniverse Cloud for enterprises.

NVIDIA already offers several ways to run Isaac Sim in the cloud. One option is to deploy Isaac Sim containers from NGC, NVIDIA’s catalog of GPU-optimized software, on any cloud service provider, though this requires some setup and management by the enterprise. Another option is to use AWS RoboMaker and run a fully managed version of Isaac Sim.
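For teams taking the self-managed route, pulling the Isaac Sim container from NGC looks much like any other GPU-enabled container deployment. The following is a minimal sketch using the docker Python SDK; the image tag and run options are assumptions for illustration, since NGC tags and recommended arguments change between releases.

```python
# Minimal sketch of a self-managed Isaac Sim container deployment on a
# GPU-equipped cloud instance, using the docker Python SDK (pip install docker).
# The tag and run options below are assumptions; check the NGC catalog entry
# for the current tag and recommended launch arguments.
import docker

client = docker.from_env()

# Pull the Isaac Sim image from NVIDIA's NGC registry (nvcr.io).
image = client.images.pull("nvcr.io/nvidia/isaac-sim", tag="2022.2.1")  # assumed tag

# Start the container with access to all GPUs on the host.
container = client.containers.run(
    image,
    detach=True,
    device_requests=[
        docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
    ],
    network_mode="host",  # convenient for streaming the sim; adjust for production
)
print(f"Isaac Sim container started: {container.short_id}")
```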

Gerard Andrews, Senior Product Marketing Manager of Robotics at NVIDIA, emphasized the value of cloud computing in robotics, particularly when it comes to scalability of model training: “You’re always going to be training models, updating these models, and will continue to need a methodology to revalidate those models. We believe that type of interaction with the simulator is best done in a cloud infrastructure.”

Isaac ROS Enhancements

Isaac ROS, NVIDIA’s collection of GPU-accelerated Robot Operating System (ROS) packages, is aimed at bringing perception AI to the robotics development community. The latest release features a new grid localizer package that uses LiDAR to locate robots on a map, an update to the nvblox 3D reconstruction software that adds a people-detection mode, and newly open-sourced modules. The release also supports the Jetson Orin NX and Orin Nano modules.
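Because Isaac ROS packages plug into standard ROS 2 graphs, they exchange data over ordinary topics. As a rough illustration of the kind of LiDAR input a grid-localization package consumes, here is a minimal ROS 2 (rclpy) subscriber; the /scan topic name is an assumption, and the actual Isaac ROS packages define their own input and output topics.

```python
# Minimal ROS 2 (rclpy) sketch of consuming the LiDAR data that a
# grid-localization package would use. The '/scan' topic name is an assumption;
# real Isaac ROS packages document their own topic names.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan


class ScanMonitor(Node):
    def __init__(self):
        super().__init__("scan_monitor")
        # Queue depth of 10 is a common default for sensor topics.
        self.create_subscription(LaserScan, "/scan", self.on_scan, 10)

    def on_scan(self, msg: LaserScan) -> None:
        # Log how many range readings arrived in this sweep.
        self.get_logger().info(f"Received {len(msg.ranges)} LiDAR range readings")


def main():
    rclpy.init()
    rclpy.spin(ScanMonitor())
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```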

The development of robots is a complex process involving engineers from different disciplines. Therefore, it’s important to have collaborative tools that provide access to the same datasets and simulation for all team members. Using the cloud, simulations can be accessed from anywhere, even without a GPU-accelerated computer.

Jetson Portfolio Expansion

NVIDIA has expanded its Jetson portfolio of computing boards with the Jetson Orin Nano entry-level developer kit, which can be pre-ordered for $499.

The developer kit features a 1,024-core NVIDIA Ampere architecture GPU and supports a wide range of AI models, including transformers. It also delivers the performance and energy efficiency needed for cloud-to-edge workloads, a model in which two computers work together to optimize robot performance.

Jetson (the first computer) acts as the robot’s brain or control center. The second computer runs in the cloud and is used to train, simulate, and continuously test the software that runs on the robot. By combining the power of both computers, the cloud-to-edge model enables intelligent robotics.
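In practice, the cloud-to-edge loop often boils down to the edge device running inference locally while periodically checking the cloud for a retrained model and swapping it in. The sketch below illustrates that pattern with hypothetical names and a placeholder registry URL; it is not a specific NVIDIA API.

```python
# Hypothetical sketch of the cloud-to-edge pattern described above: a Jetson-class
# device runs inference locally while polling a cloud-side model registry for a
# newer model version. All names and the URL below are illustrative only.
import time
import urllib.request

REGISTRY_URL = "https://example.com/models/perception"  # placeholder registry
CHECK_INTERVAL_S = 300  # poll the cloud every five minutes

current_version = None

def fetch_latest_version() -> str:
    # Ask the cloud-side registry which model version is current.
    with urllib.request.urlopen(f"{REGISTRY_URL}/version", timeout=10) as resp:
        return resp.read().decode().strip()

def download_model(version: str) -> str:
    # Download the model artifact and return a local path for the runtime to load.
    path = f"/tmp/perception_{version}.plan"
    urllib.request.urlretrieve(f"{REGISTRY_URL}/{version}", path)
    return path

while True:
    latest = fetch_latest_version()
    if latest != current_version:
        model_path = download_model(latest)
        print(f"Swapping in updated model {latest} from {model_path}")
        current_version = latest
    time.sleep(CHECK_INTERVAL_S)
```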

The Jetson Orin family consists of six modules, all of which are now in full production, divided into three main tiers:

  • The high-computing Jetson AGX Orin, in 64 GB and 32 GB for complex edge AI workloads
  • The Jetson Orin NX in 16 GB and 8 GB
  • The Jetson Orin Nano in 8 GB and 4 GB, the new entry-level option for robotics

TAO 5.0 Release

NVIDIA rolled out TAO 5.0, a low-code AI training toolkit that brings sophisticated vision model development to any data scientist, cloud service, or device. TAO 5.0 runs across various cloud services, automates the tedious process of dataset curation, and adds automated machine learning (AutoML) capabilities that reduce manual model tuning.

The release of TAO 5.0 is a major expansion for the vendor, said Adam Scraba, NVIDIA’s Director of Product Marketing. It democratizes AI development, making it accessible to a wider audience.

Metropolis Platform Advancements

NVIDIA is also working to make computer vision more accessible through Metropolis, an end-to-end development workflow that brings AI and data analytics to physical spaces such as buildings, factories, warehouses, and smart cities.

NVIDIA customers like Procter & Gamble, PepsiCo, BMW, and Siemens have already adopted Metropolis to streamline operations. With the latest expansion, Metropolis now includes a low-code AI training toolkit, which is expected to promote computer vision and AI adoption across industries.

Additionally, NVIDIA has partnered with STMicroelectronics to bring sophisticated vision models to microcontrollers at scale. The goal is to improve computer vision in industrial settings such as factories, warehouses, and supply chains.

“We’re further pushing accessibility of these latest vision capabilities. We’re packaging up really sophisticated solutions to effectively solve these grand computer vision challenges in the form of API-driven services,” said Scraba.

By providing powerful tools, workflows, and accessible AI development solutions, NVIDIA is contributing to the advancement of robotics across industries. These advancements are expected to not only improve efficiency but also pave the way for safer and more reliable human-robot interactions in the future.

While NVIDIA makes excellent silicon, its real strength is its ability to put together “full stack” solutions, which makes deployment much easier. Instead of having to cobble together parts from different companies, or even from across the NVIDIA portfolio, developers have access to everything they need. The work in robotics is an excellent example, as it requires far more than building hardware: there are training, simulation, and cloud integration challenges as well. NVIDIA has taken much of the complexity out of the process, enabling companies to focus on use cases rather than technology integration.

Author: Zeus Kerravala

Zeus Kerravala is the founder and principal analyst with ZK Research. Kerravala provides a mix of tactical advice to help his clients in the current business climate and long-term strategic advice.