The next wave of cloud computing requires a distributed network architecture. How will networks adapt?
Since the birth of computing, networks have evolved alongside compute architecture. From mainframes to client-server to branch-office computing, networks have had to keep pace through their own evolutionary shifts. Today, compute has become highly distributed and that’s driving yet another major change: to remain effective, networks now must be distributed as well.
Distributed Clouds Require Distributed Networks
For the past decade, centralized clouds have been the norm, deployed as a public or private cloud. Innovation at the edge has given rise to distributed cloud computing, in which a compute fabric spans public, private, and edge locations. This lets businesses move data and workloads closer to the user, improving customer and employee experiences.
Now that computing in the cloud has taken an evolutionary step, network infrastructure is actively transitioning to a distributed model that enables it to be highly agile and move at the speed of the cloud. In addition, modern security techniques, such as zero trust, need to shift to a network-centric model to protect distributed data, workloads and compute resources.
Pluribus Extends its Fabric to NVIDIA DPUs
Despite the network’s evolution, today’s organizations continue to face many networking challenges related to fragmented networks, proprietary solutions, and high operating costs. Pluribus Networks believes it has an answer: a switch fabric designed specifically for distributed clouds. The vendor recently extended that fabric to NVIDIA’s BlueField data processing units (DPUs) to reduce the workload on central processing units (CPUs) across distributed infrastructure.
Approximately 25 percent of workloads will be in the public cloud by 2023, while the majority (75 percent) of workloads will remain in private (non-hyperscale) environments, according to data in Pluribus’ 2021 State of Data Center Networking report. The report found that the top two challenges of cloud networking are network architecture complexity and network operations complexity, both due to fragmented, incomplete solutions.
NVIDIA and Pluribus Share a Common Vision
Pluribus has been working with NVIDIA for the past year to bring its vision for “unified cloud networking” to life. That is, to deliver unified, simplified, secure networking across distributed clouds. The foundation is Pluribus’ Unified Cloud Fabric, the next phase of its Adaptive Cloud Fabric, which provides unified underlay and overlay networking with built-in visibility and software-defined networking (SDN) automation.
Unified Cloud Fabric is powered by Netvisor ONE, Pluribus’ Linux-based network operating system. Pluribus has ported Netvisor ONE to BlueField, providing a common OS across switches and DPUs. As a result, storage, networking, security, and management functions can be offloaded from server CPUs onto the DPU.
DPUs Lessen the Load on Servers
“The DPU is like a mini server and switch,” Mike Capuano, Pluribus’ CMO, told ZK Research in an interview.
“It’s doing the same things that have always been done on a top-of-rack switch, but now in a compact form factor that lives in the server, and with even more processing power and hardware acceleration for offloading network and security functions than you will find in most switches. There’s so much power in this little package now. We can do all these sophisticated things in a much more highly distributed way.”
Having a common OS across switches and DPUs provides a single point of management from any node in the network. For example, if an organization decides to deploy a new service such as a virtual local area network (VLAN), it can do so from any node in the fabric. The service is propagated throughout the network, independent of any management performed on the server itself.
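The single-point-of-management idea can be sketched conceptually: a service is defined once at any node, and the fabric propagates it to every other node. The toy model below is purely illustrative; the class and method names are assumptions, not the Netvisor ONE API.

```python
# Toy model of fabric-wide service propagation -- illustrative only,
# not the Netvisor ONE API.
class FabricNode:
    def __init__(self, name):
        self.name = name
        self.services = {}          # service name -> config

class Fabric:
    """Any node can define a service; the fabric pushes it to all nodes."""
    def __init__(self, nodes):
        self.nodes = nodes

    def deploy(self, service, config):
        for node in self.nodes:     # propagate to every switch/DPU
            node.services[service] = config

fabric = Fabric([FabricNode(n) for n in ("leaf1", "leaf2", "dpu-srv01")])
fabric.deploy("vlan-100", {"id": 100, "scope": "fabric"})
# Every node now carries the same service definition.
```

The point of the sketch is the operational model: one deploy call, no box-by-box configuration, and the same state on switches and DPU-equipped servers alike.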
When networking runs in the DPU, no switch or other network hardware is needed beyond the DPU in each server, so the fabric can extend to any environment where an organization deploys a server. An organization with hundreds of DPUs deployed can therefore roll out services quickly rather than configuring each box manually.
With this approach, Pluribus wants to push networking functions out to DPUs in servers to create a true zero trust environment, where segmentation is enforced at the individual application level without sacrificing performance or user experience. Existing virtual firewalls that bring security closer to apps are costly and degrade CPU performance. The DPU approach is a better way of enabling distributed security and networking, Capuano said.
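To make per-application segmentation concrete, here is a minimal sketch of the kind of default-deny, allow-list check a DPU-resident enforcement point might apply to each flow. This is a conceptual illustration, not Pluribus or NVIDIA code; the application names, ports, and function are all assumptions.

```python
# Conceptual zero-trust flow check: default-deny, with explicit
# per-application allow rules. Not a real DPU API -- illustrative only.
ALLOW_RULES = {
    # (source app, destination app, destination port)
    ("web-frontend", "orders-api", 8443),
    ("orders-api", "orders-db", 5432),
}

def flow_permitted(src_app, dst_app, dst_port):
    """Default-deny: a flow passes only if an explicit rule matches."""
    return (src_app, dst_app, dst_port) in ALLOW_RULES
```

Because every flow is denied unless an application-level rule allows it, a compromised front end cannot reach the database directly: `flow_permitted("web-frontend", "orders-db", 5432)` returns `False`, blocking the lateral move.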
“There are many solutions that protect against hackers who get inside perimeter security. Right now, there are appliance-based models and pure software models, and both have tradeoffs. We think we can do a better job,” Capuano added.
DPUs Deliver Better Cost Efficiencies
Organizations that adopt a DPU model can avoid software licenses and a proliferation of hardware appliances, since networking and security functions run on the DPU rather than the CPU. Industry leaders such as NVIDIA and Amazon have estimated that DPUs reduce the load on CPUs by 25 to 30 percent. DPUs also provide consistent networking for any workload or virtualization environment.
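The cited 25-to-30-percent figure translates directly into capacity: if infrastructure tasks consume that share of each CPU, offloading them to a DPU frees it for applications. A back-of-envelope calculation makes the effect visible (the 27 percent midpoint and the fleet size are assumptions for illustration only):

```python
import math

# If infrastructure functions consume ~27% of each CPU (midpoint of the
# 25-30% range cited in the article), offloading them to a DPU frees
# that capacity for application workloads.
overhead = 0.27
servers_without_dpu = 100           # illustrative fleet size, not sourced

# Each non-DPU server delivers only (1 - overhead) of its capacity to
# applications; with DPUs, the full CPU is available.
app_capacity = servers_without_dpu * (1 - overhead)
servers_with_dpu = math.ceil(app_capacity)
print(servers_with_dpu)             # 73 -- same app capacity, fewer servers
```

Under these assumptions, 73 DPU-equipped servers deliver the application capacity that previously required 100, which is where the cost efficiency comes from.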
However, most organizations will not have DPUs deployed everywhere in the next few years, or maybe ever. That’s where Pluribus’ concept of unified networking comes into play, delivering a common network fabric and operating model across both switches and servers via DPUs.
Capuano said several customers have already committed to using the solution. In one customer use case, a provider of integrated solutions for telecom companies has developed its own virtualized application stack to transfer data out of satellite ground stations. The provider is able to offload the virtualized software-based networking functions onto DPUs for better performance.
Pluribus is starting early field trials of its unified cloud networking solution in late April. The ultimate goal is to tackle fragmented environments by unifying networking across both switches and servers while providing distributed security and visibility.