VMware Embraces DPUs to Stretch the Use of CPUs | ItSoftNews

While it is clearly early in the game, VMware has made a bunch of moves recently to ensure that DPUs and the smartNICs they enable are an equal part of enterprise networking environments of the future.

VMware is a leading proponent of using data processing units (DPUs) to free up server CPU cycles by offloading networking, security, storage, and other processes in order to rapidly and efficiently support edge- and cloud-based workloads.

Competitors, and in some cases partners, including Intel, Nvidia, AWS, and AMD, also have plans to more tightly integrate DPU-based devices into firewalls, gateways, enterprise load balancing, and storage-offload applications.

For VMware’s part, its most recent DPU moves are part of a strategy to ensure that networking and security are a priority going forward.

vSphere accommodates underlying processors

These include support for DPUs under the company’s flagship vSphere 8 virtualization and vSAN hyperconverged software packages. The idea is that vSphere is going to be the foundation for deploying and managing workloads and running them effectively and securely regardless of what the underlying processor technology is, said Tom Gillis, senior vice president and general manager at VMware. In the end, reduced CPU and memory overhead will lead to more efficient workload consolidation and better infrastructure performance, he said.

“When customers use a DPU to offload computing they save 10-to-20% of their server cores, so that’s the economic argument for using DPUs because in a high-density server environment, the higher your density, the more efficient the DPU becomes, but that’s just the beginning,” Gillis said.
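The 10-to-20% figure Gillis cites implies simple consolidation arithmetic. A minimal sketch of that math (the server and rack sizes below are assumed purely for illustration, not from VMware):

```python
def cores_freed(total_cores: int, offload_fraction: float) -> int:
    """Cores returned to workloads when networking, security, and storage
    processing is offloaded to a DPU (10-20% per Gillis)."""
    return int(total_cores * offload_fraction)

# Hypothetical rack: 20 servers with 64 cores each, 15% of cycles offloaded.
servers, cores_per_server = 20, 64
freed_per_server = cores_freed(cores_per_server, 0.15)   # 9 cores
freed_in_rack = servers * freed_per_server               # 180 cores
print(freed_per_server, freed_in_rack)
```

At this assumed density, the freed capacity amounts to nearly three servers' worth of cores, which is the "higher density, more efficient" economic argument in concrete terms.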

Under vSphere 8, another feature known as DPU-based Acceleration for NSX can move networking, load balancing, and security functions to a DPU, freeing up server CPU capacity. The system would support distributed firewalls on the DPU, amplifying the security architecture without requiring software agents. The NSX acceleration came out of Project Monterey, a VMware development effort with Nvidia, Pensando (now part of AMD), and Intel.

“Since we’re running ESX and NSX, security, and everything in the NIC, we could run everything on a bare metal server. That would let customers set these systems up in front of giant databases or Postgres servers that handle tons of traffic and require high-level security without impacting their server environment,” Gillis said.

In addition to the vSphere and network-acceleration features, VMware also announced that the AMD Pensando Distributed Services Card will be one of the first DPU-based accelerators to support VMware vSphere 8. The Pensando card includes intelligent, programmable software to support software-defined cloud, compute, networking, storage, and security services that could be rolled out in edge, colocation, or service-provider networks.

For the enterprise, Pensando could serve three primary use cases: native security in the fabric for east-west traffic, stateful IPsec NAT functions for colocation, and real-time visibility and telemetry in smart-switch environments, said Soni Jiandani, co-founder and chief business officer with Pensando. Jiandani said she expects more use cases as more smart switches are deployed and more enterprises look to bring a public-cloud experience to on-prem environments.

Security and DPUs

DPUs will open a variety of networking and security options, IDC wrote in a recent white paper about VMware and its DPU strategy:

“In the foreseeable future, a collection of DPUs, enabled by VMware’s Project Monterey, running in servers could create a unified data-center backplane. This approach could offer a consistent software-defined but hardware-controlled security and monitoring network across the entire datacenter for configuring, deploying, and managing bare metal, virtualized, and containerized workloads. It could provide a consistent and simple but no-compromises operations environment that limits the ability for ‘nonoperator-approved entities’ (humans or applications) to access the control environment and thus limit the impact of exploits on low-level hardware vulnerabilities.”

Others see great enterprise potential for the DPU systems overall.

“This is one of the most important announcements for the data center in the last five years,” said Alan Weckel, founding technology analyst for the 650 Group. “VMware is bringing cloud-level virtualization to the enterprise, and it works well with the next-generation processors from AMD and Intel, increasing core [efficiency] significantly. Enterprises can now use the DPU to offload all the virtualization layer and use all cores on the CPU for workloads.”

In addition, major enterprises with a large VMware presence will embrace the DPU to gain server efficiency, Weckel said. “At a minimum, there will be cost savings as the DPU saves CPU cores. Most enterprises will be able to deploy a more hybrid and robust compute environment, better keeping pace with the innovations occurring on the cloud side,” Weckel said.

Ultimately the data center becomes a grid of servers with smart NICs, Gillis said. Any of them could be running a bare-metal Postgres server, “or then I can reconfigure it to run a patch server, or I can have it be a 100-Gig firewall or high-performance load balancer,” he said. “You now have the cloud operating model—that’s what Amazon does. So what’s cool is that the technology and the architecture that started in the hyperscalers is moving into the enterprise.”

DPUs will also bring challenges, including bringing the accelerators to more than just the best-resourced enterprises. “The savings are there, but you need a robust and talented IT staff to implement a DPU. With the current labor shortage, not all enterprises will be able to implement the technology, and it will take a while for VARs and third parties to come up to speed,” Weckel said.
