infoTECH Feature

December 11, 2019

How the RDMA Standards Battle Was Won

By Special Guest
John Kim, Director of Storage Marketing, Mellanox Technologies

Remote Direct Memory Access (RDMA), the ability to access memory on a remote machine without interrupting the work of its CPU, has many important uses. By allowing server-to-server data movement directly between application memory spaces, it plays a useful role in data center and high-performance computing (HPC) environments, as well as in managing Big Data and keeping today's complex cloud models running smoothly. Essentially, RDMA is an excellent way for enterprises to boost the performance of critical applications without involving the network software stack.
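As an illustrative analogy only: real RDMA programming goes through the Verbs API against RDMA-capable NICs, which is beyond a short snippet, so the sketch below uses Python's shared memory to mimic the core idea of one party reading another's registered buffer directly, with no send/receive call on the owner's side. The names here (`rdma_demo`, the buffer sizes) are hypothetical.

```python
from multiprocessing import shared_memory

# "Server" side: register a memory region and place data in it.
# In real RDMA this corresponds to pinning a buffer with ibv_reg_mr()
# and advertising its address and rkey to the peer.
server_buf = shared_memory.SharedMemory(create=True, size=64, name="rdma_demo")
server_buf.buf[:5] = b"hello"

# "Client" side: attach to the same region and read it directly --
# no recv() on the client, no send() on the server, no copy through
# the server's network stack. This mimics an RDMA READ operation.
client_view = shared_memory.SharedMemory(name="rdma_demo")
data = bytes(client_view.buf[:5])
print(data)  # b'hello'

# Teardown (in real RDMA: deregister the memory region, destroy the queue pair).
client_view.close()
server_buf.close()
server_buf.unlink()
```

The point of the analogy is the absence of per-message CPU work on the data owner's side; the real mechanism, of course, crosses machines over the network rather than sharing local memory.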

So far, so simple. Until recently, however, there was doubt about which of several competing standards in this field would emerge victorious. On the surface at least, it looked like an even contest: RoCE versus iWARP on the Ethernet side, and InfiniBand versus Omni-Path Architecture on the HPC front.

RoCE, or RDMA over Converged Ethernet, is a standard that enables RDMA traffic to travel over an Ethernet network. iWARP is a rival standard that also allows an application to read or write data from or to the memory of another application. Meanwhile, the veteran InfiniBand protocol, which allows the rapid movement of data in an HPC environment, has been facing a challenge from Omni-Path, Intel's version of the same.

“Vendors and customers would regularly ask me which way the contest was going, and which of the competing standards had the best technical merits and vendor support,” says John Kim, Director of Storage Marketing at Mellanox Technologies, a vendor of end-to-end Ethernet and InfiniBand interconnect solutions.

By the middle of 2019, he says, those customers had their answer. By that stage, it had become clear that RoCE was now the de facto standard for Ethernet RDMA, and that InfiniBand had survived as the standard of choice in the HPC arena. The Battle Royale was over.

“All new high-performance network cards now support RoCE, and nobody is talking seriously about other RDMA options for Ethernet,” claims Kim.

But how did this happen? How did such a clear winner emerge from what many saw as evenly matched options?

Among the reasons, no doubt, are RoCE's strengths from a technology standpoint. Designed from the ground up to be efficient, scalable, and flexible enough to run on different types of Ethernet network, it delivers lower latency and lower CPU utilization than other Ethernet RDMA options. With the right adapters, RoCE can run on any Ethernet network and any switch without changes.

RoCE, in its routable v2 form, runs on top of UDP/IP, using an efficient protocol stack without the overhead of TCP and without the awkward TCP offload engines typically required to do RDMA over TCP. That is one of the great disadvantages of the other Ethernet RDMA standard: iWARP relies on TCP, which is slow and has unpredictable, highly variable latency when running in the kernel.
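To make that layering concrete: RoCE v2 encapsulates the InfiniBand transport headers in a plain UDP/IP datagram (the IANA-assigned UDP destination port for RoCE v2 is 4791), so it routes like any other UDP traffic while sidestepping TCP's in-kernel state machine. The sketch below builds a simplified mock of that encapsulation; the Base Transport Header layout shown is abbreviated and the opcode value is illustrative, not a wire-accurate implementation.

```python
import struct

ROCEV2_UDP_PORT = 4791  # IANA-assigned UDP destination port for RoCE v2


def mock_bth(opcode: int, dest_qp: int, psn: int, payload: bytes) -> bytes:
    """Build a *simplified* InfiniBand Base Transport Header plus payload.

    The real BTH is 12 bytes and carries more fields (pad count, partition
    key, solicited-event bit, etc.); this mock keeps only an opcode, the
    destination queue pair, and a packet sequence number to show the idea.
    """
    bth = struct.pack("!B3xII", opcode, dest_qp & 0xFFFFFF, psn & 0xFFFFFF)
    return bth + payload


def mock_udp_datagram(src_port: int, payload: bytes) -> bytes:
    """Wrap the transport payload in a minimal UDP header (checksum omitted)."""
    length = 8 + len(payload)  # UDP header is 8 bytes
    return struct.pack("!HHHH", src_port, ROCEV2_UDP_PORT, length, 0) + payload


# An RDMA-write-style message, carried as ordinary, routable UDP traffic:
datagram = mock_udp_datagram(0xC000, mock_bth(0x0A, dest_qp=17, psn=1, payload=b"data"))
dst_port = struct.unpack("!H", datagram[2:4])[0]
print(dst_port)  # 4791
```

Because the outer headers are ordinary UDP and IP, standard routers and switches forward RoCE v2 traffic without knowing anything about RDMA, which is what lets it run on existing Ethernet fabrics.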

Since 2013, RoCE has also enjoyed the most robust adoption amongst cloud customers deploying RDMA. Microsoft Azure, Baidu, and Alibaba use RoCE. And it also arguably enjoys the most reliable support within the vendor community.

“All major NIC vendors feel it’s crucial to support RoCE, and only a few bother supporting any other method,” says Kim. “All the newer NICs, SmartNICs, and Ethernet SSD solutions that enable RDMA support only RoCE. This is reflective of both the broad customer adoption and the technological superiority of RoCE. The support from so many network vendors ensures customers have a diverse choice of hardware and are secure in the knowledge that RoCE is the most broadly-supported RDMA solution.”

In addition to end-users and vendors adopting RoCE, several vital applications also rely on it. RoCE software APIs are compatible with InfiniBand, the original granddaddy of RDMA standards, so it’s easy for any storage, database, or AI/ML solution that started out supporting InfiniBand to quickly add support for RoCE as well.

Microsoft, VMware, Oracle, and Red Hat all support it, as do flash storage vendors such as Datera, Excelero, IBM, Lightbits Labs, NetApp, Pavilion Data, Plexistor, Pure, StarWind Software, Toshiba Memory (now renamed Kioxia), Vast Data, Weka-IO, Western Digital, Zadara Systems, and others.

Lastly, the growth of artificial intelligence (AI) and machine learning (ML) is also accelerating the adoption of RoCE. AI and ML workloads are most often distributed and require low-latency access between server nodes and from servers to storage. They also need to save as much CPU power as possible for the actual applications. This means networking must never be the bottleneck, and the network should offload as much of the data-movement work as possible so that valuable CPU and GPU cycles aren’t burned on it.

Edited by Ken Briodagh
