WDC’s Top-to-Bottom Storage Fabric Approach

Western Digital launched a storage fabric architecture in August that aims to address several shortcomings of existing storage systems, and WDC is taking a number of steps to promote the architecture's adoption into mainstream storage.  It's an interesting approach that The SSD Guy thought was worth discussing.

The company is working to address the problem of stranded storage resources in the data center.  Conventional hyperconverged infrastructure (HCI) systems suffer from this issue, which leads to low resource utilization through excessive overprovisioning; that is not only costly, it also wastes energy.  The standard way to resolve this is to move to a disaggregated system with shared storage over a SAS interface.  While this is an improvement, it still comes with a number of constraints.  Servers communicate with shared storage (SSD or HDD) over SAS, which limits the network to six devices and requires storage to be hard-assigned to a server.  Much of this stems from the SAS interface itself, and a move to NVMe could loosen these restrictions.  SAS also has a more limited address range than NVMe.

Although SAS-based disaggregation strands fewer resources than HCI, it still lacks the flexibility to avoid stranding altogether.  SAS is also approaching the end of its life, with no new upgrades planned beyond SAS 4.
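To put rough numbers on the stranding problem, here's a back-of-the-envelope Python sketch (the per-server demands and drive size are made up for illustration, not WDC figures) comparing whole-drive hard assignment against a shared pool:

```python
import math

# Illustrative only: hypothetical per-server demands, not measurements of any real system.
servers_demand_tb = [4, 30, 9, 22, 2, 17]   # capacity each server actually needs, in TB
drive_tb = 8                                # capacity of one drive, in TB

# HCI-style hard assignment: each server is provisioned with whole drives sized
# for its own demand, so the leftover space on every box is stranded.
hci_drives = sum(math.ceil(d / drive_tb) for d in servers_demand_tb)

# Shared pool: demand is aggregated first and drives are provisioned against
# the total, so slack can be shared across servers.
pooled_drives = math.ceil(sum(servers_demand_tb) / drive_tb)

total_demand = sum(servers_demand_tb)
print(f"demand:        {total_demand} TB")
print(f"hard-assigned: {hci_drives} drives, {total_demand / (hci_drives * drive_tb):.0%} utilized")
print(f"pooled:        {pooled_drives} drives, {total_demand / (pooled_drives * drive_tb):.0%} utilized")
```

With these made-up numbers the hard-assigned configuration needs 14 drives running at about 75% utilization, while a shared pool covers the same demand with 11 drives at about 95%.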

Solution: Composable Disaggregated Infrastructure

A composable disaggregated infrastructure (CDI) gets past these issues by using NVMe over Fabrics (NVMe-oF) over Ethernet rather than SAS.  This removes the limits on the number of devices that can be connected to the network, and it is both more scalable and far more flexible than its SAS-based counterpart.
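The "composable" part means that capacity is attached to and detached from servers in software as demand changes, rather than being cabled to a single host.  Here's a minimal sketch of that idea in Python (the classes and numbers are hypothetical, not any real CDI management API):

```python
# Hypothetical sketch of the composability idea, not a real CDI management API.
from dataclasses import dataclass, field

@dataclass
class StoragePool:
    """A shared pool of NVMe-oF capacity that servers draw from on demand."""
    free_tb: float
    allocations: dict = field(default_factory=dict)   # server -> TB attached

    def attach(self, server: str, tb: float) -> None:
        """Compose capacity to a server over the fabric; no recabling needed."""
        if tb > self.free_tb:
            raise RuntimeError("pool exhausted")
        self.free_tb -= tb
        self.allocations[server] = self.allocations.get(server, 0) + tb

    def detach(self, server: str) -> None:
        """Return a server's capacity to the pool for another server to use."""
        self.free_tb += self.allocations.pop(server, 0)

pool = StoragePool(free_tb=368)      # e.g. a shelf of shared NVMe SSDs
pool.attach("analytics-01", 40)
pool.attach("db-07", 12)
pool.detach("analytics-01")          # freed capacity is immediately reusable
print(pool.free_tb, pool.allocations)
```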

Currently-available systems that implement this architecture use off-the-shelf DPUs or HBAs running the protocol in custom firmware.  While that's certainly a big step in the right direction, the firmware adds latency.

Photo of PCIe RapidFlex C2000 bridge card

WDC introduced its own solution, the RapidFlex C2000 Ethernet-to-PCIe bridge adapter, photographed above, which is based on a custom ASIC (below) called the RapidFlex A2000 that performs all of these tasks in a hardwired state machine rather than in firmware.  Not only does this accelerate the interface by reducing latency, it also consumes much less power than a programmable DPU.

Photo of packaged RapidFlex A2000 ASIC

The new RapidFlex supports sixteen lanes of PCIe 4.0.  On the network side, its dual-port 100Gb/s Ethernet supports up to 256 network clients.  It's a good bandwidth match, because two 100Gb/s Ethernet channels are roughly equivalent in speed to sixteen lanes of PCIe 4.0.
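That bandwidth claim is easy to sanity-check with rough numbers (this sketch ignores protocol overhead, so treat it as ballpark only):

```python
# Back-of-the-envelope check of the bandwidth match; ignores protocol overhead.
pcie4_per_lane_gbps = 16 * (128 / 130)     # PCIe 4.0: 16 GT/s with 128b/130b encoding
pcie4_x16_gbps = 16 * pcie4_per_lane_gbps  # ~252 Gb/s of raw link bandwidth
ethernet_gbps = 2 * 100                    # dual-port 100 Gb/s Ethernet

print(f"PCIe 4.0 x16 : ~{pcie4_x16_gbps:.0f} Gb/s")
print(f"2 x 100GbE   : {ethernet_gbps} Gb/s")
```

Two 100Gb/s ports deliver 200Gb/s, while a PCIe 4.0 x16 link tops out at roughly 250Gb/s of raw bandwidth, so neither side of the bridge badly bottlenecks the other.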

The RapidFlex design covers both ends of the interface, so a C2000 card can be installed in both the initiator (server) and the target (storage).  With a C2000 at each end of the channel there's no need to test for interoperability issues either.  And if the SAN is built out of NVMe SSDs, the entire data transfer stays in a single protocol, bypassing the delays of a conversion like the NVMe-to-SAS translation that a SAS-based SAN would require.

WDC’s C2000-Based Storage Array

The company has put together a high-availability storage array that takes advantage of this.  Called the OpenFlex Data24, this JBOF system pairs six RapidFlex C2000 boards with 24 of WDC's new PCIe 4.0 dual-port NVMe SSDs, the Ultrastar DC SN655.  The array communicates over 100Gb/s Ethernet, either directly or through a switch, using either RoCE v2 or TCP.

OpenFlex Data24 Array, a 2U chassis with 24 SSDs showing in the front.
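For readers who want a feel for what attaching a server to an NVMe-oF target like this involves, the sketch below drives the standard Linux nvme-cli tool from Python.  The address and NQN are placeholders, and it assumes nvme-cli is installed and the nvme-tcp (or nvme-rdma, for RoCE v2) kernel module is loaded:

```python
# Sketch of attaching a Linux host to an NVMe-oF target with nvme-cli.
# The address and NQN are placeholders; run as root with nvme-cli installed.
import subprocess

TARGET_ADDR = "192.168.10.20"                       # hypothetical target port address
TARGET_NQN = "nqn.2014-08.org.example:nvme-target"  # hypothetical subsystem NQN
TRANSPORT = "tcp"                                   # or "rdma" for RoCE v2

# Ask the target which NVMe subsystems it exposes.
subprocess.run(
    ["nvme", "discover", "-t", TRANSPORT, "-a", TARGET_ADDR, "-s", "4420"],
    check=True,
)

# Connect to one subsystem; its namespaces then appear as local /dev/nvmeXnY devices.
subprocess.run(
    ["nvme", "connect", "-t", TRANSPORT, "-n", TARGET_NQN, "-a", TARGET_ADDR, "-s", "4420"],
    check=True,
)
```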

A Plan to Penetrate

Western Digital is putting in the effort to get the RapidFlex system adopted quickly by offering it in several different ways.  Not only does the company sell the OpenFlex Data24 storage array and the C2000 bridge, it also sells the A2000 chip and will license the A2000 IP to companies that want to incorporate it into their own SSDs, even if those companies compete against WDC.  Furthermore, WDC is leaning on standards and an open ecosystem to recruit partners to the campaign, and it has set up an Open Composability Lab in Colorado Springs, Colorado, to help accelerate the architecture's adoption.

WDC says that it wants to be the NVMe-oF ecosystem leader.  The company is certainly covering all the bases, so it will be interesting to see whether it achieves the level of adoption it's aiming for.
