With the tagline “Bringing intelligence to storage,” start-up NGD Systems, formerly known as NexGen Data, has announced a 24-terabyte SSD that the company claims is the highest-capacity PCIe/NVMe device available.
The read-optimized Catalina SSD employs a lot of proprietary NGD technology: variable-rate LDPC error correction, unique DSP (digital signal processing) algorithms, and an “Elastic” flash translation layer (FTL), all embodied in an NGD-proprietary controller. This proprietary technology allows Catalina to offer enterprise performance and reliability while using TLC flash and less DRAM than other designs.
NGD claims that the product is already shipping and is being qualified by major OEMs.
Based on some of the company’s presentations at past years’ Flash Memory Summits, the controller has been carefully balanced to optimize cost, throughput, and heat. This last is a bigger problem than most folks would imagine: NAND flash writes and erases consume significant power, and some SSDs must throttle their speed based on the drive’s internal temperature. At the 2013 Hot Chips conference a former Violin Memory engineering manager told the audience that, if he had it to do all over again, he would have focused first on thermal management.
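As a rough illustration of that kind of temperature-based throttling, here is a toy model. The thresholds and rates are invented for the example, and real drives implement this logic in controller firmware rather than in host code.

```python
# A toy model of temperature-based throttling. The thresholds and rates
# below are invented for illustration; real SSDs implement this logic in
# controller firmware, not in host Python.

THROTTLE_TEMP_C = 70    # assumed temperature where slowdown begins
SHUTDOWN_TEMP_C = 85    # assumed temperature where writes stop
FULL_RATE_MBPS = 3000   # assumed full-speed write rate

def allowed_write_rate(temp_c: float) -> float:
    """Ramp the permitted write rate down linearly as the drive heats up."""
    if temp_c >= SHUTDOWN_TEMP_C:
        return 0.0                 # too hot: halt writes to protect the NAND
    if temp_c <= THROTTLE_TEMP_C:
        return FULL_RATE_MBPS      # cool enough: no throttling
    span = SHUTDOWN_TEMP_C - THROTTLE_TEMP_C
    return FULL_RATE_MBPS * (SHUTDOWN_TEMP_C - temp_c) / span

for t in (60, 70, 75, 80, 85):
    print(f"{t} C -> {allowed_write_rate(t):6.0f} MB/s")
```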
NGD claims that this SSD’s power consumption of 0.65 W/TB is the lowest of any SSD. This may have a lot to do with the fact that the drive has been designed for read-heavy workloads.
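To put that figure in perspective, 0.65 W/TB across the full 24 TB works out to roughly 15.6 W for the entire drive.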
But that’s not what The SSD Guy finds most interesting about NGD.
The introduction of the Catalina is just one step toward NGD’s ultimate goal of producing devices that offer “In-Situ Processing.” This is a new approach to computing in which a processor within the SSD (or other storage device) performs certain highly repetitive tasks right where the data is stored, rather than requiring the system to move the data to a server for processing and then back into storage. This radical departure from existing architectures offers a few very significant advantages (a rough code sketch follows the list below):
- Processing is faster since there’s less data movement over networks and SSD interfaces
- Less power is consumed in data transport
- Systems with multiple smart storage devices can process several data sets in parallel to further improve performance
- There is less bus traffic, allowing other processes to perform faster
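To make the data-movement argument concrete, here is a minimal, runnable sketch of the idea. Everything in it is hypothetical: the PlainSSD and SmartSSD classes and the offload() method are invented stand-ins for a conventional drive and a drive with an on-board processor; they are not NGD’s actual interface.

```python
# A minimal, runnable sketch of the in-situ idea. Everything here is
# hypothetical: PlainSSD and SmartSSD are invented stand-ins for a
# conventional drive and a drive with an on-board processor, not NGD's API.

class PlainSSD:
    """Conventional drive: the host must read every record to process it."""
    def __init__(self, records):
        self._records = records
        self.bytes_transferred = 0

    def read_all(self):
        # The entire data set crosses the SSD interface.
        for rec in self._records:
            self.bytes_transferred += len(rec)
            yield rec

class SmartSSD(PlainSSD):
    """Hypothetical in-situ drive: a function is pushed down to the drive's
    internal processor, and only the result crosses the interface."""
    def offload(self, func):
        result = func(self._records)   # runs "inside" the drive
        self.bytes_transferred += 8    # assume only an 8-byte answer moves
        return result

records = [b"error: disk", b"ok", b"error: net", b"ok"] * 1000

# Host-side path: pull all 4,000 records across the bus, then count.
plain = PlainSSD(records)
host_count = sum(1 for r in plain.read_all() if r.startswith(b"error"))

# In-situ path: ship the counting function to the drive instead.
smart = SmartSSD(records)
device_count = smart.offload(
    lambda rs: sum(1 for r in rs if r.startswith(b"error")))

assert host_count == device_count
print(f"host-side path moved {plain.bytes_transferred} bytes")
print(f"in-situ path moved   {smart.bytes_transferred} bytes")
```

In the host-side path every byte of the data set crosses the SSD interface; in the in-situ path only the final answer does, which is where the speed, power, and bus-traffic advantages listed above come from.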
This approach appears to be gaining momentum and is often mentioned in presentations at industry events.
Will it succeed? It’s still pretty early to tell, and there have been a few false starts over the past decade, with companies like Virident and Schooner failing to win acceptance for their dedicated appliances. The performance of such systems, though, appears to be pretty compelling. I am guessing that, after some initial hesitation, in-situ processing will eventually win out.