I can already hear readers saying: “Wait! You can’t do that!” Well, you’re right, but the new module comes awfully close by putting the NAND behind an ASIC that bridges the DDR3 bus and the flash.
Why do this? Quite simply, because once a system reaches a certain DRAM size you get more “Bang for the Buck” by adding NAND instead. The Diablo “Memory Channel Storage” (MCS) approach supports the addition of terabytes of NAND at the cost of one socket’s worth of DRAM, and in many cases a system with less DRAM and more NAND will outperform a pure DRAM system.
A number of system administrators have already discovered this fact and use SSDs to reduce their systems’ DRAM requirements.
Moving the NAND onto the memory bus, rather than putting it behind an HDD interface, makes more of the NAND’s performance available to the system. Accesses also no longer need to traverse the software’s slow storage stack.
But what about NAND’s drawbacks: its erase-before-write requirement, its page-based (rather than byte-based) writes, its slow write speeds, and its wear mechanism? Yes, all of these pose problems, but Diablo addresses them with the ASIC, which buffers writes and manages the flash, and with a special utility that prevents the flash from being accessed the same way as the DRAM even though the two share a bus. The NAND is managed either as a new memory layer or as storage.
It’s a clever approach, and one that Objective Analysis anticipated in our 2010 report How PC NAND will Undermine DRAM (which can be purchased for immediate download from the Objective Analysis website). The Diablo DIMM is aimed at servers rather than PCs, but the argument holds for all of computing: at some point, adding NAND provides more performance per dollar than adding DRAM, and the benefits multiply as the amount of NAND grows very large. Diablo points to one installation in which 25,000 DRAM-only servers were replaced by 5,000 servers using a combination of NAND and DRAM DDR3 DIMMs. Who wouldn’t like that?