SSDs use a huge number of internal parameters to achieve a tricky balance between performance, wear, and cost. The SSD Guy likes to compare this to a recording studio console like the one in this post’s graphic to emphasize just how tricky it is for SSD designers to find the right balance. Imagine trying to manage all of those knobs! (The picture is JacoTen’s Wikipedia photo of a Focusrite console.)
Vendors who produce differentiated SSDs pride themselves on their ability to fine-tune these parameters to achieve better performance or endurance than competing products.
About a year ago I suggested to the folks at NVMdurance that they might consider applying their machine learning algorithm to this problem. (The original NVMdurance product line was described in a Memory Guy post a while ago.) After all, the company makes a machine learning engine that tunes the numerous internal parameters of a NAND flash chip to extend the chip’s life while maintaining the specified performance. SSD management would be a natural use of machine learning, since the designers of both SSDs and NAND flash chips currently rely on difficult, time-consuming manual processes to find the best mix of parameters to drive the design.
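To make the idea concrete, here is a minimal sketch of what automated parameter tuning can look like. This is not NVMdurance’s actual algorithm: the parameter names, ranges, and the toy endurance/throughput models below are all illustrative assumptions, and a simple random search stands in for a real machine learning engine.

```python
# Hypothetical sketch: search NAND tuning parameters for the longest
# endurance that still meets a performance floor. All parameter names,
# ranges, and models here are invented for illustration.
import random

PARAM_RANGES = {
    "program_voltage_step": (0.1, 0.5),  # volts per pulse (illustrative)
    "erase_verify_level":   (1.0, 3.0),  # volts (illustrative)
}

def endurance(params):
    # Stand-in model: gentler program pulses extend cell life.
    return 3000.0 / params["program_voltage_step"]

def throughput(params):
    # Stand-in model: gentler pulses also slow programming.
    return 400.0 * params["program_voltage_step"]

def tune(trials=1000, min_throughput=60.0, seed=0):
    """Random search: maximize endurance subject to a throughput spec."""
    rng = random.Random(seed)
    best, best_life = None, -1.0
    for _ in range(trials):
        cand = {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}
        if throughput(cand) < min_throughput:
            continue  # candidate fails the specified performance
        life = endurance(cand)
        if life > best_life:
            best, best_life = cand, life
    return best, best_life
```

A real tuner would evaluate candidates against measured chip behavior rather than toy formulas, but the structure is the same: explore the knob space, discard settings that miss the performance spec, and keep the one that maximizes life.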
Little did I know that NVMdurance’s researchers Continue reading “Managing SSDs Using Machine Learning”
Sometimes it’s enlightening to compare several viewpoints on similar data. At yesterday’s SNIA Persistent Memory Summit a number of presentations provided interesting overlapping views on certain subjects.
One of particular interest to The SSD Guy was latency vs. IOPS. Tom Coughlin of Coughlin Associates and I presented the findings from our recently-published IOPS survey report, and in Slide 19 displayed the basic chart behind this post’s graphic (click to enlarge, or, better yet, right-click to open in a new tab). This chart compares the number of IOPS our respondents said they need for the storage in their most important application with the latency they required from that storage. For comparison’s sake we added a reference column on the left to roughly illustrate the latency of various standard forms of storage and memory.
You can see that we received a great variety of inputs spanning a very wide range of IOPS and latency needs, and that these didn’t line up as neatly as we had anticipated. One shortcoming of this chart format is that it doesn’t account for multiple replies at the same IOPS/latency combination: had we been able to include those, the chart would have shown a clearer trend line running from the upper left to the lower right. Instead we have a band that broadly follows that upper-left-to-lower-right trend.
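The duplicate-reply problem described above has a straightforward fix: count how many respondents gave each (IOPS, latency) pair and use the count as a point weight (a bubble size, say). A minimal sketch, with made-up sample data standing in for the actual survey replies:

```python
# Sketch: collapse repeated (IOPS, latency) survey replies into weighted
# points so a scatter chart can show reply counts as bubble sizes.
# The sample replies below are invented for illustration.
from collections import Counter

replies = [
    (100_000, 1.0), (100_000, 1.0), (10_000, 10.0),
    (1_000_000, 0.1), (10_000, 10.0), (10_000, 10.0),
]  # (IOPS needed, latency required in ms)

weights = Counter(replies)
for (iops, latency_ms), count in sorted(weights.items()):
    print(f"{iops:>9,} IOPS @ {latency_ms:g} ms: {count} replies")
```

With the counts in hand, each point can be plotted once at a size proportional to its weight, which would make the underlying trend line much easier to see.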
Two other speakers presented the IOPS and latency that could be Continue reading “Latency, IOPS & NVDIMMs”
On January 12 IBM announced some very serious upgrades to its DS8000 series of storage arrays. Until this announcement, only the top-of-the-line model, the IBM System Storage DS8888, was all-flash, while the less expensive DS8886 and DS8884 sported a hybrid flash + HDD approach. The new models of the DS8886 and DS8884 are now also all-flash.
But that’s not all: Every model in this product family has been upgraded.
The original DS8000 systems used a module called the High Performance Flash Enclosure (HPFE) for any flash they included, while these newer models are all based on HPFE Gen 2. While the original HPFE was limited to a maximum capacity of 24 TB in a 1U space, the larger 4U HPFE Gen 2 can be configured with as much as 153.6 TB, for more than six times the storage of the previous generation. By making this change, and by optimizing the data path, the Gen 2 nearly doubles read IOPS to 500K and more than triples read bandwidth to 14 GB/s. Write IOPS in the Gen 2 have been increased 50% to 300K, while write bandwidth has been increased by nearly 4x to 10.5 GB/s.
This kind of performance opens new Continue reading “IBM Upgrades DS8000 Series: All Models are now All-Flash”