Sometimes it’s enlightening to compare several viewpoints on similar data. At yesterday’s SNIA Persistent Memory Summit a number of presentations provided interesting overlapping views on certain subjects.
One of particular interest to The SSD Guy was latency vs. IOPS. Tom Coughlin of Coughlin Associates and I presented the findings from our recently-published IOPS survey report, and in Slide 19 displayed the basic chart behind this post’s graphic (click to enlarge, or, better yet, right-click to open in a new tab). This chart compares the number of IOPS our respondents said they need for the storage in their most important application against the latency they required from that storage. For comparison’s sake we added a reference column on the left to roughly illustrate the latency of various standard forms of storage and memory.
You can see that we received a great variety of inputs spanning a very wide range of IOPS and latency needs, and that they didn’t all line up as neatly as we would have anticipated. One failing of this chart format is that it doesn’t account for multiple replies at the same IOPS/latency combination: had we been able to include those, the chart would have shown a clearer trendline running from the upper left to the lower right. Instead we have a band that broadly follows that trend.
Two other speakers presented the IOPS and latency that could be Continue reading
On January 12 IBM announced some very serious upgrades to its DS8000 series of storage arrays. Until this announcement only the top-of-the-line model, the IBM System Storage DS8888, was all-flash while the less expensive DS8886 and DS8884 sported a hybrid flash + HDD approach. The new models of the DS8886 and DS8884 are now also all-flash.
But that’s not all: Every model in this product family has been upgraded.
The original DS8000 systems used a module called the High Performance Flash Enclosure (HPFE) for any flash they included, while these newer models are all based on HPFE Gen 2. While the original HPFE was limited to a maximum capacity of 24TB in a 1U space, the larger 4U HPFE Gen 2 can be configured with as much as 153.6TB, more than six times the storage of the previous generation. By making this change, and by optimizing the data path, the Gen 2 nearly doubles read IOPS to 500K and more than triples read bandwidth to 14GB/s. Write IOPS in the Gen 2 have been increased 50% to 300K, while write bandwidth has been increased nearly 4x to 10.5GB/s.
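These gains are easy to sanity-check with a little arithmetic. In the sketch below, the Gen 1 IOPS figures are back-calculated from the multipliers quoted above, so treat them as rough approximations rather than official IBM specifications:

```python
# Back-of-the-envelope check of the HPFE Gen 2 gains quoted above.
# Gen 1 IOPS baselines are inferred from the stated multipliers,
# so they are approximations, not official IBM specifications.

gen1_capacity_tb = 24.0    # stated Gen 1 maximum (1U)
gen2_capacity_tb = 153.6   # stated Gen 2 maximum (4U)

gen2_read_iops = 500_000   # "nearly doubles read IOPS to 500K"
gen2_write_iops = 300_000  # "increased 50% to 300K"

capacity_gain = gen2_capacity_tb / gen1_capacity_tb
print(f"Capacity gain: {capacity_gain:.1f}x")  # 6.4x, i.e. "more than six times"

# Implied Gen 1 figures, working backward from the stated ratios:
gen1_read_iops = gen2_read_iops / 2      # ~250K if "nearly doubled"
gen1_write_iops = gen2_write_iops / 1.5  # ~200K if "increased 50%"
print(f"Implied Gen 1: ~{gen1_read_iops:,.0f} read IOPS, ~{gen1_write_iops:,.0f} write IOPS")
```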
This kind of performance opens new Continue reading
Micron has announced a new line of Enterprise SSDs that it has named the 5100 family. The three members of the family are designated by different suffixes: 5100 ECO, 5100 PRO, and 5100 MAX, as listed in the table below.
The three models support the same maximum read IOPS performance, but have a wide range of write IOPS figures, endurance (measured in DWPD = Drive Writes per Day), and maximum capacities.
All of these SSDs are based on Micron’s 3-bit 3D NAND. Micron has been aggressively ramping its 3D NAND technology since it began shipments in earnest last June.
The three SSD models are designed using the same fundamental firmware architecture, which Micron has named FlexPro, to yield consistent performance and reliability across the family, and with the hopes that customers will be able to qualify all three models in a single effort, which would provide one more reason for users to source their Continue reading
Yesterday IBM unveiled a sweeping update of its existing flash storage products. These updates cover a range of products, including IBM Storwize All Flash arrays: V7000F, V7000 Gen2+, and V5030F, the FlashSystem V9000, the IBM SAN Volume Controller (SVC), and IBM’s Spectrum Virtualize Software.
The company referred to this effort as part of a “drumbeat of flash storage announcements.” IBM has a stated goal of providing its clients with “the right flash for the right performance at the right price.”
IBM’s representatives explained that the updates were made possible by the fact that the prices of flash components have been dropping at a rapid pace while reliability is on the rise. The SSD Guy couldn’t agree more.
Here’s what IBM announced:
Starting from the low end and moving up, the V5030F entry-level/midrange array is an Continue reading
This is an update of the same survey we ran in 2012. We want to see how things have changed over the past four years.
Please click HERE and let us know what kind of storage performance you need. Even a hunch is good.
Let me back up a little – they’re not really slow. When Intel compared its standard NAND flash based PCIe SSD to a similar SSD based on 3D XPoint memory, the XPoint model ran 7-8 times faster, which is very impressive. Intel demonstrated that at the Intel Developer Forum (IDF) last August and several times since then.
But ever since 3D XPoint Memory was introduced, Intel and Micron have been boasting that it is 1,000 times as fast as NAND flash. How do you get from a 1,000-times speed advantage down to a speed improvement of only 7-8 times?
That’s what the graphic in this post will explain. The small rendition above is just Continue reading
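The short answer is that the memory chip is only one part of the SSD’s total latency. A rough model makes the point; all of the numbers below are round illustrative figures of my own, not Intel’s actual data:

```python
# Illustrative (hypothetical) latency budget showing why a 1,000x faster
# memory yields only a single-digit speedup at the SSD level.
# These are round numbers for illustration, not Intel's actual figures.

nand_media_us = 100.0                   # rough NAND read latency
xpoint_media_us = nand_media_us / 1000  # "1,000 times as fast"

overhead_us = 15.0  # assumed controller + PCIe + driver/software stack

nand_ssd_us = nand_media_us + overhead_us
xpoint_ssd_us = xpoint_media_us + overhead_us

print(f"NAND SSD:   ~{nand_ssd_us:.1f} us")    # ~115.0 us
print(f"XPoint SSD: ~{xpoint_ssd_us:.1f} us")  # ~15.1 us
print(f"System-level speedup: {nand_ssd_us / xpoint_ssd_us:.1f}x")  # ~7.6x
```

Once the media is nearly free, the fixed controller and software overhead dominates, which is why the measured SSD is “only” 7-8 times faster.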
Some time ago Objective Analysis ran nearly 300 standard benchmarks on a PC with varying amounts of flash and DRAM and found that a dollar’s worth of flash provided a greater performance boost than a dollar’s worth of DRAM once the DRAM size grew above a certain minimum (1-2GB) depending on the benchmark.
You might wonder how this could possibly be true. Everyone knows that the best way to improve any computing system’s performance is to add DRAM main memory. How could flash, which is orders of magnitude slower than DRAM, provide a bigger performance boost than DRAM?
It all makes sense if you think of the DRAM as something that is there only to make the HDD look faster. More is better, but if you can use a little less DRAM and add a large flash memory layer, then disk accesses appear to speed up even more.
The benchmark data and the price/performance findings that are Continue reading
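A simple average-access-time model illustrates why the flash layer can beat extra DRAM. The hit rates and latencies below are hypothetical assumptions chosen to make the mechanism clear, not our benchmark data:

```python
# Hypothetical average-access-time model for a DRAM + flash + HDD hierarchy.
# Hit rates and latencies are illustrative assumptions, not benchmark data.

def avg_access_us(tiers):
    """tiers: list of (hit_rate, latency_us) pairs, fastest tier first.
    Hit rates must sum to 1.0 across the whole hierarchy."""
    return sum(rate * lat for rate, lat in tiers)

# DRAM-only cache in front of an HDD: every miss pays the full HDD penalty.
dram_hdd = avg_access_us([(0.95, 0.1), (0.05, 10_000)])

# Slightly less DRAM, but a flash tier catches most of the HDD misses.
dram_flash_hdd = avg_access_us([(0.90, 0.1), (0.09, 100), (0.01, 10_000)])

print(f"DRAM+HDD:       ~{dram_hdd:.0f} us")        # ~500 us
print(f"DRAM+flash+HDD: ~{dram_flash_hdd:.0f} us")  # ~109 us
```

Even with a lower DRAM hit rate, intercepting most of the 10ms disk accesses at 100μs flash speed cuts the average dramatically.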
A couple of specifications for SSD endurance are in common use today: Terabytes Written (TBW) and Drive Writes Per Day (DWPD). Both are different ways to express the same thing. It seems that one vendor will specify endurance using TBW, while another will specify DWPD. How do you compare the two?
First, some definitions. “Terabytes Written” is the total amount of data that can be written to an SSD before it is likely to fail. “Drive Writes Per Day” tells how many times you can overwrite the entire capacity of the SSD every single day of its usable life without failure during the warranty period. Since both of these are guaranteed specifications, your drive is likely to last considerably longer than the number given by the SSD’s maker.
To convert between the two you must know the drive’s capacity and the warranty period. If the drive maker gives you TBW but you want to know DWPD you would approach it Continue reading
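The conversion itself is simple arithmetic: TBW = DWPD × capacity (TB) × 365 × warranty years, and DWPD is the same relationship run in reverse. A minimal sketch, using a made-up drive as the example:

```python
# Convert between TBW and DWPD. Both describe the same endurance,
# scaled by the drive's capacity and warranty period.
#   TBW = DWPD * capacity_TB * 365 * warranty_years

def tbw_from_dwpd(dwpd, capacity_tb, warranty_years):
    return dwpd * capacity_tb * 365 * warranty_years

def dwpd_from_tbw(tbw, capacity_tb, warranty_years):
    return tbw / (capacity_tb * 365 * warranty_years)

# Hypothetical example: a 1.6TB drive rated 3 DWPD over a 5-year warranty.
print(f"{tbw_from_dwpd(3, 1.6, 5):.0f} TBW")   # 8760 TBW
print(f"{dwpd_from_tbw(8760, 1.6, 5):.1f} DWPD")  # 3.0 DWPD
```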
From time to time IT managers ask The SSD Guy if there’s an easy way to compare SSDs made with MLC flash against those made using eMLC flash. Most folks understand that eMLC flash is a less costly alternative to SLC flash, both of which provide longer wear than standard MLC flash, but not everyone realizes that eMLC’s superior endurance comes at the cost of slower write speed. By writing to the flash more gently, the technology can be made to last considerably longer.
So how do you compare the two? OCZ introduced MLC and eMLC versions of the same SSD this week, and this provides a beautiful opportunity to explore the difference.
As you would expect, the read parameters are all identical. This stands to reason, since Continue reading
This replacement for the company’s Z-Drive 4000 series is a complete redesign with an obsession for performance. OCZ tells me that they moved from a 2-hop design to a 1-hop design by using the PMC Princeton PCIe SSD controller, and have passed the University of New Hampshire Interoperability Lab’s NVMe 1.1b compliance tests.
But how does it perform? Well, the 1-hop design helps reduce latency (which is just starting to overshadow IOPS in users’ minds), and this SSD’s latency is significantly lower than that of competing NVMe SSDs: 25-30μs, figures that OCZ tells me are very consistent, a big plus for enterprise applications. As for IOPS, the device can perform at 330K under a 70/30 read/write load.
The 6000 series is provided in both standard MLC and eMLC for those who want the security of eMLC and are willing to sacrifice a little performance to sleep better at night.
This product is a good fit for the market needs, and shows how devoted OCZ and its parent Toshiba are to providing high performance in the SSD marketplace.