I have been receiving questions lately from people who are puzzled when companies use different parameters than their competitors to specify the endurance of their SSDs. How do you compare one against the other? Some companies even switch from one parameter to another to define the endurance of different SSDs within their product line.
I have found that Intel uses three different endurance measures for its products: DWPD (drive writes per day), TBW (terabytes written), and GB/day.
There’s no real difference between these measures – each is a way of stating how many times each of the SSD’s locations can be overwritten before the drive has gone past its warranted life.
The relationships between these three measures are illustrated in this post’s graphic. You can click on it to see an expanded version. It’s all pretty simple. We’ll spell out the relationships in detail below, but in brief, if you want to compare Continue reading
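The arithmetic behind those relationships is simple: GB/day is DWPD multiplied by the drive’s capacity, and TBW is that daily figure accumulated over the warranty period. A minimal sketch (assuming a 365-day year and decimal terabytes, i.e. 1TB = 1,000GB):

```python
def dwpd_to_gb_per_day(dwpd, capacity_gb):
    # GB written per day = drive writes per day x drive capacity
    return dwpd * capacity_gb

def dwpd_to_tbw(dwpd, capacity_gb, warranty_years):
    # Total terabytes written over the full warranty period
    return dwpd * capacity_gb * warranty_years * 365 / 1000

# Example: a 480GB SSD rated at 1 DWPD with a 5-year warranty
print(dwpd_to_gb_per_day(1, 480))   # 480 GB/day
print(dwpd_to_tbw(1, 480, 5))       # 876.0 TBW
```

Running the conversion in both directions is a quick way to put two competing spec sheets onto the same footing.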
A new and highly-efficient error correction scheme has recently been revealed by a joint university research team. The SSD Guy has learned that this largely-overlooked research, performed by a cross-university team from University of North by Northeast Wales in the UK (UN-NeW) and Poland’s Trzetrzelewska University, could bring great economies to SSD manufacturers and all-flash array (AFA) companies.
Dr. Peter Llanfairpullguryngyllgogeryohuryrndrodullllantysiliogogogoch of UN-NeW, who generally shortens his name to Llanfairpullguryngyll, and Dr. Agnieszka Włotrzewiszczykowycki of Trzetrzelewska University have determined that today’s standard ECC engines can be dramatically improved upon, both to increase available storage for a given price and to accelerate throughput. This is achieved through the use of new and highly complex algorithms that differ radically from current ECC approaches, which are simply linear improvements upon past algorithms.
According to Dr. Włotrzewiszczykowycki: “The beauty of semiconductors is that Moore’s Law not only allows Continue reading
Only a week after announcing its Optane enterprise SSDs, Intel has launched M.2-format Optane SSDs for end users. It appears that we are at the onset of an Optane surge.
These SSDs communicate over the PCIe bus, bringing more of 3D XPoint’s performance to the user than a SATA interface would.
Pricing is $44 for a 16GB module and $77 for 32GB. That’s $2.75 and $2.40 (respectively) per gigabyte, or about half the price of DRAM. Intel says that these products will ship on April 24.
What’s most interesting about Intel’s Optane pitch is that the company appears to be telling the world that SSDs are no longer important with its use of the slogan: “Get the speed, keep the capacity.” This message is designed to directly address the quandary that faces PC buyers when considering an SSD: Do they want an SSD’s speed so much that they are willing to accept either Continue reading
With the tagline: “Bringing intelligence to storage” start-up NGD Systems, formerly known as NexGen Data, has announced a 24 terabyte SSD that the company claims to be the highest-capacity PCIe/NVMe device available.
The read-optimized Catalina SSD employs a lot of proprietary NGD technology: variable-rate LDPC error correction, unique DSP (digital signal processing) algorithms, and an “Elastic” flash translation layer (FTL), all embodied in an NGD-proprietary controller. This proprietary technology allows Catalina to offer enterprise performance and reliability while using TLC flash and less DRAM than other designs.
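NGD’s variable-rate LDPC implementation is proprietary, but the parity-check principle underlying LDPC codes can be sketched generically: a codeword is valid only if every row of the parity-check matrix sums to zero mod 2, and a nonzero syndrome flags an error. The tiny matrix below is purely illustrative; real LDPC matrices are very large and sparse:

```python
# Toy parity-check sketch (not NGD's actual scheme): each row of H is
# one parity check over the six codeword bits.
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def syndrome(codeword):
    # Zero syndrome -> all checks satisfied, no error detected.
    return [sum(h * c for h, c in zip(row, codeword)) % 2 for row in H]

valid   = [1, 0, 0, 1, 0, 1]   # satisfies all three checks
corrupt = [1, 1, 0, 1, 0, 1]   # same word with bit 1 flipped
print(syndrome(valid))    # [0, 0, 0] -> clean read
print(syndrome(corrupt))  # [1, 1, 0] -> error detected
```

A “variable-rate” code adjusts how many such checks protect the data, spending less overhead on healthy flash blocks and more on worn ones.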
NGD claims that the product is already shipping and is being qualified by major OEMs.
Based on some of the company’s presentations at past years’ Flash Memory Summits, the controller has been carefully balanced to optimize cost, throughput, and heat. This last is a bigger problem than most folks would imagine. At the 2013 Hot Chips conference a former Violin Memory engineering manager told the audience Continue reading
This week Intel announced the Optane SSD DC P4800X Series, new enterprise SSDs based on the company’s 3D XPoint memory technology which Intel says is the first new memory technology to be introduced since 1989. The technology was introduced to fill a price/performance gap that might impede Intel’s sales of high-performance CPUs.
Intel was all aglow with the promise of performance, claiming that the newly-released SSDs offer: “Consistently amazing response time under load.”
Since the early 1990s Intel has understood that platform performance needs to keep pace with the ongoing performance increases of its new processors. A slow platform will limit the performance of any processor, and if customers don’t see any benefit from purchasing a more expensive processor, then Intel will be unable to keep its processor prices high.
Recently NAND flash SSDs have helped Intel to improve the platform’s speed, as did the earlier migration of Continue reading
SSDs use a huge number of internal parameters to achieve a tricky balance between performance, wear, and cost. The SSD Guy likes to compare this to a recording studio console like the one in this post’s graphic to emphasize just how tricky it is for SSD designers to find the right balance. Imagine trying to manage all of those knobs! (The picture is JacoTen’s Wikipedia photo of a Focusrite console.)
Vendors who produce differentiated SSDs pride themselves on their ability to fine-tune these parameters to achieve better performance or endurance than competing products.
About a year ago I suggested to the folks at NVMdurance that they might consider applying their machine learning algorithm to this problem. (The original NVMdurance product line was described in a Memory Guy post a while ago.) After all, the company makes a machine learning engine that tunes the numerous internal parameters of a NAND flash chip to extend the chip’s life while maintaining the specified performance. SSD management would be a natural use of machine learning since both SSDs and NAND flash chips currently use difficult and time-consuming manual processes to find the best mix of parameters to drive the design.
Little did I know that NVMdurance’s researchers Continue reading
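NVMdurance’s actual engine is proprietary, but the core idea – automatically searching a large parameter space against an objective instead of tuning knobs by hand – can be illustrated with the simplest possible search loop. The knob names, ranges, and scoring function below are entirely invented for illustration:

```python
import random

# Hypothetical tuning "knobs" with allowed integer ranges; real NAND
# parameters and their interactions are far more complex than this.
KNOBS = {
    "program_voltage_step": (1, 10),
    "erase_pulse_width":    (1, 20),
    "read_retry_levels":    (1, 8),
}

def score(cfg):
    # Stand-in objective: reward endurance, penalize slower settings.
    endurance = cfg["erase_pulse_width"] * 2 - cfg["program_voltage_step"]
    speed_penalty = cfg["read_retry_levels"] * 0.5
    return endurance - speed_penalty

def random_search(trials=1000, seed=0):
    # Sample random configurations and keep the best one seen so far --
    # a baseline that smarter optimizers (and ML engines) improve upon.
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {k: rng.randint(lo, hi) for k, (lo, hi) in KNOBS.items()}
        if score(cfg) > best_score:
            best_cfg, best_score = cfg, score(cfg)
    return best_cfg, best_score

best_cfg, best_score = random_search()
print(best_cfg, best_score)
```

The appeal of automating this is exactly the mixing-console problem above: once there are dozens of interacting knobs, no human can exhaustively explore the combinations.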
Sometimes it’s enlightening to compare several viewpoints on similar data. At yesterday’s SNIA Persistent Memory Summit a number of presentations provided interesting overlapping views on certain subjects.
One of particular interest to The SSD Guy was latency vs. IOPS. Tom Coughlin of Coughlin Associates and I presented the findings from our recently-published IOPS survey report and in Slide 19 displayed the basic chart behind this post’s graphic (click to enlarge, or, better yet, right-click to open in a new tab). This chart compares the IOPS our respondents said they need for the storage in their most important application against the latency they required from that storage. For comparison’s sake we added a reference column on the left to roughly illustrate the latency of various standard forms of storage and memory.
You can see that we received a great variety of inputs spanning a very wide range of IOPS and latency needs, and that these didn’t all line up as neatly as we would have anticipated. One failing of this chart format is that it doesn’t account for multiple replies at the same IOPS/latency combination: had we been able to include that, the chart would have shown a clearer trendline running from the upper left to the lower right. Instead we have a band that broadly follows that trend.
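The duplicate-reply problem is straightforward to fix in principle: collapsing identical (IOPS, latency) replies into weighted points – for, say, a bubble chart whose marker size reflects the reply count – makes the underlying trend visible. A small sketch with made-up survey data:

```python
from collections import Counter

# Hypothetical survey replies as (IOPS, latency-in-microseconds)
# pairs; repeats represent respondents choosing the same combination.
replies = [(10_000, 10), (10_000, 10), (10_000, 10),
           (1_000, 100), (1_000, 100),
           (100, 1_000), (100, 1_000)]

# Collapsing duplicates yields a weight per point, suitable for a
# bubble chart where marker size reflects the number of replies.
weighted = Counter(replies)
for (iops, latency), count in sorted(weighted.items()):
    print(f"{iops:>6} IOPS @ {latency:>5} us: {count} replies")
```

With weights attached, the densest bubbles would trace the top-left-to-lower-right trend that the flat scatter only hints at.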
Two other speakers presented the IOPS and latency that could be Continue reading
On January 12 IBM announced some very serious upgrades to its DS8000 series of storage arrays. Until this announcement only the top-of-the-line model, the IBM System Storage DS8888, was all-flash while the less expensive DS8886 and DS8884 sported a hybrid flash + HDD approach. The new models of the DS8886 and DS8884 are now also all-flash.
But that’s not all: Every model in this product family has been upgraded.
The original DS8000 systems used a module called the High Performance Flash Enclosure (HPFE) for any flash they included, while these newer models are all based on HPFE Gen 2. While the original HPFE was limited to a maximum capacity of 24TB in a 1U space, the larger 4U HPFE Gen 2 can be configured with as much as 153.6TB, for more than six times the storage of the previous generation. By making this change, and by optimizing the data path, the Gen 2 nearly doubles read IOPS to 500K and more than triples read bandwidth to 14GB/s. Write IOPS in the Gen 2 have been increased 50% to 300K, while write bandwidth has been increased by nearly 4x to 10.5GB/s.
This kind of performance opens new Continue reading
Micron has announced a new line of Enterprise SSDs that it has named the 5100 family. The three members of the family are designated by different suffixes: 5100 ECO, 5100 PRO, and 5100 MAX, as listed in the table below.
The three models support the same maximum read IOPS performance, but have a wide range of write IOPS figures, endurance (measured in DWPD = Drive Writes per Day), and maximum capacities.
All of these SSDs are based on Micron’s 3-bit 3D NAND. Micron has been aggressively ramping its 3D NAND technology since it began shipments in earnest last June.
The three SSD models are designed using the same fundamental firmware architecture, which Micron has named FlexPro, to yield consistent performance and reliability across the family, and in the hope that customers will be able to qualify all three models in a single effort, which would provide one more reason for users to source their Continue reading
Yesterday IBM unveiled a sweeping update of its existing flash storage products. These updates cover a range of products, including IBM Storwize All Flash arrays: V7000F, V7000 Gen2+, and V5030F, the FlashSystem V9000, the IBM SAN Volume Controller (SVC), and IBM’s Spectrum Virtualize Software.
The company referred to this effort as part of a: “Drumbeat of flash storage announcements.” IBM has a stated goal of providing its clients with: “The right flash for the right performance at the right price.”
IBM’s representatives explained that the updates were made possible by the fact that the prices of flash components have been dropping at a rapid pace while reliability is on the rise. The SSD Guy couldn’t agree more.
Here’s what IBM announced:
Starting from the low end and moving up, the V5030F entry-level/midrange array is an Continue reading