A few years ago The SSD Guy posted an analogy that Intel’s Jim Pappas uses to illustrate the latency differences between DRAM, an SSD, and an HDD. If we take DRAM latency to be a single heartbeat, then what happens when we scale that timing up to represent SSDs and HDDs? How many heartbeats would it take to access either one, and what could you do in that time?
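The scaling is easy to work out yourself. Here is a minimal sketch; the latency figures are round illustrative assumptions (not vendor specifications), and the one-second heartbeat is likewise an assumption:

```python
# Scale storage latencies so that one DRAM access equals one heartbeat.
# All latency values below are illustrative assumptions, not measured specs.
HEARTBEAT_S = 1.0  # assume one heartbeat takes about one second

latencies_ns = {
    "DRAM": 100,           # ~100 ns
    "NVMe SSD": 100_000,   # ~100 microseconds
    "HDD": 10_000_000,     # ~10 milliseconds
}

# Heartbeats per nanosecond, anchored so DRAM = 1 heartbeat
scale = HEARTBEAT_S / latencies_ns["DRAM"]

for device, ns in latencies_ns.items():
    beats = ns * scale
    print(f"{device:8s}: {beats:>10,.0f} heartbeat(s)")
```

With these assumed numbers a single SSD access takes about a thousand heartbeats (fifteen-plus minutes) and an HDD access about a hundred thousand (more than a day), which is the point of the analogy.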
I still think it’s a pretty interesting way to make all these latency differences easier to understand.
Just recently I learned of a Rich Report video of a 2015 presentation in which Micron’s Ryan Baxter uses a different and equally interesting analogy based on tomatoes.
Tomatoes aren’t the first thing that comes to my mind when I think about SSDs, but this video may change my way of thinking!
The tomato slide, 9:30 into the presentation, is Continue reading “Comparing SSDs to Tomatoes”
My friend and associate Eden Kim of Calypso Systems has published a new white paper on real workloads for SSDs.
This is the company that has helped the Storage Networking Industry Association (SNIA) develop performance tests for SSDs that get past a question that plagues SSD users: an SSD does well when it’s new, but how will it perform after a year or two of service?
Calypso has recently published a new White Paper entitled: Datacenter Server Real World Workloads. This document analyzes real-life datacenter server workloads and performance to provide important insight into how an SSD might perform in actual environments rather than in synthesized workloads. It compares data center class SSDs against SAS HDDs to take a lot of the guessing out of issues about IOPS requirements, endurance needs, and so forth by comparing the measured activity over 24 hours of a 2,000-outlet retail chain web portal running SQL.
The tests in the paper represent a Continue reading “Getting the Most from Data Center SSDs”
Start-up NGD Systems (formerly NxGenData) has just announced the availability of an SSD with in situ processing – that is, the SSD can actually process data rather than simply store it. The new “Catalina 2” SSD is said to have the ability to run advanced applications directly on the drive.
NGD tells us that the SSD, which comes in both U.2 and AIC (PCIe add-in card) formats, is currently available for purchase.
If your memory is long enough you may recall that The SSD Guy wrote a post four years ago about something like this. At the 2013 Flash Memory Summit Micron Technology delivered a keynote detailing a research project in which they reprogrammed SSDs so that each SSD in a system could perform basic database management functions.
Although Micron demonstrated significant advantages of this approach, nobody, not even Micron, has followed through with a product until now.
NGD briefed me and explained that the data explosion expected with the Internet of Things will not Continue reading “NGD’s New “In-Situ Processing” SSD”
I have been receiving questions lately from people who are puzzled when companies use different parameters than their competitors to specify the endurance of their SSDs. How do you compare one against the other? Some companies even switch from one parameter to another to define the endurance of different SSDs within their own product line.
I have found that Intel uses three different endurance measures for its products: DWPD (drive writes per day), TBW (terabytes written), and GB/day.
There’s no real difference among these measures – each is a way of stating how many times each of the SSD’s locations can be overwritten before the drive has gone past its warrantied life.
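Because the three measures describe the same underlying wear budget, converting between them is simple arithmetic. A minimal sketch follows; the 400GB capacity, 3 DWPD rating, and 5-year warranty in the example are hypothetical values chosen for illustration:

```python
# Convert between the three common SSD endurance ratings:
# DWPD (drive writes per day), TBW (terabytes written), and GB/day.

def dwpd_to_tbw(dwpd: float, capacity_gb: float, warranty_years: float) -> float:
    """Total terabytes written over the warranty period."""
    return dwpd * capacity_gb * 365 * warranty_years / 1000

def dwpd_to_gb_per_day(dwpd: float, capacity_gb: float) -> float:
    """Gigabytes that may be written each day."""
    return dwpd * capacity_gb

def tbw_to_dwpd(tbw: float, capacity_gb: float, warranty_years: float) -> float:
    """Drive writes per day implied by a TBW rating."""
    return tbw * 1000 / (capacity_gb * 365 * warranty_years)

# Hypothetical example: a 400 GB drive rated at 3 DWPD with a 5-year warranty
tbw = dwpd_to_tbw(3, 400, 5)          # 2190 TBW
gb_day = dwpd_to_gb_per_day(3, 400)   # 1200 GB/day
```

Given any one rating plus the drive’s capacity and warranty period, the other two fall out directly, which is why vendors can quote whichever figure suits them.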
The relationships between these three measures are illustrated in this post’s graphic. You can click on it to see an expanded version. It’s all pretty simple. We’ll spell out the relationships in detail below, but in brief, if you want to compare Continue reading “Comparing Wear Figures on SSDs”
SSDs use a huge number of internal parameters to achieve a tricky balance between performance, wear, and cost. The SSD Guy likes to compare this to a recording studio console like the one in this post’s graphic to emphasize just how tricky it is for SSD designers to find the right balance. Imagine trying to manage all of those knobs! (The picture is JacoTen’s Wikipedia photo of a Focusrite console.)
Vendors who produce differentiated SSDs pride themselves in their ability to fine-tune these parameters to achieve better performance or endurance than competing products.
About a year ago I suggested to the folks at NVMdurance that they might consider applying their machine learning algorithm to this problem. (The original NVMdurance product line was described in a Memory Guy post a while ago.) After all, the company makes a machine learning engine that tunes the numerous internal parameters of a NAND flash chip to extend the chip’s life while maintaining the specified performance. SSD management would be a natural use of machine learning since both SSDs and NAND flash chips currently use difficult and time-consuming manual processes to find the best mix of parameters to drive the design.
Little did I know that NVMdurance’s researchers Continue reading “Managing SSDs Using Machine Learning”
Sometimes it’s enlightening to compare several viewpoints on similar data. At yesterday’s SNIA Persistent Memory Summit a number of presentations provided interesting overlapping views on certain subjects.
One of particular interest to The SSD Guy was latency vs. IOPS. Tom Coughlin of Coughlin Associates and I presented the findings from our recently-published IOPS survey report and in Slide 19 displayed the basic chart behind this post’s graphic (click to enlarge, or, better yet, right-click to open in a new tab). This chart compares how many IOPS our respondents said they need for the storage in their most important application, and compared that to the latency they required from this storage. For comparison’s sake we added a reference column on the left to roughly illustrate the latency of various standard forms of storage and memory.
You can see that we received a great variety of inputs spanning a very wide range of IOPS and latency needs, and that these didn’t all line up as neatly as we would have anticipated. One failing of this chart format is that it doesn’t account for multiple replies at the same IOPS/latency combination: if we had been able to include that, the chart would have shown a clearer trendline running from the upper left to the lower right. Instead we have a band that broadly follows that upper-left-to-lower-right trend.
Two other speakers presented the IOPS and latency that could be Continue reading “Latency, IOPS & NVDIMMs”
Why are HDD prices tracking SSD prices? Why don’t they cross over? These are questions that The SSD Guy is often asked, especially by people who anticipate a crossover in the near future.
In essence it’s because both the HDD industry and the semiconductor industry have set goals for themselves to achieve 30% average annual price reductions. If they are both on the same trajectory, and if there’s an order of magnitude difference between HDD and SSD prices today, then there will be an order of magnitude difference in the future as well.
The 30% average annual decline in SSD prices has a convenient name: Moore’s Law. Although there’s no physical, economic, or other restriction behind Moore’s Law (so it’s not really a law at all), it serves as a guide for the industry. Chip makers set their sights on doubling the number of transistors on a chip every couple of years, and this equates to average annual price decreases of 30%.
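The arithmetic behind that equivalence is worth spelling out. This is a minimal sketch; the $/GB starting prices are illustrative assumptions, not market data:

```python
# If cost per bit halves every two years (doubled transistor count at
# roughly constant chip cost), the implied one-year price ratio is:
halving_period_years = 2.0
annual_ratio = 0.5 ** (1 / halving_period_years)  # ~0.707
annual_decline_pct = (1 - annual_ratio) * 100     # ~29.3%, i.e. "about 30%"

# If SSDs and HDDs both decline at this same rate, their price ratio
# never changes. Starting prices below are illustrative assumptions:
ssd_price, hdd_price = 1.00, 0.10  # $/GB
for year in range(5):
    ssd_price *= annual_ratio
    hdd_price *= annual_ratio
price_ratio = ssd_price / hdd_price  # still 10x after five years
```

In other words, halving every two years works out to a 29.3% annual decline, which the industry rounds to 30%, and two curves falling at the same rate maintain a constant gap on a log scale.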
The HDD business also Continue reading “Why SSD and HDD Prices Move in Parallel”
This Sunday (Sept. 20, 2015) I will be presenting my company’s findings on the 3D XPoint memory that was introduced by Intel and Micron in July. I will be speaking at the Storage Networking Industry Association (SNIA) Storage Developer Conference (SDC) Pre-Conference Primer. You can click the name to be taken to the agenda.
This won’t be the only talk about persistent memory technology at the conference. Prior to my presentation storage consultants Tom Coughlin and Ed Grochowski will give an overview of advances in nonvolatile memories, and following my presentation will be two Intel talks.
Intel will be covering this new technology a lot during the conference. Of a total of 120 presentations at the conference and pre-conference primer, Intel will be presenting nine, seven of which directly name persistent memory or nonvolatile memory in the title. Other firms will also be talking about NVM: AgigA, Calypso, HP, Pure Storage, and SMART Modular. Even Microsoft alludes to it in a couple of its presentation titles. Persistent memory is a hot issue.
So, the question for readers of The SSD Guy blog is: “Will this do away with SSDs?”
This is a question that was Continue reading “3D XPoint Memory at the Storage Developer’s Conference”
From time to time IT managers ask The SSD Guy if there’s an easy way to compare SSDs made with MLC flash against those made using eMLC flash. Most folks understand that eMLC flash is a less costly alternative to SLC flash, both of which provide longer wear than standard MLC flash, but not everyone realizes that eMLC’s superior endurance comes at the cost of slower write speed. By writing to the flash more gently, the technology can be made to last considerably longer.
So how do you compare the two? OCZ introduced MLC and eMLC versions of the same SSD this week, and this provides a beautiful opportunity to explore the difference.
As you would expect, the read parameters are all identical. This stands to reason, since Continue reading “MLC vs. eMLC – What’s the Difference?”
I have to admit that it’s embarrassing when The SSD Guy misses something important in the world of flash storage, but I only recently learned of a paper that Baidu, China’s leading search engine, presented at the ASPLOS conference a year ago. The paper details how Baidu changed the way they use flash to gain significant benefits over their original SSD-based systems.
After having deployed 300,000 standard SSDs over the preceding seven years, Baidu engineers looked for ways to achieve higher performance and more efficient use of the flash they were buying. Their approach was to strip the SSD of all functions that could be better performed by the host server, and to reconfigure the application software and operating system to make the best of flash’s idiosyncrasies.
You can only do this if you have control of both the system hardware and software.
The result was SDF, or “Software-Defined Flash”, a card that Continue reading “Baidu Goes Beyond SSDs”