There have been numerous changes to SSDs since they moved into the mainstream 15 years ago, with controllers providing increasing, then decreasing, endurance levels and offering greater, then lesser, degrees of autonomy. What has been missing is any way for the host system to determine the level of performance that the SSD provides.
It recently dawned on me that one of the charts that I most frequently use in my presentations has never been explained in The SSD Guy blog. This is a serious oversight that I will correct with this post.
This post completes The SSD Guy’s four-part series to help explain Intel’s two recently-announced modes of accessing its Optane DIMM, formally known as the “Intel Optane DC Persistent Memory.”
Comparing the Modes
In the second and third parts of this series we discussed Intel’s Memory Mode and the company’s App Direct Mode. This final part aims to compare the two: When would you use one and when the other?
There’s really no simple answer. As with all benchmarks, certain applications will perform better with one mode than with the other, while other applications will behave the opposite way. Adding to the problem is the fact that App Direct Mode actually supports not one but four different access methods, which will be further explained below. As a rule of thumb, performance for large serial accesses might be…
I was recently reminded of a presentation made by GoDaddy way back in the 2013 Flash Memory Summit in which I first heard the statement: “Failure is not an option — it is a requirement!” That’s certainly something that got my attention! It just sounded wrong.
In fact, this expression was used to describe a very pragmatic approach the company’s storage team had devised to determine the exact maximum load that could be supported by any piece of its storage system.
This is key, since, at the time, GoDaddy claimed to be the world’s largest web hosting service, with 11 million users, 54 million registered domains, and over 5 million hosting accounts, backed by a 99.9% uptime guarantee (although the internal goal was 99.999% – five nines).
This post is a continuation of a four-part series in The SSD Guy blog to help explain Intel’s two recently-announced modes of accessing its Optane DIMM, formally known as the “Intel Optane DC Persistent Memory.”
App Direct Mode
Intel’s App Direct Mode is the more interesting of the two Optane operating modes since it supports in-memory persistence, which opens up a new and different approach to improving the performance of tomorrow’s standard software. While today’s software operates under the assumption that data can only be persistent if it is written to slow storage (SSDs, HDDs, the cloud, etc.), Optane under App Direct Mode allows data to persist at memory speeds, as do other nonvolatile memories, like NVDIMMs, under the SNIA NVM Programming Model.
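The load/store persistence that App Direct enables can be approximated in ordinary code by memory-mapping a file and flushing stores explicitly. Here is a minimal sketch, with a temp file standing in for a DAX-mapped Optane region; real App Direct code would map actual persistent memory and use a library such as PMDK, which flushes CPU caches rather than calling the OS:

```python
import mmap
import os
import tempfile

# Hypothetical stand-in for a DAX-mapped persistent-memory file.
path = os.path.join(tempfile.gettempdir(), "pmem_demo.bin")

with open(path, "wb") as f:
    f.truncate(4096)            # size the "persistent" region

with open(path, "r+b") as f:
    buf = mmap.mmap(f.fileno(), 4096)
    buf[0:5] = b"hello"         # a plain store through the mapping
    buf.flush()                 # push the update toward the persistence domain
    buf.close()

# After a simulated crash and restart, the data is still there.
with open(path, "rb") as f:
    print(f.read(5))            # b'hello'
```

The key idea is the same as under the SNIA NVM Programming Model: the application writes with ordinary stores and then explicitly makes the data durable, with no block-storage I/O path involved.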
This post is the second part of a four-part series in The SSD Guy blog to help explain Intel’s two recently-announced modes of accessing its Optane DIMM, formally known as the “Intel Optane DC Persistent Memory.”
The most difficult thing to understand about the Intel Optane DC Persistent Memory when used in Memory Mode is that it is not persistent. Go back and read that again, because it didn’t make any sense the first time you read it. It didn’t make any sense the second time either, did it?
Intel recently announced two operating modes for the company’s new Optane DIMMs, formally known as “Intel Optane DC Persistent Memory.” The company has been trying to help the world to understand these two new operating modes but they are still pretty baffling to most of the people The SSD Guy speaks to. Some say that the concepts make their heads want to explode!
How does Optane’s “Memory Mode” work? How does “App Direct” Mode work? This four-part series will try to provide some answers.
Like all of my NVDIMM-related posts, this series challenges me with the question: “Should it be published in The SSD Guy, or in The Memory Guy?” This is a point of endless confusion for me, since NVDIMM and Intel’s Optane blur the lines between Memory and Storage. I have elected to post this in The SSD Guy with the hope that it will be found by readers who want to understand Optane for its storage capabilities.
Memory Mode is the easy sell for the short term. It works with all current application software without modification. It just makes it look like you have a TON of DRAM.
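The mechanism behind that illusion can be shown with a loose toy model (an assumed simplification, not Intel’s actual cache design): the DRAM quietly becomes a cache in front of the much larger Optane capacity, and software sees only the Optane capacity as ordinary volatile memory:

```python
# Toy model of Memory Mode's caching: a small DRAM fronts a large
# Optane DIMM, and only the Optane capacity is visible to software.
DRAM_LINES = 4                                        # tiny pretend-DRAM
optane = {addr: f"data{addr}" for addr in range(16)}  # larger "Optane"

dram = {}                          # the hidden DRAM cache: addr -> data

def load(addr):
    """Read one 'cache line', reporting where it was found."""
    if addr in dram:
        return dram[addr], "DRAM hit (fast)"
    if len(dram) >= DRAM_LINES:    # evict to make room (arbitrary victim)
        dram.pop(next(iter(dram)))
    dram[addr] = optane[addr]      # fill the line from Optane (slower)
    return dram[addr], "Optane fill (slower)"

print(load(3))   # first touch is filled from Optane
print(load(3))   # a repeat access hits in the hidden DRAM
```

Applications never see the DRAM at all; they simply see a very large memory whose speed varies with how well the cache is working.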
Data centers that use centralized storage, whether SANs or NAS, sometimes use servers to cache stored data and thus accelerate the average speed of storage. These caching servers sit on the network between the compute servers and the storage, using a program called memcached to replicate a portion of the data held in the data center’s centralized storage. Under this form of management, more frequently used data is served faster because it has been copied into the memcached server’s very large DRAM.
Such systems have been displaced over the past five or more years thanks to the growing availability of high-speed enterprise SSDs at affordable prices. Often direct-attached storage (DAS), in the form of an SSD within each server, can be used to accelerate throughput. This can provide a considerable cost/performance benefit over the memcached approach, since DRAM costs about 20 times as much as the flash in an SSD. Even though the DRAM chips within the memcached server run about three orders of magnitude faster than a flash SSD, most of that speed is lost because the DRAM communicates over a slow LAN, so the DAS SSD’s performance is comparable to that of the memcached appliance.
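The lookup pattern a memcached tier implements — check the DRAM cache first, fall back to central storage on a miss and replicate the result — is the classic cache-aside pattern. A minimal sketch, with dictionaries standing in for the DRAM cache and the SAN/NAS (the keys and names are hypothetical):

```python
class CacheServer:
    """Cache-aside lookup, roughly what a memcached tier does."""

    def __init__(self, backing):
        self.backing = backing   # the slow, central SAN/NAS storage
        self.dram = {}           # the server's large DRAM cache
        self.hits = 0

    def get(self, key):
        if key in self.dram:        # hit: answered at DRAM speed
            self.hits += 1
            return self.dram[key]
        value = self.backing[key]   # miss: fetch over the network
        self.dram[key] = value      # replicate for the next request
        return value

san = {"user:42": "Ann", "user:43": "Bob"}   # stand-in for central storage
cache = CacheServer(san)
cache.get("user:42")        # first request misses and fills the cache
cache.get("user:42")        # second request is a DRAM hit
print(cache.hits)           # 1
```

The comparison in the text is about where this cache lives: across a LAN in a memcached appliance, versus inside the server as a DAS SSD.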
A few years ago The SSD Guy posted an analogy that Intel’s Jim Pappas uses to illustrate the latency differences between DRAM, an SSD, and an HDD. If we take DRAM latency to be a single heartbeat, what happens when we scale that timing up to represent SSDs and HDDs? How many heartbeats would it take to access either one, and what could you do in that time?
I still think it’s a pretty interesting way to make all these latency differences easier to understand.
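With some typical latency figures (assumed here for illustration, not taken from Pappas’s talk), the scaling is easy to compute:

```python
# Rough, assumed access latencies in nanoseconds (illustrative only).
latency_ns = {"DRAM": 100, "SSD": 100_000, "HDD": 10_000_000}

beat = latency_ns["DRAM"]          # scale: one DRAM access = one heartbeat
for device, ns in latency_ns.items():
    beats = ns // beat
    minutes = beats / 60           # at roughly 60 heartbeats per minute
    print(f"{device}: {beats:,} heartbeat(s) (~{minutes:,.1f} minutes)")
```

On this scale an SSD access is a wait of 1,000 heartbeats, about a quarter of an hour, while an HDD access is 100,000 heartbeats, which is more than a day.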
Pricing is $44 for a 16GB module and $77 for 32GB. That works out to $2.75 and $2.41 (respectively) per gigabyte, or about half the price of DRAM. Intel says that these products will ship on April 24.
What’s most interesting about Intel’s Optane pitch is that the company appears to be telling the world that SSDs are no longer important, with its use of the slogan: “Get the speed, keep the capacity.” This message is designed to directly address the quandary that faces PC buyers when considering an SSD: Do they want an SSD’s speed so much that they are willing to accept either…