With all the recent interest in CXL, and its ability to connect a processor to any memory, no matter the speed, it’s only natural that someone would try using it for SSDs. This notion is the basis for the Memory-Semantic SSD, or MS-SSD.
But MS-SSDs suffer from the same problem as SSDs, hard drives, and other mass storage. The basic concept requires … Continue reading “NVMe-oC: Wolley’s New Take on CXL-Based SSDs”
On June 27, The SSD Guy moderated a 1-hour CXL webinar, sponsored by the Storage Networking Industry Association (SNIA). The slides are already online at the SNIA Educational Library, and a recorded video is available on BrightTalk. Both attendees and those who missed it can now take advantage of this rare collection of … Continue reading “SNIA CXL Webinar”
There have been numerous changes to SSDs since they moved into the mainstream 15 years ago, with controllers providing increasing, then decreasing endurance levels, and offering greater, then lesser levels of autonomy. What has been missing is any ability for the system to determine the level of performance that the SSD provides.
Recently Kioxia, the company formerly known as Toshiba Memory, announced … Continue reading “What’s Software-Enabled Flash?”
It recently dawned on me that one of the charts that I most frequently use in my presentations has never been explained in The SSD Guy blog. This is a serious oversight that I will correct with this post.
The Memory/Storage Hierarchy (also called the Storage/Memory Hierarchy, depending on your perspective) is a very simple way to … Continue reading “The Memory/Storage Hierarchy”
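The hierarchy's defining tradeoff can be sketched in a few lines of code. The tier names are standard, but the latency and cost figures below are order-of-magnitude illustrations I've assumed for the sketch, not measured or vendor-supplied numbers:

```python
# Illustrative memory/storage hierarchy tiers. Latency and $/GB are
# rough, assumed order-of-magnitude figures for demonstration only.
tiers = [
    # (name, typical latency in nanoseconds, rough cost in $/GB)
    ("CPU cache",  1e0, 100.00),
    ("DRAM",       1e2,  10.00),
    ("NVM/Optane", 1e3,   5.00),
    ("NVMe SSD",   1e5,   0.10),
    ("HDD",        1e7,   0.03),
]

# The hierarchy's defining property: every step away from the CPU
# trades speed for lower cost per gigabyte.
for fast, slow in zip(tiers, tiers[1:]):
    assert fast[1] < slow[1]   # each step down is slower...
    assert fast[2] > slow[2]   # ...but cheaper per gigabyte

for name, latency_ns, cost in tiers:
    print(f"{name:10s} ~{latency_ns:>12,.0f} ns  ~${cost}/GB")
```

The monotonic speed/cost ordering is the whole point of the chart: each layer caches the hotter data of the slower, cheaper layer beneath it.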
This post completes The SSD Guy’s four-part series to help explain Intel’s two recently-announced modes of accessing its Optane DIMM, formally known as the “Intel Optane DC Persistent Memory.”
Comparing the Modes
In the second and third parts of this series we discussed Intel’s Memory Mode and the company’s App Direct Mode. This final part aims to compare the two: When would you use one and when the other?
There’s really no simple answer. As with all benchmarks, certain applications will perform better with one mode than with another, while other applications will behave the opposite way. Adding to the problem is the fact that App Direct Mode actually supports not one but four different access methods, which will be further explained below. As a rule of thumb, performance for large serial accesses might be … Continue reading “Intel’s Optane: Two Confusing Modes. Part 4) Comparing the Modes”
I was recently reminded of a presentation made by GoDaddy way back in the 2013 Flash Memory Summit in which I first heard the statement: “Failure is not an option — it is a requirement!” That’s certainly something that got my attention! It just sounded wrong.
In fact, this expression was used to describe a very pragmatic approach the company’s storage team had devised to determine the exact maximum load that could be supported by any piece of its storage system.
This is key since, at the time, GoDaddy claimed to be the world’s largest web hosting service, with 11 million users, 54 million registered domains, over 5 million hosting accounts, and a 99.9% uptime guarantee (although the internal goal was 99.999%, or five nines!).
The presenters outlined four stages of how validation processes had … Continue reading “Failure is Not an Option — It’s a Requirement!”
This post is a continuation of a four-part series in The SSD Guy blog to help explain Intel’s two recently-announced modes of accessing its Optane DIMM, formally known as the “Intel Optane DC Persistent Memory.”
App Direct Mode
Intel’s App Direct Mode is the more interesting of the two Optane operating modes since it supports in-memory persistence, which opens up a new and different approach to improving the performance of tomorrow’s standard software. While today’s software operates under the assumption that data can only be persistent if it is written to slow storage (SSDs, HDDs, the cloud, etc.), Optane under App Direct Mode allows data to persist at memory speeds, as do other nonvolatile memories like NVDIMMs under the SNIA NVM Programming Model.
App Direct Mode implements the full SNIA NVM Programming Model described in an earlier SSD Guy post and allows software to … Continue reading “Intel’s Optane: Two Confusing Modes. Part 3) App Direct Mode”
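The load/store persistence idea behind the programming model can be sketched with ordinary memory mapping. This is only an analogy under stated assumptions: a plain temporary file and `msync`-style flush stand in for what, on real persistent memory, would be a DAX-mapped file with cache-line flushes (e.g. via a pmem library):

```python
import mmap
import os
import tempfile

# Sketch of the load/store persistence model: map a region into memory,
# store bytes directly with no write() syscall, then flush to make them
# durable. A regular file stands in for persistent memory here; on real
# pmem a DAX mapping and cache-line flushes would replace file + msync.
path = os.path.join(tempfile.mkdtemp(), "pmem_demo")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)            # size the mapped region

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 4096)
    mm[0:5] = b"hello"                 # a plain in-memory store
    mm.flush()                         # the persistence point (msync)
    mm.close()

with open(path, "rb") as f:
    assert f.read(5) == b"hello"       # data survives the mapping's teardown
```

The key contrast with conventional storage is that the store itself is just a memory write; persistence is a cheap flush rather than a trip through the block I/O stack.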
This post is the second part of a four-part series in The SSD Guy blog to help explain Intel’s two recently-announced modes of accessing its Optane DIMM, formally known as the “Intel Optane DC Persistent Memory.”
The most difficult thing to understand about the Intel Optane DC Persistent Memory when used in Memory Mode is that it is not persistent. Go back and read that again, because it didn’t make any sense the first time you read it. It didn’t make any sense the second time either, did it?
Don’t worry. This is not really important. The difficulty stems from Intel’s marketing decision to call Optane DIMMs by the name “Intel Optane DC Persistent Memory.” Had they simply called them “Optane DIMMs,” like everyone expected them to, then there would have been … Continue reading “Intel’s Optane: Two Confusing Modes. Part 2) Memory Mode”
Intel recently announced two operating modes for the company’s new Optane DIMMs, formally known as “Intel Optane DC Persistent Memory.” The company has been trying to help the world to understand these two new operating modes but they are still pretty baffling to most of the people The SSD Guy speaks to. Some say that the concepts make their heads want to explode!
How does Optane’s “Memory Mode” work? How does “App Direct” Mode work? This four-part series will try to provide some answers.
Like all of my NVDIMM-related posts, this series challenges me with the question: “Should it be published in The SSD Guy, or in The Memory Guy?” This is a point of endless confusion for me, since NVDIMM and Intel’s Optane blur the lines between Memory and Storage. I have elected to post this in The SSD Guy with the hope that it will be found by readers who want to understand Optane for its storage capabilities.
Memory Mode is the easy sell for the short term. It works with all current application software without modification. It just makes it look like you have a TON of DRAM.
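A toy model shows the mechanics of making a small fast memory front a much larger slow one: software sees the big tier's capacity while the DRAM silently caches the hot data. The class, sizes, and LRU policy below are my illustrative assumptions for the sketch, not Intel's actual cache design:

```python
from collections import OrderedDict

# Toy model of a DRAM cache fronting a larger, slower memory tier.
# The LRU policy and tiny sizes are illustrative assumptions only;
# they are not a description of Intel's actual Memory Mode cache.
class MemoryMode:
    def __init__(self, dram_lines, optane):
        self.optane = optane            # large backing tier: addr -> value
        self.dram = OrderedDict()       # small LRU cache of hot lines
        self.dram_lines = dram_lines

    def read(self, addr):
        if addr in self.dram:           # DRAM hit: served at DRAM speed
            self.dram.move_to_end(addr)
            return self.dram[addr]
        value = self.optane[addr]       # miss: fetch from the slow tier
        self.dram[addr] = value
        if len(self.dram) > self.dram_lines:
            self.dram.popitem(last=False)   # evict least-recently-used line
        return value

# Software "sees" all 8 addresses even though only 2 lines fit in DRAM.
mem = MemoryMode(dram_lines=2, optane={a: a * 10 for a in range(8)})
assert mem.read(3) == 30    # first touch comes from the backing tier
assert 3 in mem.dram        # ...and the hot line now lives in DRAM
```

The same caching behavior is also why the mode is volatile in practice: what looks like one huge memory is really DRAM plus a backing tier managed behind the application's back.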
App Direct Mode is really cool if … Continue reading “Intel’s Optane: Two Confusing Modes. Part 1) Overview”
Data centers that use centralized storage, SANs or NAS, sometimes use servers to cache stored data and thus accelerate the average speed of storage. These caching servers sit on the network between the compute servers and storage, using a program called memcached to replicate a portion of the data stored in the data center’s centralized storage. Under this form of management, more-frequently-used data is served faster because it has been copied into a very large DRAM in the memcached server.
Such systems have been displaced over the past five or more years thanks to the growing availability of high-speed enterprise SSDs at affordable prices. Often direct-attached storage (DAS) in the form of an SSD within each server can be used to accelerate throughput. This can provide a considerable cost/performance benefit over the memcached approach, since DRAM costs about 20 times as much as the flash in an SSD. Even though the DRAM chips within the memcached server run about three orders of magnitude faster than a flash SSD, most of that speed is lost because the DRAM communicates over a slow LAN, so the DAS SSD’s performance is comparable to that of the memcached appliance.
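The back-of-envelope arithmetic behind that claim can be made explicit. The latency figures below are assumed round numbers for illustration (roughly 100 ns DRAM, 100 µs NVMe read, 200 µs LAN round trip), not measurements of any particular system:

```python
# Back-of-envelope read latencies, in microseconds.
# All three figures are assumed round numbers for illustration only.
DRAM_ACCESS    = 0.1      # ~100 ns: a DRAM access in the memcached server
SSD_ACCESS     = 100.0    # ~100 us: an NVMe SSD read, ~1000x slower than DRAM
LAN_ROUND_TRIP = 200.0    # ~200 us: the network hop to the caching server

memcached_read = LAN_ROUND_TRIP + DRAM_ACCESS   # the LAN dominates the total
das_ssd_read   = SSD_ACCESS                     # local read, no network hop

# DRAM's ~1000x raw speed advantage all but vanishes behind the LAN:
print(f"memcached: ~{memcached_read:.1f} us, DAS SSD: ~{das_ssd_read:.1f} us")
```

With these assumptions the local SSD actually comes out ahead, and even with a faster network the two land in the same order of magnitude, which is the whole argument for spending the money on flash instead of remote DRAM.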
There’s a catch to this approach, since the DAS SSD must be … Continue reading “A New Spin on Memcache”