There have been numerous changes to SSDs since they moved into the mainstream 15 years ago, with controllers providing increasing, then decreasing endurance levels, and offering greater, then lesser levels of autonomy. What has been missing is any ability for the system to determine the level of performance that the SSD provides.
Recently Kioxia, the company formerly known as Toshiba Memory, announced Continue reading “What’s Software-Enabled Flash?”
At its October Insight Conference Micron Technology finally revealed its 3D XPoint SSD, dubbed the X100.
While the company didn’t disclose too much about the device, it did brag about its speed, claiming that the X100 is the world’s fastest SSD, running three times faster than the fastest NAND flash SSDs and almost three times the speed of other XPoint SSDs. The product is said to Continue reading “Micron’s New XPoint SSD Finally Arrives”
It recently dawned on me that one of the charts that I most frequently use in my presentations has never been explained in The SSD Guy blog. This is a serious oversight that I will correct with this post.
The Memory/Storage Hierarchy (also called the Storage/Memory Hierarchy, depending on your perspective) is a very simple way to Continue reading “The Memory/Storage Hierarchy”
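The idea behind the chart is that each step down the hierarchy trades speed for cost per byte. A minimal sketch of that trade-off, using illustrative round-number latencies (my assumptions, not figures from the chart itself):

```python
# A rough sketch of the memory/storage hierarchy: each tier down is
# slower but cheaper per byte. Latency figures are illustrative round
# numbers, not vendor specifications.
hierarchy = [
    ("SRAM cache",      1),           # ~1 ns
    ("DRAM",            100),         # ~100 ns
    ("NVDIMM / XPoint", 1_000),       # ~1 microsecond
    ("NAND flash SSD",  100_000),     # ~100 microseconds
    ("HDD",             10_000_000),  # ~10 milliseconds
]

for tier, latency_ns in hierarchy:
    print(f"{tier:>16}: ~{latency_ns:>10,} ns")
```

The seven-orders-of-magnitude spread from cache to HDD is what makes the chart worth drawing at all.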
I was recently reminded of a presentation made by GoDaddy way back in the 2013 Flash Memory Summit in which I first heard the statement: “Failure is not an option — it is a requirement!” That’s certainly something that got my attention! It just sounded wrong.
In fact, this expression was used to describe a very pragmatic approach the company’s storage team had devised to determine the exact maximum load that could be supported by any piece of its storage system.
This is key, since, at the time, GoDaddy claimed to be the world’s largest web hosting service, with 11 million users, 54 million registered domains, and over 5 million hosting accounts, with a 99.9% uptime guarantee (although the internal goal was 99.999% – five nines!).
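Those availability percentages translate into surprisingly different downtime budgets. A quick back-of-the-envelope conversion:

```python
# Convert an availability guarantee into allowed downtime per year.
def downtime_minutes_per_year(availability):
    minutes_per_year = 365 * 24 * 60   # 525,600 minutes
    return (1.0 - availability) * minutes_per_year

three_nines = downtime_minutes_per_year(0.999)    # the public guarantee
five_nines = downtime_minutes_per_year(0.99999)   # the internal goal

print(f"99.9%   allows {three_nines:.1f} minutes of downtime per year")
print(f"99.999% allows {five_nines:.2f} minutes of downtime per year")
```

Three nines allows roughly 8.8 hours of downtime a year; five nines allows barely five minutes, which is why knowing the exact breaking point of every storage component mattered so much.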
The presenters outlined four stages of how validation processes had Continue reading “Failure is Not an Option — It’s a Requirement!”
Although the Trim command has been defined for nearly a decade, for some reason I have never written a post to explain it. It’s time for that to change.
Trim is something that was never required for HDDs, so it was a new command that was defined once SSDs became prevalent. The command is required because of one of those awkward encumbrances that NAND users must accommodate: Erase before write.
NAND flash bits cannot be altered the way an HDD’s can. On an HDD a bit that’s currently set to a “1” can be re-written to a “0” and vice versa, and writing a bit either way takes the same amount of time. In NAND flash a 1 can be written to a 0, but the opposite is not the case. Instead, an entire block (a group of many pages, each page typically 4-16K bytes) must be erased at once, after which all of its bits are set to 1. Once that has been done, zeros can be written into the block’s pages to store data. An erase is an excruciatingly slow operation, taking milliseconds, far longer than a read. Writes are faster than erases, but they’re still slow.
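The erase-before-write rule can be captured in a toy model. This is a deliberately simplified sketch (tiny block, invented class names), not how a real flash translation layer is written:

```python
# Toy model of NAND's erase-before-write rule: a write can only clear
# bits (1 -> 0); restoring any bit to 1 requires erasing the whole block.
class NandBlock:
    def __init__(self, pages=4, page_bits=8):
        # An erased block reads as all 1s.
        self.pages = [[1] * page_bits for _ in range(pages)]

    def write(self, page, data):
        # Writing may clear bits, but a 0 can never become a 1 here.
        for i, bit in enumerate(data):
            if bit == 1 and self.pages[page][i] == 0:
                raise ValueError("cannot write 1 over 0: erase the block first")
            self.pages[page][i] = bit

    def erase(self):
        # Erase acts on the entire block at once, resetting it to all 1s.
        for page in self.pages:
            page[:] = [1] * len(page)

block = NandBlock()
block.write(0, [1, 0, 1, 0, 1, 0, 1, 0])      # fine: only clears bits
try:
    block.write(0, [1, 1, 1, 1, 1, 1, 1, 1])  # needs 0 -> 1, so it fails
except ValueError as err:
    print("write rejected:", err)
block.erase()                                 # slow, and all-or-nothing
block.write(0, [1, 1, 1, 1, 1, 1, 1, 1])      # now it succeeds
```

The model shows why overwriting data in place is impossible, and why the controller needs to know (via Trim) which pages hold stale data it can erase at its leisure.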
Let’s say that a program needs to Continue reading “What is an SSD Trim Command?”
The Storage Developer Conference in September gave a rare glimpse into two very different directions that SSD architectures are pursuing. While some of the conference’s presentations touted SSDs with increasing processing power (Eideticom, NGD, Samsung, and ScaleFlux) other presentations advocated moving processing power out of the SSD and into the host server (Alibaba, CNEX, and Western Digital).
Why would either of these make sense?
A standard SSD has a very high internal bandwidth that encounters a bottleneck as data is forced through a narrower interface. It’s easy to see that an SSD with 20+ NAND chips, each with an 8-bit interface, could access all 160 bits simultaneously. Since there’s already a processor inside the SSD, why not open it to external programming so that it can perform certain tasks within the SSD itself and harness all of that bandwidth?
Example tasks would include Continue reading “SSDs Need Controllers with More, NO! Less Power”
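The bandwidth mismatch behind that argument is easy to put into numbers. Here is a back-of-the-envelope comparison using assumed figures (per-chip transfer rate and host link speed are my illustrative choices, not any specific product’s numbers):

```python
# Back-of-the-envelope: aggregate internal NAND bandwidth vs. the host
# interface. All figures are assumptions chosen for illustration.
chips = 20                          # NAND chips on the SSD
bus_width_bits = 8                  # per-chip interface width
transfers_per_sec = 400_000_000     # 400 MT/s per chip (assumed)

internal_bytes_per_sec = chips * (bus_width_bits // 8) * transfers_per_sec
host_link_bytes_per_sec = 3_500_000_000   # ~PCIe 3.0 x4 NVMe link (assumed)

print(f"internal : {internal_bytes_per_sec / 1e9:.1f} GB/s")
print(f"host link: {host_link_bytes_per_sec / 1e9:.1f} GB/s")
```

Under these assumptions the controller can see more than twice the bandwidth the host can, which is exactly the headroom that in-SSD computation tries to harness.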
A few years ago The SSD Guy posted an analogy that Intel’s Jim Pappas uses to illustrate the latency differences between DRAM, an SSD, and an HDD. If we look at DRAM latency to be a single heartbeat, then what happens when we scale that timing up to represent SSDs and HDDs? How many heartbeats would it take to access either one, and what could you do in that time?
I still think it’s a pretty interesting way to make all these latency differences easier to understand.
Just recently I learned of a Rich Report video of a 2015 presentation in which Micron’s Ryan Baxter uses a different and equally interesting analogy based on tomatoes.
Tomatoes aren’t the first thing that comes to my mind when I think about SSDs, but this video may change my way of thinking!
The tomato slide, 9:30 into the presentation, is Continue reading “Comparing SSDs to Tomatoes”
My friend and associate Eden Kim of Calypso Systems has published a new white paper on real workloads for SSDs.
This is the company that has helped the Storage Networking Industry Association (SNIA) to develop performance tests for SSDs that get past the issues that plague SSD users: Yes, it does well when it’s new, but how will an SSD perform after a year or two of service?
Calypso has recently published a new White Paper entitled: Datacenter Server Real World Workloads. This document analyzes real-life datacenter server workloads and performance to provide important insight into how an SSD might perform in actual environments rather than under synthesized workloads. It compares data center class SSDs against SAS HDDs to take much of the guesswork out of IOPS requirements, endurance needs, and so forth, by comparing the measured activity over 24 hours of a 2,000-outlet retail chain web portal running SQL.
The tests in the paper represent a Continue reading “Getting the Most from Data Center SSDs”
This post is the second of a two-part SSD Guy series outlining the nonvolatile DIMM or NVDIMM. The first part explained what an NVDIMM is and how they are named. This second part describes the software used to support NVDIMMs (BIOS, operating system, and processor instructions) and discusses issues of security.
Today’s standard software boots a computer under the assumption that the memory at boot-up contains random bits — this needed to be changed to support NVDIMMs. The most fundamental of these changes was to the BIOS (Basic Input/Output System), the code that “wakes up” the computer.
The BIOS is responsible for detecting all of the computer’s hardware and installing the appropriate drivers, after which it loads the bootstrap program from the mass storage device into the DRAM main memory. When an NVDIMM is used the BIOS must Continue reading “An NVDIMM Primer (Part 2 of 2)”
NVDIMMs are gaining interest lately, so The SSD Guy thought it might be worthwhile to explain both what they are and how NVDIMM nomenclature works.
As I was writing it I noticed that the post got pretty long, so I have split it into two parts. The first part explains what an NVDIMM is and defines the names for today’s three kinds of NVDIMM. The second part tells about software changes used to support NVDIMMs in BIOS, operating systems, and even processor instruction sets. It also discusses the problem of security.
In case the name is unfamiliar, NVDIMM stands for “Nonvolatile Dual-Inline Memory Module.” Standard computer memory – DRAM – is inserted into the system in the DIMM form factor, but DRAM loses its data when power is removed. The NVDIMM is nonvolatile, or persistent, so its data remains intact despite a loss of power. This takes some effort and always costs more for reasons that will be explained shortly.
Although it might seem a little odd to discuss memory in a forum devoted to SSDs, which are clearly storage, the NVDIMM is a storage device, so it rightly Continue reading “An NVDIMM Primer (Part 1 of 2)”