SSD Performance

What’s Software-Enabled Flash?

There have been numerous changes to SSDs since they moved into the mainstream 15 years ago, with controllers providing increasing, then decreasing endurance levels, and offering greater, then lesser levels of autonomy.  What has been missing is any ability for the system to determine the level of performance that the SSD provides.

Recently Kioxia, the company formerly known as Toshiba Memory, announced a new initiative called “Software-Enabled Flash” that aims to provide a consistent interface between software and SSDs, allowing the software to choose the level of involvement it wants to have in the SSD’s behavior.

First, let’s talk a little bit about the problem.  NAND flash memory requires significant management.  The whole concept of NAND flash is that it’s OK for it to be phenomenally difficult to work with as long as it’s the cheapest memory available.  Here’s a list of a few of the reasons Continue reading

Micron’s New XPoint SSD Finally Arrives

At its October Insight Conference Micron Technology finally revealed its 3D XPoint SSD, dubbed the X100.

While the company didn’t disclose too much about the device, it did brag about its speed, claiming that the X100 is the world’s fastest SSD, running three times faster than the fastest NAND flash SSDs and almost three times the speed of other XPoint SSDs.  The product is said to provide a very impressive 2.5 million IOPS for 4kB random reads at a queue depth of 1 and to support a 9GB/second bandwidth in read, write, and mixed traffic.  (NAND flash SSDs perform much better at reads than at writes due to the underlying NAND chips’ extraordinarily lopsided read and write specifications, among other quirks.)
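As a quick sanity check on those quoted figures, 2.5 million 4 KB random read IOPS can be converted to a bandwidth.  The arithmetic below is my own back-of-envelope calculation, not something Micron disclosed:

```python
# Convert the quoted X100 figure (2.5 million IOPS at 4 KB) to bandwidth.
iops = 2_500_000
io_bytes = 4 * 1024          # 4 KB per random read

gb_s = iops * io_bytes / 1e9
print(f"{gb_s:.2f} GB/s")    # ≈ 10.24 GB/s
```

That lands in the same neighborhood as the quoted 9 GB/s bandwidth figure, which suggests the small-block and large-block numbers are internally consistent.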

Just as Intel has done, Micron plans to introduce storage products first, before bringing out a memory module to fit into a DIMM slot.

How did Micron achieve this impressive level of performance?  Well, in addition to Continue reading

The Memory/Storage Hierarchy

It recently dawned on me that one of the charts that I most frequently use in my presentations has never been explained in The SSD Guy blog.  This is a serious oversight that I will correct with this post.

The Memory/Storage Hierarchy (also called the Storage/Memory Hierarchy, depending on your perspective) is a very simple way to explain why there are multiple memory and storage types within a system: Why is there a cache memory, or a DRAM, or an HDD?  The simple answer is that you can improve the system’s cost/performance ratio if you break the system down into an appropriate mix of fast & slow, expensive & cheap memory or storage.
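The cost/performance argument can be sketched with a little arithmetic.  The latencies and hit rate below are illustrative assumptions of mine, not figures from the post, but they show why a small fast layer in front of a big slow one pays off:

```python
# Sketch: average access time for a two-level hierarchy.
# Latencies and hit rate are assumed, illustrative values.

def avg_access_ns(hit_rate, fast_ns, slow_ns):
    """Average access time when a fraction hit_rate of accesses
    are served by the fast level and the rest fall through."""
    return hit_rate * fast_ns + (1 - hit_rate) * slow_ns

# Example: DRAM (~100 ns) in front of an SSD (~100,000 ns).
# A 95% hit rate gives ~5,095 ns average, far closer to DRAM
# speed than to SSD speed, at a fraction of all-DRAM cost.
print(avg_access_ns(0.95, 100, 100_000))
```

The same reasoning repeats at every level of the chart: cache in front of DRAM, DRAM in front of SSD, SSD in front of HDD.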

To explain this I go way back to the 1960s and review the concept of “Virtual Memory”.  This concept was first commercialized by computer maker Burroughs, although it was first implemented by the University of Manchester in England.  The basic concept was to provide programmers with an extraordinarily large memory in which to run their programs by fooling the program into thinking that the memory was as large as the magnetic disk.

I actually look at it from Continue reading

Failure is Not an Option — It’s a Requirement!

I was recently reminded of a presentation made by GoDaddy way back at the 2013 Flash Memory Summit, in which I first heard the statement: “Failure is not an option — it is a requirement!”  That’s certainly something that got my attention!  It just sounded wrong.

In fact, this expression was used to describe a very pragmatic approach the company’s storage team had devised to determine the exact maximum load that could be supported by any piece of its storage system.

This is key, since, at the time, GoDaddy claimed to be the world’s largest web hosting service with 11 million users, 54 million domains registered, over 5 million hosting accounts, with a 99.9% uptime guarantee (although the internal goal was 99.999% – five nines!)

The presenters outlined four stages of how validation processes had Continue reading

What is an SSD Trim Command?

Although the Trim command has been defined for nearly a decade, for some reason I have never written a post to explain it.  It’s time for that to change.

Trim is something that was never required for HDDs, so it was a new command that was defined once SSDs became prevalent.  The command is required because of one of those awkward encumbrances that NAND users must accommodate: Erase before write.

NAND flash bits cannot be altered the same way as an HDD’s.  In an HDD a bit that’s currently set to a “1” can be re-written to a “0” and vice versa, and writing a bit either way takes the same amount of time.  In NAND flash a 1 can be written to a 0, but the opposite is not the case.  Instead, an entire block, which contains many pages of 4-16K bytes each, must be erased at once, after which all of its bits are set to 1.  Once that has been done, 0s can be written into the block’s pages to store data.  An erase is an excruciatingly slow operation, taking orders of magnitude longer than a read.  Writes are faster than erases, but they’re still slow.
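The erase-before-write rule can be sketched in a few lines of code.  The block and page sizes below are toy values of my choosing (real NAND geometries are far larger), but the bit rule is the one described above: programming may only flip bits from 1 to 0, and only an erase brings them back:

```python
# Toy model of NAND erase-before-write.  Block/page sizes are
# illustrative; real NAND blocks hold many larger pages.

class NandBlock:
    def __init__(self, pages=4, page_bytes=8):
        # An erased block reads as all 1 bits (0xFF bytes).
        self.pages = [bytes([0xFF] * page_bytes) for _ in range(pages)]

    def program(self, page, data):
        """Program a page: bits may only change from 1 to 0."""
        old = self.pages[page]
        for o, n in zip(old, data):
            if n & ~o & 0xFF:  # would require a 0 -> 1 transition
                raise ValueError("must erase the whole block first")
        self.pages[page] = bytes(data)

    def erase(self):
        """Erase resets every bit in the block to 1 (slow in real NAND)."""
        self.pages = [bytes([0xFF] * len(p)) for p in self.pages]

blk = NandBlock()
blk.program(0, b"\x0F" * 8)        # fine: only 1 -> 0 transitions
try:
    blk.program(0, b"\xF0" * 8)    # needs 0 -> 1 bits: rejected
except ValueError as e:
    print(e)
blk.erase()                        # whole block back to all 1s
blk.program(0, b"\xF0" * 8)        # now it succeeds
```

Note that overwriting even one page forces an erase of the entire block, which is exactly the asymmetry Trim helps the SSD manage.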

Let’s say that a program needs to Continue reading

SSDs Need Controllers with More, NO! Less Power

The Storage Developer Conference in September gave a rare glimpse into two very different directions that SSD architectures are pursuing.  While some of the conference’s presentations touted SSDs with increasing processing power (Eideticom, NGD, Samsung, and ScaleFlux), other presentations advocated moving processing power out of the SSD and into the host server (Alibaba, CNEX, and Western Digital).

Why would either of these make sense?

A standard SSD has a very high internal bandwidth that encounters a bottleneck as data is forced through a narrower interface.  It’s easy to see that an SSD with 20+ NAND chips, each with an 8-bit interface, could access all 160 bits simultaneously.  Since there’s already a processor inside the  SSD, why not open it to external programming so that it can perform certain tasks within the SSD itself and harness all of that bandwidth?
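The mismatch between internal and external bandwidth is easy to put numbers on.  The per-chip transfer rate and host link speed below are my own assumed, round figures (not from the post), but they show the shape of the bottleneck:

```python
# Back-of-envelope: internal NAND bandwidth vs. the host interface.
# Per-chip rate and PCIe figure are assumed, illustrative values.

chips = 20                 # NAND packages in the SSD (from the post)
per_chip_mb_s = 400        # assumed per-chip transfer rate, MB/s

internal_gb_s = chips * per_chip_mb_s / 1000   # all chips in parallel
pcie3_x4_gb_s = 3.9        # approximate usable PCIe 3.0 x4 bandwidth

print(f"internal: {internal_gb_s} GB/s vs host link: {pcie3_x4_gb_s} GB/s")
```

Under these assumptions roughly half the internal bandwidth is stranded behind the interface, which is the capacity that in-SSD processing hopes to harness.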

Example tasks would include Continue reading

Comparing SSDs to Tomatoes

A few years ago The SSD Guy posted an analogy that Intel’s Jim Pappas uses to illustrate the latency differences between DRAM, an SSD, and an HDD.  If we take DRAM latency to be a single heartbeat, what happens when we scale that timing up to represent SSDs and HDDs?  How many heartbeats would it take to access either one, and what could you do in that time?

I still think it’s a pretty interesting way to make all these latency differences easier to understand.
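The heartbeat scaling is simple to work out.  The latency figures below are rough, typical values that I have assumed for illustration, not numbers from either presentation:

```python
# Scale device latencies as heartbeats, taking one DRAM access
# as one ~1-second heartbeat.  Latencies are assumed, typical values.

dram_ns = 100        # ~100 ns DRAM access = 1 heartbeat
ssd_ns = 100_000     # ~100 us NAND SSD read
hdd_ns = 10_000_000  # ~10 ms HDD seek + rotation

for name, ns in [("SSD", ssd_ns), ("HDD", hdd_ns)]:
    heartbeats = ns / dram_ns
    print(f"{name}: {heartbeats:,.0f} heartbeats "
          f"(~{heartbeats / 3600:.1f} hours at 1 beat/second)")
```

At one beat per second, an SSD access is a coffee break and an HDD access is more than a day, which is the gap both analogies are trying to make tangible.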

Just recently I learned of a Rich Report video of a 2015 presentation in which Micron’s Ryan Baxter uses a different and equally interesting analogy based on tomatoes.

Tomatoes aren’t the first thing that comes to my mind when I think about SSDs, but this video may change my way of thinking!

The tomato slide, 9:30 into the presentation, is Continue reading

Getting the Most from Data Center SSDs

My friend and associate Eden Kim of Calypso Systems has published a new white paper on real workloads for SSDs.

This is the company that has helped the Storage Networking Industry Association (SNIA) to develop performance tests for SSDs that get past the issues that plague SSD users: Yes, it does well when it’s new, but how will an SSD perform after a year or two of service?

Calypso has recently published a new white paper entitled Datacenter Server Real World Workloads.  This document analyzes real-life datacenter server workloads and performance to provide important insight into how an SSD might perform in actual environments rather than under synthesized workloads.  It compares data center class SSDs against SAS HDDs, taking a lot of the guesswork out of questions about IOPS requirements, endurance needs, and so forth, by comparing the measured activity over 24 hours of a 2,000-outlet retail chain web portal running SQL.

The tests in the paper represent a Continue reading

An NVDIMM Primer (Part 2 of 2)

This post is the second of a two-part SSD Guy series outlining the nonvolatile DIMM or NVDIMM.  The first part explained what an NVDIMM is and how they are named.  This second part describes the software used to support NVDIMMs (BIOS, operating system, and processor instructions) and discusses issues of security.

Software Changes

Today’s standard software boots a computer under the assumption that the memory at boot-up contains random bits — this needed to be changed to support NVDIMMs.  The most fundamental of these changes was to the BIOS (Basic Input/Output System), the code that “wakes up” the computer.

The BIOS is responsible for detecting all of the computer’s hardware and installing the appropriate drivers, after which it loads the bootstrap program from the mass storage device into the DRAM main memory.  When an NVDIMM is used the BIOS must Continue reading

An NVDIMM Primer (Part 1 of 2)

NVDIMMs are gaining interest lately, so The SSD Guy thought it might be worthwhile to explain both what they are and how NVDIMM nomenclature works.

As I was writing it I noticed that the post got pretty long, so I have split it into two parts.  The first part explains what an NVDIMM is and defines the names for today’s three kinds of NVDIMM.  The second part tells about software changes used to support NVDIMMs in BIOS, operating systems, and even processor instruction sets.  It also discusses the problem of security.

In case the name is unfamiliar, NVDIMM stands for “Nonvolatile Dual-Inline Memory Module.”  Standard computer memory – DRAM – is inserted into the system in the DIMM form factor, but DRAM loses its data when power is removed.  The NVDIMM is nonvolatile, or persistent, so its data remains intact despite a loss of power.  This takes some effort and always costs more for reasons that will be explained shortly.

Although it might seem a little odd to discuss memory in a forum devoted to SSDs, which are clearly storage, the NVDIMM is a storage device, so it rightly Continue reading