Jim Handy

Getting the Most from Data Center SSDs

My friend and associate Eden Kim of Calypso Systems has published a new white paper on real workloads for SSDs.

This is the company that has helped the Storage Networking Industry Association (SNIA) develop SSD performance tests that get past the question that plagues SSD users: yes, a drive does well when it’s new, but how will it perform after a year or two of service?

The new White Paper, entitled Datacenter Server Real World Workloads, analyzes real-life datacenter server workloads and performance to provide important insight into how an SSD might perform in actual environments rather than under synthesized workloads.  By comparing data center class SSDs against SAS HDDs over 24 hours of measured activity on a 2,000-outlet retail chain’s web portal running SQL, it takes much of the guesswork out of questions about IOPS requirements, endurance needs, and so forth.

The tests in the paper represent a “bake-off” that runs the workload through four different SSDs and one SAS HDD, with results illustrated in numerous charts, one of which serves as this post’s graphic.

The tests in the white paper are based on a simple program that can be downloaded to capture and analyze your own real-world workloads.  It is available for free download at TestMyWorkload.com, and includes I/O capture, data analytics, and real-world workload tests.

Once a system’s real-world workload has been captured, it can be used to evaluate the different kinds of storage that the system administrator is considering using in the system.

Give it a try.  Visit the Test My Workload page to learn more.

An NVDIMM Primer (Part 2 of 2)

This post is the second of a two-part SSD Guy series outlining the nonvolatile DIMM, or NVDIMM.  The first part explained what an NVDIMM is and how the different kinds are named.  This second part describes the software used to support NVDIMMs (BIOS, operating system, and processor instructions) and discusses issues of security.

Software Changes

Today’s standard software boots a computer under the assumption that the memory at boot-up contains random bits; this needed to be changed to support NVDIMMs.  The most fundamental of these changes was to the BIOS (Basic Input/Output System), the code that “wakes up” the computer.

The BIOS is responsible for detecting all of the computer’s hardware and installing the appropriate drivers, after which it loads the bootstrap program from the mass storage device into the DRAM main memory.  When an NVDIMM is used, the BIOS must Continue reading
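To give a flavor of the processor-instruction side of that support, here is a minimal sketch of a persistent store on an x86 CPU.  It assumes a processor with the CLWB instruction (compile with -mclwb) and a pointer that targets a memory-mapped NVDIMM region; the function name and scenario are illustrative, not taken from any particular NVDIMM driver.

#include <immintrin.h>  /* _mm_clwb(), _mm_sfence() */
#include <stdint.h>

/* Store a value and push it toward the NVDIMM's persistence domain.
 * 'p' is assumed to point into a memory-mapped NVDIMM region. */
static void persistent_store(uint64_t *p, uint64_t value)
{
    *p = value;      /* the store initially lands in the volatile CPU cache */
    _mm_clwb(p);     /* CLWB writes the cache line back toward the DIMM     */
    _mm_sfence();    /* SFENCE orders the flush ahead of later stores       */
}

Without the flush and the fence, the data could still be sitting in the volatile CPU cache when power fails; enforcing this ordering is exactly the kind of instruction-set support the post describes.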

NGD’s New “In-Situ Processing” SSD

Start-up NGD Systems (formerly NxGenData) has just announced the availability of an SSD with in situ processing – that is, the SSD can actually process data rather than simply store it.  The new “Catalina 2” SSD is said to have the ability to run advanced applications directly on the drive.

NGD tells us that the SSD, which comes in both U.2 and AIC (PCIe add-in card) formats, is currently available for purchase.

If your memory is long enough you may recall that The SSD Guy wrote a post four years ago about something like this.  At the 2013 Flash Memory Summit, Micron Technology delivered a keynote detailing a research project in which it reprogrammed SSDs so that each SSD in a system could perform basic database management functions.

Although Micron demonstrated significant advantages of this approach, nobody, not even Micron, has followed through with a product until now.

NGD briefed me and explained that the data explosion expected with the Internet of Things will not Continue reading

An NVDIMM Primer (Part 1 of 2)

NVDIMMs are gaining interest lately, so The SSD Guy thought it might be worthwhile to explain both what they are and how NVDIMM nomenclature works.

As I was writing it I noticed that the post got pretty long, so I have split it into two parts.  The first part explains what an NVDIMM is and defines the names for today’s three kinds of NVDIMM.  The second part tells about software changes used to support NVDIMMs in BIOS, operating systems, and even processor instruction sets.  It also discusses the problem of security.

In case the name is unfamiliar, NVDIMM stands for “Nonvolatile Dual-Inline Memory Module.”  Standard computer memory – DRAM – is inserted into the system in the DIMM form factor, but DRAM loses its data when power is removed.  The NVDIMM is nonvolatile, or persistent, so its data remains intact despite a loss of power.  This takes some effort and always costs more for reasons that will be explained shortly.

Although it might seem a little odd to discuss memory in a forum devoted to SSDs, which are clearly storage, the NVDIMM is a storage device, so it rightly Continue reading

IBM Aligns Itself with High Speed NVMe-based Storage

IBM has announced that it is developing Non-Volatile Memory Express (NVMe) solutions to provide significantly lower latency storage.

NVMe is an interface protocol designed to replace the established SAS and SATA interfaces that are currently used for hard drives and SSDs. Coupled with the PCIe hardware backplane, NVMe uses parallelism and high queue depths to significantly reduce delays caused by data bottlenecks and move higher volumes of data within existing flash storage systems.
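To make the queue-depth point concrete, below is a minimal sketch of deep-queue I/O using Linux’s libaio (link with -laio).  The device path and the queue depth of 32 are illustrative assumptions, not anything IBM has specified, and error checks are largely elided; the point is that many commands are queued before a single submission call, a degree of parallelism that a one-command-at-a-time stack cannot express.

#define _GNU_SOURCE          /* for O_DIRECT */
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define QUEUE_DEPTH 32
#define BLOCK_SIZE  4096

int main(void)
{
    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);  /* hypothetical device */
    if (fd < 0) { perror("open"); return 1; }

    io_context_t ctx = 0;
    io_setup(QUEUE_DEPTH, &ctx);

    struct iocb cbs[QUEUE_DEPTH], *cbp[QUEUE_DEPTH];
    for (int i = 0; i < QUEUE_DEPTH; i++) {
        void *buf;
        posix_memalign(&buf, BLOCK_SIZE, BLOCK_SIZE);
        /* Queue 32 reads before submitting anything. */
        io_prep_pread(&cbs[i], fd, buf, BLOCK_SIZE, (long long)i * BLOCK_SIZE);
        cbp[i] = &cbs[i];
    }
    io_submit(ctx, QUEUE_DEPTH, cbp);   /* one call submits 32 reads at once */

    struct io_event events[QUEUE_DEPTH];
    io_getevents(ctx, QUEUE_DEPTH, QUEUE_DEPTH, events, NULL);

    io_destroy(ctx);
    close(fd);
    return 0;
}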

IBM has set itself to the task of optimizing the entire storage hierarchy, from the applications software to flash storage hardware, and is re-tooling the end-to-end storage stack to support NVMe. The company recognized years ago that both hardware and software would need to be redesigned to satisfy the needs of ultra-low latency data processing.

The company last year released products with Continue reading

Comparing Wear Figures on SSDs

I have been receiving questions lately from people who are puzzled when companies use different parameters than their competitors use to specify the endurance of their SSDs.  How do you compare one against the other?  Some companies even switch from one parameter to another to define the endurance of different SSDs within their product line.

I have found that Intel uses three different endurance measures for its products: DWPD (drive writes per day), TBW (terabytes written), and GB/day.

There’s no real difference between these measures: each is simply a different way of stating how many times each of the SSD’s locations can be overwritten before the drive has gone past its warrantied life.

The relationships between these three measures are illustrated in this post’s graphic.  You can click on it to see an expanded version.  It’s all pretty simple.  We’ll spell out the relationships in detail below, but in brief, if you want to compare Continue reading
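For readers who want to run the numbers themselves, here is a minimal sketch of the standard arithmetic relating the three measures.  It assumes capacity in gigabytes, a warranty period in years, 1TB = 1,000GB, and 365 days per year; the 400GB, 3 DWPD, 5-year example is hypothetical, not taken from any Intel datasheet.

#include <stdio.h>

/* TBW implied by a DWPD rating over the warranty period. */
static double tbw_from_dwpd(double dwpd, double capacity_gb, double years)
{
    return dwpd * capacity_gb * years * 365.0 / 1000.0;
}

/* GB/day implied by a DWPD rating. */
static double gb_per_day_from_dwpd(double dwpd, double capacity_gb)
{
    return dwpd * capacity_gb;
}

/* DWPD implied by a TBW rating. */
static double dwpd_from_tbw(double tbw, double capacity_gb, double years)
{
    return tbw * 1000.0 / (capacity_gb * years * 365.0);
}

int main(void)
{
    /* Hypothetical 400GB SSD rated at 3 DWPD over a 5-year warranty. */
    printf("%.0f TBW\n",    tbw_from_dwpd(3.0, 400.0, 5.0));     /* 2190 */
    printf("%.0f GB/day\n", gb_per_day_from_dwpd(3.0, 400.0));   /* 1200 */
    printf("%.1f DWPD\n",   dwpd_from_tbw(2190.0, 400.0, 5.0));  /* 3.0  */
    return 0;
}

As the example shows, the three figures are interchangeable once the drive’s capacity and warranty period are known.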

Extreme ECC Enables Big SSD Advances

A new and highly-efficient error correction scheme has recently been revealed by a joint university research team.  The SSD Guy has learned that this largely-overlooked research, performed by a cross-university team from the University of North by Northeast Wales in the UK (UN-NeW) and Poland’s Trzetrzelewska University, could bring great economies to SSD manufacturers and all-flash array (AFA) companies.

Dr. Peter Llanfairpullguryngyllgogeryohuryrndrodullllantysiliogogogoch of UN-NeW, who generally shortens his name to Llanfairpullguryngyll, and Dr. Agnieszka Włotrzewiszczykowycki of Trzetrzelewska University have determined that today’s more standard ECC engines can be dramatically improved upon, both to increase available storage for a given price and to accelerate throughput.  This is achieved through the use of new and highly complex algorithms that differ radically from current ECC approaches, which are simply linear improvements upon past algorithms.

According to Dr. Włotrzewiszczykowycki: “The beauty of semiconductors is that Moore’s Law not only allows Continue reading

Intel Pits Optane SSDs Against NAND SSDs

Only a week after announcing its Optane enterprise SSDs, Intel has launched M.2-format Optane SSDs for end users.  It appears that we are at the onset of an Optane surge.

These SSDs communicate over the PCIe bus, bringing more of 3D XPoint’s performance to the user than a SATA interface would.

Pricing is $44 for a 16GB module and $77 for 32GB.  That’s $2.75 and $2.40 (respectively) per gigabyte, or about half the price of DRAM.  Intel says that these products will ship on April 24.

What’s most interesting about Intel’s Optane pitch is that, with the slogan “Get the speed, keep the capacity,” the company appears to be telling the world that SSDs are no longer important.  This message is designed to directly address the quandary that faces PC buyers when considering an SSD: Do they want an SSD’s speed so much that they are willing to accept either Continue reading

NGD’s 24TB SSD Is Just The First Step

With the tagline “Bringing intelligence to storage,” start-up NGD Systems, formerly known as NexGen Data, has announced a 24-terabyte SSD that the company claims to be the highest-capacity PCIe/NVMe device available.

The read-optimized Catalina SSD employs a lot of proprietary NGD technology: variable-rate LDPC error correction, unique DSP (digital signal processing) algorithms, and an “elastic” flash translation layer (FTL), all embodied in an NGD-proprietary controller.  This proprietary technology allows Catalina to offer enterprise performance and reliability while using TLC flash and less DRAM than other designs.

NGD claims that the product is already shipping and is being qualified by major OEMs.

Based on some of the company’s presentations at past years’ Flash Memory Summits, the controller has been carefully balanced to optimize cost, throughput, and heat.  This last is a bigger problem than most folks would imagine.  At the 2013 Hot Chips conference a former Violin Memory engineering manager told the audience Continue reading

Intel Announces Optane SSDs for the Enterprise

This week Intel announced the Optane SSD DC P4800X Series, new enterprise SSDs based on the company’s 3D XPoint memory technology, which Intel says is the first new memory technology to be introduced since 1989.  The technology was introduced to fill a price/performance gap that might impede Intel’s sales of high-performance CPUs.

Intel was all aglow with the promise of performance, claiming that the newly-released SSDs offer: “Consistently amazing response time under load.”

Since the early 1990s Intel has realized that the platform’s performance needs to keep pace with the ongoing performance increases of its new processors.  A slow platform will limit the performance of any processor, and if customers don’t see any benefit from purchasing a more expensive processor, then Intel will be unable to keep its processor prices high.

Recently NAND flash SSDs have helped Intel to improve the platform’s speed, as did the earlier migration of Continue reading