The Storage Developer Conference in September gave a rare glimpse into two very different directions that SSD architectures are pursuing. While some of the conference’s presentations touted SSDs with increasing processing power (Eideticom, NGD, Samsung, and ScaleFlux), other presentations advocated moving processing power out of the SSD and into the host server (Alibaba, CNEX, and Western Digital).
Why would either of these make sense?
A standard SSD has a very high internal bandwidth that encounters a bottleneck as data is forced through a narrower interface. It’s easy to see that an SSD with 20+ NAND chips, each with an 8-bit interface, could access 160 or more bits simultaneously. Since there’s already a processor inside the SSD, why not open it to external programming so that it can perform certain tasks within the SSD itself and harness all of that bandwidth?
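As a rough illustration of that bottleneck, the internal-versus-external bandwidth gap can be estimated. The die count comes from the post; the per-pin transfer rate and host link speed below are assumptions for the sake of the arithmetic:

```python
# Rough estimate of an SSD's internal vs. external bandwidth.
# The per-pin rate and host link speed are illustrative assumptions.

NAND_DIES = 20          # dies transferring in parallel (the post's "20+")
BITS_PER_DIE = 8        # each die has an 8-bit interface -> 160 bits total
MT_PER_SEC = 400e6      # assumed ONFI-class transfer rate per pin

# Aggregate internal bandwidth: 160 bits = 20 bytes per transfer cycle.
internal_GBps = NAND_DIES * BITS_PER_DIE / 8 * MT_PER_SEC / 1e9

# Assumed host interface, roughly a PCIe 3.0 x4 link.
host_GBps = 4.0

print(f"Internal: {internal_GBps:.0f} GB/s, host link: {host_GBps:.0f} GB/s")
print(f"Internal bandwidth exceeds the host link by {internal_GBps / host_GBps:.1f}x")
```

Even with these conservative assumptions the internal bandwidth outruns the host link, which is the argument for doing work inside the drive.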
Example tasks would include Continue reading “SSDs Need Controllers with More, NO! Less Power”
My friend and associate Eden Kim of Calypso Systems has published a new white paper on real workloads for SSDs.
This is the company that has helped the Storage Networking Industry Association (SNIA) develop performance tests for SSDs that get past the issue that plagues SSD users: yes, a drive does well when it’s new, but how will it perform after a year or two of service?
Calypso’s new white paper is entitled Datacenter Server Real World Workloads. This document analyzes real-life datacenter server workloads and performance to provide important insight into how an SSD might perform in actual environments rather than under synthesized workloads. It compares data center class SSDs against SAS HDDs, taking much of the guesswork out of IOPS requirements, endurance needs, and the like, by comparing the measured activity over 24 hours of a 2,000-outlet retail chain web portal running SQL.
The tests in the paper represent a Continue reading “Getting the Most from Data Center SSDs”
This post is the second of a two-part SSD Guy series outlining the nonvolatile DIMM or NVDIMM. The first part explained what an NVDIMM is and how they are named. This second part describes the software used to support NVDIMMs (BIOS, operating system, and processor instructions) and discusses issues of security.
Today’s standard software boots a computer under the assumption that the memory at boot-up contains random bits — this needed to be changed to support NVDIMMs. The most fundamental of these changes was to the BIOS (Basic Input/Output System), the code that “wakes up” the computer.
The BIOS is responsible for detecting all of the computer’s hardware and installing the appropriate drivers, after which it loads the bootstrap program from the mass storage device into the DRAM main memory. When an NVDIMM is used the BIOS must Continue reading “An NVDIMM Primer (Part 2 of 2)”
NVDIMMs are gaining interest lately, so The SSD Guy thought it might be worthwhile to explain both what they are and how NVDIMM nomenclature works.
As I was writing it I noticed that the post got pretty long, so I have split it into two parts. The first part explains what an NVDIMM is and defines the names for today’s three kinds of NVDIMM. The second part tells about software changes used to support NVDIMMs in BIOS, operating systems, and even processor instruction sets. It also discusses the problem of security.
In case the name is unfamiliar, NVDIMM stands for “Nonvolatile Dual-Inline Memory Module.” Standard computer memory – DRAM – is inserted into the system in the DIMM form factor, but DRAM loses its data when power is removed. The NVDIMM is nonvolatile, or persistent, so its data remains intact despite a loss of power. This takes some effort and always costs more for reasons that will be explained shortly.
Although it might seem a little odd to discuss memory in a forum devoted to SSDs, which are clearly storage, the NVDIMM is a storage device, so it rightly Continue reading “An NVDIMM Primer (Part 1 of 2)”
Sometimes it’s enlightening to compare several viewpoints on similar data. At yesterday’s SNIA Persistent Memory Summit a number of presentations provided interesting overlapping views on certain subjects.
One of particular interest to The SSD Guy was latency vs. IOPS. Tom Coughlin of Coughlin Associates and I presented the findings from our recently-published IOPS survey report and in Slide 19 displayed the basic chart behind this post’s graphic (click to enlarge, or, better yet, right-click to open in a new tab). This chart compares how many IOPS our respondents said they need for the storage in their most important application, and compared that to the latency they required from this storage. For comparison’s sake we added a reference column on the left to roughly illustrate the latency of various standard forms of storage and memory.
You can see that we received a great variety of inputs spanning a very wide range of IOPS and latency needs, and that these didn’t all line up as neatly as we had anticipated. One failing of this chart format is that it doesn’t account for multiple replies for the same IOPS/latency combination: had we been able to include that, the chart would have shown a clearer trendline running from the top left to the lower right. Instead we have a band that broadly follows that upper-left to lower-right trend.
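One way to recover that clearer trend is to count duplicate replies before plotting, so each point can be weighted by how many respondents chose that combination. A minimal sketch — the survey responses below are invented for illustration, not data from our report:

```python
# Count duplicate (IOPS, latency) survey replies so each chart point
# can be sized by how many respondents gave that combination.
# The response data below is invented for illustration.
from collections import Counter

responses = [
    (10_000, 1.0), (10_000, 1.0), (100_000, 0.1),
    (1_000, 10.0), (100_000, 0.1), (100_000, 0.1),
]  # (required IOPS, required latency in ms)

weights = Counter(responses)
for (iops, latency_ms), count in sorted(weights.items()):
    print(f"{iops:>8} IOPS @ {latency_ms} ms: {count} replies")
```

Feeding those counts into a weighted fit, rather than plotting each unique point once, is what would sharpen the band into a trendline.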
Two other speakers presented the IOPS and latency that could be Continue reading “Latency, IOPS & NVDIMMs”
This Sunday (Sept. 20, 2015) I will be presenting my company’s findings on the 3D XPoint memory that was introduced by Intel and Micron in July. I will be speaking at the Storage Networking Industry Association (SNIA) Storage Developer Conference (SDC) Pre-Conference Primer. You can click the name to be taken to the agenda.
This won’t be the only talk about persistent memory technology at the conference. Prior to my presentation storage consultants Tom Coughlin and Ed Grochowski will give an overview of advances in nonvolatile memories, and following my presentation will be two Intel talks.
Intel will be covering this new technology a lot during the conference. Of a total of 120 presentations at the conference and pre-conference primer, Intel will be presenting nine, seven of which directly name persistent memory or nonvolatile memory in the title. Other firms will also be talking about NVM: AgigA, Calypso, HP, Pure Storage, and SMART Modular. Even Microsoft alludes to it in a couple of its presentation titles. Persistent memory is a hot issue.
So, the question for readers of The SSD Guy blog is: “Will this do away with SSDs?”
This is a question that was Continue reading “3D XPoint Memory at the Storage Developer’s Conference”
There’s been a lot of interest in NVRAM recently. This technology has been lurking in the background for decades, and suddenly has become very popular.
What is NVRAM? Quite simply, it’s DRAM or SRAM that has a back-up flash memory, a small controller, and a battery or super-capacitor. During operation the DRAM or SRAM is used in a system the same way that any DRAM or SRAM would be used. When power is interrupted the controller moves all of the data from the DRAM or SRAM to the flash using the backup power from the battery or super-capacitor. When power is restored, the controller moves the contents of the flash back into the SRAM or DRAM and the processor can resume operation where it left off.
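The power-loss sequence just described can be sketched as a simple state machine. The class and method names here are illustrative, not taken from any real NVRAM controller:

```python
# Minimal simulation of an NVRAM module's backup/restore cycle:
# DRAM contents are copied to flash on power loss (using stored
# backup energy) and copied back when power returns.
# All names here are illustrative.

class NVRAM:
    def __init__(self, size):
        self.dram = [0] * size      # volatile working memory
        self.flash = [0] * size     # nonvolatile backup copy
        self.powered = True

    def write(self, addr, value):
        assert self.powered, "no writes while powered down"
        self.dram[addr] = value

    def power_fail(self):
        # Controller drains the battery/super-cap to save DRAM to flash.
        self.flash = list(self.dram)
        self.dram = [0] * len(self.dram)   # DRAM contents decay
        self.powered = False

    def power_restore(self):
        # Controller restores the flash contents back into DRAM.
        self.dram = list(self.flash)
        self.powered = True

nv = NVRAM(4)
nv.write(0, 42)
nv.power_fail()
nv.power_restore()
print(nv.dram[0])   # the data survives the power cycle: 42
```

The key design point is that the flash is never on the normal data path; it exists only for the save/restore cycle, which is why the module performs like ordinary DRAM or SRAM during operation.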
In some ways it’s storage and in some ways it’s memory, so Continue reading “Where does NVRAM Fit?”
We have just finished up a webcast (now available for replay) that gives a preview of five of the solid state storage presentations that will be given at the upcoming Storage Plumbing and Data Engineering Conference – SPDEcon for short – hosted by the Storage Networking Industry Association (SNIA) on June 10-12 at the Hyatt Regency hotel in Santa Clara, California.
Our webcast, titled SPDEcon Solid State Preview, presents snippets of Continue reading “SPDEcon Sneak Peek”
Earlier today Tom Coughlin and I presented a BrightTalk webinar in conjunction with the Storage Networking Industry Association (SNIA) to discuss our joint report: How Many IOPS is Enough?
The report is based upon a survey that asked IT managers about their enterprise IOPS requirements. The webinar gives a taste of the report’s contents, and explains the survey methodology. During the course of the webinar and at the end Tom and I answered a number of listener questions relating to the content.
The presentation also includes a little plug for SNIA’s client IOPS survey, in which participants download and run a program called the Workload I/O Capture Program, or “WIOCP.”
A replay of this webinar is available on the BrightTalk website.
The presentation was well received by our audience. Have a listen.
The following is excerpted from an Objective Analysis Brief e-mailed to our clients on 15 April, 2013:
On April 11 IBM kicked off “The IBM Flash Ahead Initiative”, committing to spend more than $1 billion for flash systems and software R&D and to open twelve IBM Flash Centers of Competency around the world staffed with flash experts armed with flash systems to help clients test drive flash in their own situations.
This follows from IBM’s August 2012 agreement to acquire privately-held Texas Memory Systems (TMS), a very low profile manufacturer of high-performance flash-based memory arrays and PCIe SSDs. TMS is the world’s oldest SSD maker, founded in 1976 to manufacture RAM-based replicas of HDDs. About four years ago TMS used its Continue reading “IBM to Invest $1B in Flash Promotion”