This post is the second of a two-part SSD Guy series outlining the nonvolatile DIMM or NVDIMM. The first part explained what an NVDIMM is and how they are named. This second part describes the software used to support NVDIMMs (BIOS, operating system, and processor instructions) and discusses issues of security.
Today’s standard software boots a computer under the assumption that memory contains random bits at power-up; this had to change to support NVDIMMs. The most fundamental of these changes was to the BIOS (Basic Input/Output System), the code that “wakes up” the computer.
The BIOS is responsible for detecting all of the computer’s hardware and installing the appropriate drivers, after which it loads the bootstrap program from the mass storage device into the DRAM main memory. When an NVDIMM is used, the BIOS must Continue reading “An NVDIMM Primer (Part 2 of 2)”
NVDIMMs are gaining interest lately, so The SSD Guy thought it might be worthwhile to explain both what they are and how NVDIMM nomenclature works.
As I was writing it I noticed that the post got pretty long, so I have split it into two parts. The first part explains what an NVDIMM is and defines the names for today’s three kinds of NVDIMM. The second part describes the software changes used to support NVDIMMs in the BIOS, operating systems, and even processor instruction sets. It also discusses the problem of security.
In case the name is unfamiliar, NVDIMM stands for “Nonvolatile Dual-Inline Memory Module.” Standard computer memory – DRAM – is inserted into the system in the DIMM form factor, but DRAM loses its data when power is removed. The NVDIMM is nonvolatile, or persistent, so its data remains intact despite a loss of power. This takes some effort and always costs more for reasons that will be explained shortly.
Although it might seem a little odd to discuss memory in a forum devoted to SSDs, which are clearly storage, the NVDIMM is a storage device, so it rightly Continue reading “An NVDIMM Primer (Part 1 of 2)”
I have just added a new white paper onto the Objective Analysis website: Matching Flash to the Processor – Why Multithreading Needs Parallelized Flash.
This document examines the evolution of today’s CPUs, whose clock frequencies have stopped increasing and which now exploit parallelism to scale performance. Multiple DRAM channels have also been added to performance computing to bring parallelism to the memory channel.
Storage hasn’t kept pace with this move to parallelism and that is limiting today’s systems.
New NAND flash DIMMs recently introduced by Diablo, SanDisk, and IBM provide a reasonable approach to adding parallel flash to a system on its fastest bus – the memory channel. This white paper shows that storage can be scaled to match the processor’s growing performance by adding flash DIMMs to each of the many DRAM buses in a performance server.
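The scaling argument can be illustrated with a toy model: if one flash DIMM sits on each DRAM channel, aggregate storage bandwidth grows linearly with the channel count. A minimal sketch of that idea follows; the per-DIMM bandwidth figure and channel counts are illustrative assumptions, not numbers from the white paper:

```python
# Toy model: aggregate flash bandwidth scales with the number of DRAM
# channels populated with flash DIMMs. The 1 GB/s per-DIMM figure is an
# illustrative assumption, not a measured specification.

def aggregate_bandwidth(channels: int, dimm_bw_gb_s: float) -> float:
    """Total storage bandwidth with one flash DIMM per DRAM channel."""
    return channels * dimm_bw_gb_s

# A hypothetical performance server with up to 8 DRAM channels:
for ch in (1, 4, 8):
    print(f"{ch} channel(s): {aggregate_bandwidth(ch, 1.0):.1f} GB/s")
```

The point of the sketch is simply that, unlike a single SATA or SAS port, the memory channel multiplies as the processor grows, so flash attached there scales with it.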
The white paper is downloadable for free from the Objective Analysis home page. Have a look.
On Thursday IBM announced its X6 product family, the sixth generation of the company’s successful EXA server architecture. A less-heralded part of the introduction was the company’s new eXFlash memory-channel storage, or eXFlash DIMM, which is offered as one of many flash options available to X6 users.
Close followers of The SSD Guy already know that I am a serious advocate of putting flash onto the memory bus. Why slow the technology down by Continue reading “IBM Launches Flash DIMMs”
Today NAND flash is being shoehorned into HDD formats simply because it is persistent – the data doesn’t disappear when the lights go out. This approach fails to take advantage of NAND’s greatest strength – its low cost relative to DRAM – and this prevents it from fully meeting the needs of most data centers.
Since 2004 NAND has been cheaper than DRAM, and today its price per gigabyte is an order of magnitude lower than that of DRAM. NAND is cheaper and slower than DRAM, and HDD is cheaper and slower than NAND.
A role better suited to NAND flash technology is Continue reading “White Paper: Using Flash as Memory”
The following is excerpted from an Objective Analysis Alert e-mailed to our clients on 2 July, 2013:
SanDisk Corporation announced on 2 July 2013 an agreement to acquire SMART Storage Systems, the SSD arm of SMART Modular Technologies, for $307 million in cash and equity. The transaction is expected to close in August 2013.
SMART has strong SSD technology that allows the company to ship MLC-based SSDs with endurance specifications superior to those of some SLC SSDs. The SSD maker had shipments of about $25M in its most recent quarter.
The SMART acquisition will be the fourth Continue reading “SanDisk to Acquire SMART Storage”
SSD-watchers have expressed some concern over the last few years that SSDs cannot be manufactured using advanced NAND flash process geometries. This is because these parts have lower endurance and a larger number of bit errors than NAND made using less-advanced processes – the tighter the process, the shorter the flash’s life, and the more errors it will have.
Fortunately these concerns seem to be Continue reading “19nm & 20nm SSDs Arrive!”
SMART Storage Systems has introduced a new enterprise-class SSD that the company says: “increases the endurance of cMLC Flash to a level that makes SLC drives obsolete.” That’s a pretty hefty claim!
The new Optimus Ultra+ SSD is specified at 100K read IOPS and 60K write IOPS through its 6Gb/s SAS interface. With capacities ranging from 100GB to 800GB, this SSD supports up to 50 full drive writes per day over its 5-year lifespan, double the endurance of the company’s Optimus Ultra, which was introduced in February. That’s quite something for an MLC-based SSD.
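To put that endurance rating in perspective, a drive-writes-per-day figure converts to total bytes written with simple arithmetic. The sketch below uses the capacity and lifespan quoted above and assumes decimal gigabytes:

```python
# Convert a drive-writes-per-day (DWPD) rating into total terabytes
# written over the drive's rated lifespan. Figures are taken from the
# Optimus Ultra+ specifications quoted above.

def lifetime_writes_tb(capacity_gb: float, dwpd: float, years: float) -> float:
    """Total terabytes written: capacity x writes per day x days of service."""
    days = years * 365
    return capacity_gb * dwpd * days / 1000  # GB -> TB

# 800GB drive, 50 full drive writes/day, 5-year lifespan:
print(lifetime_writes_tb(800, 50, 5))  # 73000.0 TB, i.e. 73 petabytes
```

Seventy-three petabytes of writes from an MLC drive illustrates why the company feels comfortable comparing it to SLC products.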
SMART has tapped its Guardian technology to reap SLC benefits from MLC flash through enhanced external and internal algorithms. Like all other SSD makers and SSD controller makers, SMART has focused a lot of attention on error correction, DSP, and other means of correcting errors externally to the flash. The company has also partnered with Continue reading “SMART Optimus Ultra+ SSD: SLC Performance Using MLC Flash”
On Monday, December 13, SandForce introduced SSD controllers designed specifically for cloud computing applications.
You might wonder what is so different about cloud applications that they need an SSD controller of their own. SandForce makes some interesting points:
- Cloud applications need low latency
- Cloud computing centers, like client SSDs, need a lot of capacity at a very low price Continue reading “SandForce: The Cloud needs Different SSDs”
There’s a lot of “Fear, Uncertainty, and Doubt” – FUD – circulating about SSDs and their penchant for failure. NAND flash wears out after a set number of erase/write cycles, a specification known as the flash’s endurance.
While some caution is warranted, a good understanding of how SSDs really behave will help to allay a lot of this concern. Continue reading “What Happens when SSDs Fail?”