Jim Handy

NGD’s New “In-Situ Processing” SSD

Start-up NGD Systems (formerly NxGenData) has just announced the availability of an SSD with in situ processing – that is, the SSD can actually process data rather than simply store it.  The new “Catalina 2” SSD is said to have the ability to run advanced applications directly on the drive.

NGD tells us that the SSD, which comes in both U.2 and AIC (PCIe add-in card) formats, is currently available for purchase.

If your memory is long enough you may recall that The SSD Guy wrote a post four years ago about something like this.  At the 2013 Flash Memory Summit, Micron Technology delivered a keynote detailing a research project in which it reprogrammed SSDs so that each SSD in a system could perform basic database management functions.

Although Micron demonstrated significant advantages of this approach, nobody, not even Micron, has followed through with a product until now.
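Neither Micron’s research firmware nor NGD’s programming interface is publicly documented, so the toy model below only sketches the idea behind in situ processing: ship a filter to the drive and let just the matching records cross the bus.  All of the class and method names are hypothetical.

```python
# A toy model of in-situ processing.  The drive runs a filter locally and
# returns only matching records instead of shipping everything to the host.
# Class and method names are illustrative, not NGD's (or Micron's) API.

class ConventionalSSD:
    def __init__(self, records):
        self.records = records            # data at rest on the drive

    def read_all(self):
        return list(self.records)         # every record crosses the bus


class InSituSSD(ConventionalSSD):
    def query(self, predicate):
        # The predicate runs on the drive's own processor; only the
        # matching records travel back over PCIe.
        return [r for r in self.records if predicate(r)]


records = [{"id": i, "temp": 20 + (i % 50)} for i in range(100_000)]

# Conventional: the host fetches all 100,000 records, then filters.
host_side = [r for r in ConventionalSSD(records).read_all() if r["temp"] > 65]

# In situ: the drive filters; only ~8,000 records cross the bus.
drive_side = InSituSSD(records).query(lambda r: r["temp"] > 65)

assert host_side == drive_side
```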

NGD briefed me and explained that the data explosion expected with the Internet of Things will not Continue reading

An NVDIMM Primer (Part 1 of 2)

NVDIMMs are gaining interest lately, so The SSD Guy thought it might be worthwhile to explain both what they are and how NVDIMM nomenclature works.

As I was writing it I noticed that the post got pretty long, so I have split it into two parts.  The first part explains what an NVDIMM is and defines the names for today’s three kinds of NVDIMM.  The second part tells about software changes used to support NVDIMMs in BIOS, operating systems, and even processor instruction sets.  It also discusses the problem of security.

In case the name is unfamiliar, NVDIMM stands for “Nonvolatile Dual In-line Memory Module.”  Standard computer memory – DRAM – is inserted into the system in the DIMM form factor, but DRAM loses its data when power is removed.  The NVDIMM is nonvolatile, or persistent, so its data remains intact despite a loss of power.  This takes some effort and always costs more for reasons that will be explained shortly.
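To make “persistent” concrete: once the BIOS and OS know about an NVDIMM (on Linux it typically appears as a /dev/pmem device that can be mounted with the DAX option), an application reads and writes it with ordinary loads and stores rather than storage I/O.  A minimal sketch, assuming such a filesystem is already mounted at /mnt/pmem:

```python
# A minimal sketch of load/store access to persistent memory, assuming a
# Linux NVDIMM exposed through a DAX-mounted filesystem at /mnt/pmem.
# The file name is hypothetical.
import mmap
import os

PATH = "/mnt/pmem/journal.dat"

fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, 4096)

buf = mmap.mmap(fd, 4096)         # ordinary loads/stores now hit the NVDIMM
buf[0:13] = b"survives loss"      # this write persists across power loss

# DAX bypasses the page cache, but the CPU's caches still sit in the way;
# flush() stands in for the cache-flush instructions Part 2 discusses.
buf.flush()

buf.close()
os.close(fd)
```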

Although it might seem a little odd to discuss memory in a forum devoted to SSDs, which are clearly storage, the NVDIMM is a storage device, so it rightly Continue reading

IBM Aligns Itself with High Speed NVMe-based Storage

IBM has announced that it is developing Non-Volatile Memory Express (NVMe) solutions to provide significantly lower latency storage.

NVMe is an interface protocol designed to replace the established SAS and SATA interfaces currently used for hard drives and SSDs.  Layered over PCIe, NVMe uses parallelism and high queue depths to significantly reduce the delays caused by data bottlenecks and to move higher volumes of data within existing flash storage systems.
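Little’s Law gives a back-of-the-envelope feel for why those deep queues matter: throughput is simply the number of outstanding I/Os divided by per-I/O latency.  SATA’s single command queue tops out at 32 entries, while NVMe allows up to 64K commands in each of up to 64K queues.  A quick sketch, with an assumed 100 µs flash read:

```python
# Little's Law intuition for NVMe queue depth:
#   IOPS = outstanding I/Os (queue depth) / per-I/O latency
# Idealized numbers only; real devices saturate well before these lines.

def ideal_iops(queue_depth: int, latency_s: float) -> float:
    return queue_depth / latency_s

LATENCY = 100e-6                      # a nominal 100 microsecond flash read

for qd in (1, 32, 65_536):            # QD1, SATA's NCQ limit, one NVMe queue
    print(f"QD {qd:>6}: {ideal_iops(qd, LATENCY):>13,.0f} IOPS")
```

At queue depth 1 the same device delivers only 10,000 IOPS; filling a single NVMe queue raises the theoretical ceiling by more than four orders of magnitude.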

IBM has set itself the task of optimizing the entire storage hierarchy, from application software to flash storage hardware, and is re-tooling the end-to-end storage stack to support NVMe.  The company recognized years ago that both hardware and software would need to be redesigned to satisfy the needs of ultra-low latency data processing.

The company last year released products with Continue reading

Comparing Wear Figures on SSDs

I have been receiving questions lately from people who are puzzled when companies specify the endurance of their SSDs using different parameters than their competitors do.  How do you compare one against the other?  Some companies even switch from one parameter to another to define the endurance of different SSDs within their product line.

I have found that Intel uses three different endurance measures for its products: DWPD (drive writes per day), TBW (terabytes written), and GB/day.

There’s no real difference between these measures – each is simply a way of stating how many times each of the SSD’s locations can be overwritten before the drive has gone past its warranted life.
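In fact any one of them can be derived from the others using nothing more than the drive’s capacity and warranty period.  A quick sketch of the algebra, using made-up figures rather than any vendor’s actual spec:

```python
# The three endurance ratings are algebraic restatements of one another.
# The example figures below are invented for illustration.

def tbw(dwpd: float, capacity_gb: float, warranty_years: float) -> float:
    """Terabytes written over the whole warranty period."""
    return dwpd * capacity_gb * warranty_years * 365 / 1000

def gb_per_day(dwpd: float, capacity_gb: float) -> float:
    """Daily write allowance in gigabytes."""
    return dwpd * capacity_gb

def dwpd_from_tbw(tbw_rating: float, capacity_gb: float,
                  warranty_years: float) -> float:
    """Drive writes per day implied by a TBW rating."""
    return tbw_rating * 1000 / (capacity_gb * warranty_years * 365)

# A hypothetical 400 GB SSD rated at 3 DWPD over a 5-year warranty:
print(tbw(3, 400, 5))                # 2190.0 TBW
print(gb_per_day(3, 400))            # 1200 GB/day
print(dwpd_from_tbw(2190, 400, 5))   # 3.0 DWPD
```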

The relationships between these three measures are illustrated in this post’s graphic.  You can click on it to see an expanded version.  It’s all pretty simple.  We’ll spell out the relationships in detail below, but in brief, if you want to compare Continue reading

Extreme ECC Enables Big SSD Advances

A new and highly-efficient error correction scheme has recently been revealed by a joint university research team.  The SSD Guy has learned that this largely-overlooked research, performed by a cross-university team from University of North by Northeast Wales in the UK (UN-NeW) and Poland’s Trzetrzelewska University, could bring great economies to SSD manufacturers and all-flash array (AFA) companies.

Dr. Peter Llanfairpullguryngyllgogeryohuryrndrodullllantysiliogogogoch of UN-NeW, who generally shortens his name to Llanfairpullguryngyll, and Dr. Agnieszka Włotrzewiszczykowycki of Trzetrzelewska University have determined that today’s more standard ECC engines can be dramatically improved upon, both increasing available storage for a given price and accelerating throughput.  This is achieved through the use of new and highly complex algorithms that differ radically from current ECC approaches, which are simply linear improvements upon past algorithms.

According to Dr. Włotrzewiszczykowycki: “The beauty of semiconductors is that Moore’s Law not only allows Continue reading

Intel Pits Optane SSDs Against NAND SSDs

Only a week after announcing its Optane Enterprise SSDs, Intel has launched M.2-format Optane SSDs for end users.  It appears that we are at the onset of an Optane surge.

These SSDs communicate over the PCIe bus, bringing more of 3D XPoint’s performance to the user than a SATA interface would.

Pricing is $44 for a 16GB module and $77 for 32GB.  That’s $2.75 and $2.41 (respectively) per gigabyte, or about half the price of DRAM.  Intel says that these products will ship on April 24.
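For anyone checking the math, the per-gigabyte figures fall straight out of the list prices:

```python
# Per-gigabyte arithmetic from the list prices quoted above.
for price_usd, capacity_gb in ((44, 16), (77, 32)):
    print(f"{capacity_gb} GB module: ${price_usd / capacity_gb:.2f}/GB")
# 16 GB module: $2.75/GB
# 32 GB module: $2.41/GB
```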

What’s most interesting about Intel’s Optane pitch is that the company appears to be telling the world that SSDs are no longer important with its use of the slogan: “Get the speed, keep the capacity.” This message is designed to directly address the quandary that faces PC buyers when considering an SSD: Do they want an SSD’s speed so much that they are willing to accept either Continue reading

NGD’s 24TB SSD Is Just The First Step

With the tagline “Bringing intelligence to storage,” start-up NGD Systems, formerly known as NxGenData, has announced a 24 terabyte SSD that the company claims to be the highest-capacity PCIe/NVMe device available.

The read-optimized Catalina SSD employs a lot of proprietary NGD technology: variable-rate LDPC error correction, unique DSP (digital signal processing) algorithms, and an “Elastic” flash translation layer (FTL), all embodied in an NGD-proprietary controller.  This proprietary technology allows Catalina to offer enterprise performance and reliability while using TLC flash and less DRAM than other designs.

NGD claims that the product is already shipping and is being qualified by major OEMs.

Based on some of the company’s presentations at past years’ Flash Memory Summits the controller has been carefully balanced to optimize cost, throughput, and heat.  This last is a bigger problem than most folks would imagine.  At the 2013 Hot Chips conference a former Violin Memory engineering manager told the audience Continue reading

Intel Announces Optane SSDs for the Enterprise

This week Intel announced the Optane SSD DC P4800X Series, new enterprise SSDs based on the company’s 3D XPoint memory technology, which Intel says is the first new memory technology to be introduced since 1989.  The technology was introduced to fill a price/performance gap that might impede Intel’s sales of high-performance CPUs.

Intel was all aglow with the promise of performance, claiming that the newly-released SSDs offer: “Consistently amazing response time under load.”

Since the early 1990s Intel has realized that it needs the platform’s performance to keep pace with the ongoing performance increases of its new processors.  A slow platform will limit the performance of any processor, and if customers don’t see any benefit from purchasing a more expensive processor, then Intel will be unable to keep its processor prices high.

Recently NAND flash SSDs have helped Intel to improve the platform’s speed, as did the earlier migration of Continue reading

Managing SSDs Using Machine Learning

SSDs use a huge number of internal parameters to achieve a tricky balance between performance, wear, and cost.  The SSD Guy likes to compare this to a recording studio console like the one in this post’s graphic to emphasize just how tricky it is for SSD designers to find the right balance.  Imagine trying to manage all of those knobs!  (The picture is JacoTen’s Wikipedia photo of a Focusrite console.)

Vendors who produce differentiated SSDs pride themselves on their ability to fine-tune these parameters to achieve better performance or endurance than competing products.

About a year ago I suggested to the folks at NVMdurance that they might consider applying their machine learning algorithm to this problem.  (The original NVMdurance product line was described in a Memory Guy post a while ago.)  After all, the company makes a machine learning engine that tunes the numerous internal parameters of a NAND flash chip to extend the chip’s life while maintaining the specified performance.  SSD management would be a natural use of machine learning, since the best mix of parameters for both SSDs and NAND flash chips is currently found through difficult, time-consuming manual processes.
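NVMdurance has not published its algorithm, so the sketch below is only a toy that shows the shape of the problem: an automated search over a simulated bank of knobs, with a scoring function standing in for a real characterization run.  The knob names are invented.

```python
# A toy illustration of machine-tuned SSD parameters: random search over a
# simulated knob space.  This shows the shape of the problem only; it is
# not NVMdurance's algorithm, and the knob names are invented.
import random

KNOBS = ["program_voltage", "program_pulses", "read_threshold",
         "overprovisioning", "gc_aggressiveness"]

def score(settings: dict) -> float:
    """Stand-in for a real characterization run: rewards settings near a
    made-up sweet spot balancing endurance against performance."""
    return -sum((v - 0.6) ** 2 for v in settings.values())

def tune(trials: int = 10_000, seed: int = 42) -> dict:
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        settings = {k: rng.random() for k in KNOBS}   # normalized 0..1 knobs
        s = score(settings)
        if s > best_score:
            best, best_score = settings, s
    return best

print(tune())   # converges toward ~0.6 on every knob
```

Even this brute-force toy beats turning five interacting knobs by hand; a real engine replaces the random draw with something far smarter.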

Little did I know that NVMdurance’s researchers Continue reading

Latency, IOPS & NVDIMMs

Sometimes it’s enlightening to compare several viewpoints on similar data.  At yesterday’s SNIA Persistent Memory Summit a number of presentations provided interesting overlapping views on certain subjects.

One of particular interest to The SSD Guy was latency vs. IOPS.  Tom Coughlin of Coughlin Associates and I presented the findings from our recently-published IOPS survey report and in Slide 19 displayed the basic chart behind this post’s graphic (click to enlarge, or, better yet, right-click to open in a new tab).  This chart shows how many IOPS our respondents said they need for the storage in their most important application, and compares that to the latency they required from this storage.  For comparison’s sake we added a reference column on the left to roughly illustrate the latency of various standard forms of storage and memory.

You can see that we received a great variety of inputs spanning a very wide range of IOPS and latency needs, and that these didn’t all line up as neatly as we would have anticipated.  One failing of this chart format is that it doesn’t account for multiple replies for the same IOPS/latency combination: if we had been able to include that, the chart would have shown a clearer trendline running from the top left to the lower right.  Instead we have a band that broadly follows that upper-left-to-lower-right trend.
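One simple remedy would be to weight each point by the number of identical replies, for example by scaling the marker area with the reply count.  A quick sketch of the idea, using invented numbers rather than our actual survey data:

```python
# Sizing each scatter point by the number of identical IOPS/latency replies.
# The replies below are invented for illustration, not the survey's data.
from collections import Counter
import matplotlib.pyplot as plt

replies = [(1_000, 10_000), (1_000, 10_000), (10_000, 1_000),
           (10_000, 1_000), (10_000, 1_000), (100_000, 100),
           (1_000_000, 10)]           # (IOPS needed, latency in microseconds)

counts = Counter(replies)
iops = [k[0] for k in counts]
latency = [k[1] for k in counts]
sizes = [60 * n for n in counts.values()]   # marker area encodes reply count

plt.scatter(iops, latency, s=sizes, alpha=0.5)
plt.xscale("log")
plt.yscale("log")
plt.xlabel("IOPS required")
plt.ylabel("Latency required (µs)")
plt.title("Survey replies, weighted by count")
plt.show()
```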

Two other speakers presented the IOPS and latency that could be Continue reading