Storage Class Memory

Does Persistent Memory Improve Performance? Ask Oracle!

A model-by-model timeline of Oracle's Exadata product introductions with key specifications.

At last month's SNIA Persistent Memory Summit, Oracle presenter Jia Shi, Sr. Director of Exadata Development, shared some statistics on the Exadata system's history over the past ten years.  (Click on the graphic to the left to see the timeline.)  She highlighted the fact that the system's I/O performance has grown from 0.05 million IOPS ten years ago to 16 million IOPS today, a 320X improvement!  Shi said that Exadata was designed to be "the ideal database hardware with smart system software and automated management."  She has every reason to be proud of her work on this product!

The most recent iteration of the system, X8M, released last September, takes advantage of Persistent Memory (PM) in the form of Intel's new Optane DIMMs (formally called "The Intel Optane DC Persistent Memory module").  The presenter said she was diligently working on this new approach at this time last year – so diligently, in fact, that she was unable to attend the 2019 Persistent Memory Summit even though she was working on a pioneering implementation of PM technology!

While the timeline in this post’s graphic doesn’t Continue reading

SNIA Webcast: Emerging Memories

This shows the cover slide for the SNIA webcast presentation titled "What a Year it Was and Where We Need To Go in Emerging Memory"

On Tuesday, January 14, Tom Coughlin and I were featured in a BrightTalk webinar hosted by the Storage Networking Industry Association (SNIA).  A recording of this webinar has been posted so that you can view it at your convenience.

This webinar looks at emerging memories and where they now stand, giving a glance at the ground that has yet to be covered before these new memories gain widespread acceptance as persistent memory in general-purpose computing.

The content of the presentation is excerpted from a 172-page report jointly published by Objective Analysis and Coughlin Associates that covers the gamut of emerging memory technologies and the companies involved (49 of them!), and predicts how the market for these new memories should develop, with forecasts for the memories as well as for the equipment used to manufacture them.  This is the 2019 update of a very well-received report originally published in 2017.

The webinar is a little less than an hour long, and roughly the second half of it is devoted to audience questions.

You can view and listen to it by clicking HERE.

And, if you’re interested in the report, you can purchase a copy for immediate download from the Objective Analysis website by clicking HERE.

 

Where does NVRAM Fit?

AGIGARAM DDR4 NVDIMM (Photo Courtesy of AgigA Tech)

There's been a lot of interest in NVRAM recently.  This technology has been lurking in the background for decades, and suddenly has become very popular.

What is NVRAM?  Quite simply, it's DRAM or SRAM that has a back-up flash memory, a small controller, and a battery or super-capacitor.  During operation the DRAM or SRAM is used in a system the same way that any DRAM or SRAM would be used.  When power is interrupted the controller moves all of the data from the DRAM or SRAM to the flash using the backup power from the battery or super-capacitor.  When power is restored, the controller moves the contents of the flash back into the SRAM or DRAM and the processor can resume operation where it left off.
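The save/restore sequence described above can be sketched in a few lines of code.  This is a purely illustrative model – the class and method names are my own invention, not any vendor's actual interface – but it captures the division of labor: the host only ever touches the DRAM, while the on-module controller handles the flash transfers at power-loss and power-restore time.

```python
# Hypothetical sketch of an NVRAM module's save/restore cycle.
# All names are illustrative; real modules do this in hardware.

class NVRAMSketch:
    def __init__(self, size: int):
        self.dram = bytearray(size)   # volatile working memory
        self.flash = bytearray(size)  # non-volatile backup store

    def write(self, addr: int, data: bytes) -> None:
        # Normal operation: the host reads and writes the DRAM directly,
        # exactly as it would with an ordinary DIMM.
        self.dram[addr:addr + len(data)] = data

    def on_power_loss(self) -> None:
        # The controller, running on the battery or super-capacitor,
        # copies the entire DRAM image into flash.
        self.flash[:] = self.dram

    def on_power_restore(self) -> None:
        # The controller copies the saved image back into DRAM before
        # the processor resumes, so it picks up where it left off.
        self.dram[:] = self.flash


nv = NVRAMSketch(64)
nv.write(0, b"checkpoint")
nv.on_power_loss()
nv.dram = bytearray(64)        # power outage: volatile contents lost
nv.on_power_restore()
print(bytes(nv.dram[:10]))     # b'checkpoint'
```

The key property the sketch demonstrates is that the flash is never on the host's normal data path; it only comes into play across a power cycle.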

In some ways it’s storage and in some ways it’s memory, so Continue reading

White Paper: Matching Flash to the Processor

Moving flash into the memory channel to get fast parallel performance

I have just added a new white paper onto the Objective Analysis website: Matching Flash to the Processor – Why Multithreading Needs Parallelized Flash.

This document examines the evolution of today's CPUs, whose clock frequencies have stopped increasing, and which now exploit parallelism to scale performance.  Multiple DRAM channels have likewise been added to performance computing to bring parallelism to the memory subsystem.

Storage hasn’t kept pace with this move to parallelism and that is limiting today’s systems.

New NAND flash DIMMs recently introduced by Diablo, SanDisk, and IBM provide a reasonable approach to adding parallel flash to a system on its fastest bus – the memory channel.  This white paper shows that storage can be scaled to match the processor's growing performance by adding flash DIMMs to each of the many DRAM buses in a performance server.
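The parallelism argument can be illustrated with a simple striping sketch.  This is not any vendor's actual interface – the functions below are hypothetical – but they show the underlying idea: if a buffer is striped across N memory channels, each channel carries only 1/N of the transfer, so aggregate throughput scales with the channel count.

```python
# Illustrative sketch of striping a transfer across N memory-channel
# flash DIMMs; function names are hypothetical.

def stripe(data: bytes, channels: int) -> list[bytes]:
    """Round-robin the buffer across the given number of channels."""
    return [data[i::channels] for i in range(channels)]

def unstripe(stripes: list[bytes]) -> bytes:
    """Reassemble the original buffer from its per-channel stripes."""
    out = bytearray(sum(len(s) for s in stripes))
    for i, s in enumerate(stripes):
        out[i::len(stripes)] = s
    return bytes(out)

buf = b"0123456789ABCDEF"
parts = stripe(buf, 4)            # 4 channels, 4 bytes per channel
assert all(len(p) == 4 for p in parts)
assert unstripe(parts) == buf
```

With four channels each moving a quarter of the data concurrently, the transfer completes in roughly a quarter of the time a single channel would need – the same reasoning the white paper applies to flash DIMMs spread across a server's DRAM buses.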

The white paper is downloadable for free from the Objective Analysis home page.  Have a look.