Western Digital launched a storage fabric architecture in August that aims to address certain shortcomings of existing storage systems. WDC is also taking a number of steps to promote this architecture’s adoption into mainstream storage. It’s an interesting approach that The SSD Guy thought was worth discussing.
The company is working to address the problem of Continue reading “WDC’s Top-to-Bottom Storage Fabric Approach”
Nobody seems to talk about SATA SSDs much anymore, even though there’s still a vibrant market. NVMe is garnering all the attention. Of course, that should come as no surprise. While SATA is an extension of an interface designed around HDDs, NVMe was designed specifically for NAND flash.
Still, lots of Continue reading “Supporting a Dynamic SATA Market”
The m.2 SSD format has become wildly successful in the data center for use as a boot drive and even in SSD arrays. The m.2 format supports either the SATA or the NVMe interface. Something that has been missing, however, is a version of this format for high-availability (HA) systems: mission-critical systems that cannot fail, no matter what.
Until today HA systems had to Continue reading “High Availability in an m.2 Format”
Those of you who enjoy listening to podcasts may want to hear Ray Lucchesi (Silverton Consulting) and Keith Townsend (The CTO Advisor) interview The SSD Guy for their series “Greybeards on Storage.”
This interview is the series’ 86th episode covering the world of storage. These guys do a fantastic job of probing this industry with great enthusiasm and insight.
This episode is a 40-minute compendium of the sights and goings-on at the August 2019 Flash Memory Summit along with observations on the industry in general. It’s not strictly structured, and not strictly serious, but just three industry insiders having a lot of fun sharing their observations.
Some of the broad range of subjects that Continue reading “Podcast: Flash Memory Summit 2019”
Although the Trim command has been defined for nearly a decade, for some reason I have never written a post to explain it. It’s time for that to change.
Trim is something that was never required for HDDs, so it was a new command that was defined once SSDs became prevalent. The command is required because of one of those awkward encumbrances that NAND users must accommodate: Erase before write.
NAND flash bits cannot be altered the way an HDD’s can. On an HDD, a bit that’s currently set to a “1” can be re-written to a “0” and vice versa, and writing a bit either way takes the same amount of time. In NAND flash a 1 can be written to a 0, but the opposite is not possible. Instead, an entire block (made up of many 4-16KB pages) must be erased at once, after which all of its bits are set to 1. Only then can zeros be written into that block to store data. An erase is an excruciatingly slow operation, taking up to a half second to perform. Writes are faster, but they’re still slow.
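A toy model makes the asymmetry concrete. This is purely an illustrative sketch (the class and sizes are invented for the example; real NAND programs whole pages and erases whole blocks rather than individual bits):

```python
# Illustrative model of NAND erase-before-write semantics.
# Real NAND works on pages and blocks; this toy version uses
# individual bits to show the one-way 1 -> 0 write rule.

class NandBlock:
    def __init__(self, size=8):
        self.bits = [1] * size          # erased state: all ones

    def program(self, new_bits):
        # Programming can only clear bits (1 -> 0); it can never
        # set a 0 back to 1 without an erase first.
        if any(new > old for new, old in zip(new_bits, self.bits)):
            raise ValueError("erase required before any 0 -> 1 change")
        self.bits = list(new_bits)

    def erase(self):
        # The slow operation: resets the entire block to all ones.
        self.bits = [1] * len(self.bits)

blk = NandBlock()
blk.program([1, 0, 1, 0, 1, 1, 1, 1])      # legal: only 1 -> 0 changes
try:
    blk.program([1, 1, 1, 0, 1, 1, 1, 1])  # needs a 0 -> 1 change
except ValueError as err:
    print(err)
blk.erase()                                # whole block back to ones
blk.program([1, 1, 1, 0, 1, 1, 1, 1])      # now legal
```

Trim exists because of this rule: it lets the OS tell the SSD which blocks hold deleted data, so the drive can schedule those slow erases in the background instead of in the path of a new write.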
Let’s say that a program needs to Continue reading “What is an SSD Trim Command?”
IBM has announced that it is developing Non-Volatile Memory Express (NVMe) solutions to provide significantly lower latency storage.
NVMe is an interface protocol designed to replace the established SAS and SATA interfaces that are currently used for hard drives and SSDs. Coupled with the PCIe hardware backplane, NVMe uses parallelism and high queue depths to significantly reduce delays caused by data bottlenecks and move higher volumes of data within existing flash storage systems.
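The payoff from high queue depths follows from simple queuing arithmetic (Little’s Law). The latency figure below is an assumed round number for illustration, not an IBM or NVMe specification:

```python
# Little's Law: sustained IOPS = requests in flight / per-request latency.
# Deeper queues let the device service many requests concurrently.

def iops(queue_depth, latency_us):
    return queue_depth / (latency_us / 1_000_000)

latency_us = 100   # assumed per-I/O latency, for illustration only
for qd in (1, 32, 256):
    print(f"QD={qd:>3}: {iops(qd, latency_us):,.0f} IOPS")
```

At a queue depth of 1 the device sits idle between requests; NVMe’s many deep queues keep it continuously busy, which is where the throughput gains come from.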
IBM has set itself to the task of optimizing the entire storage hierarchy, from the applications software to flash storage hardware, and is re-tooling the end-to-end storage stack to support NVMe. The company recognized years ago that both hardware and software would need to be redesigned to satisfy the needs of ultra-low latency data processing.
The company last year released products with Continue reading “IBM Aligns Itself with High Speed NVMe-based Storage”
With the tagline “Bringing intelligence to storage,” start-up NGD Systems, formerly known as NexGen Data, has announced a 24 terabyte SSD that the company claims to be the highest-capacity PCIe/NVMe device available.
The read-optimized Catalina SSD employs a lot of proprietary NGD technology: variable-rate LDPC error correction, unique DSP (digital signal processing) algorithms, and an “Elastic” flash translation layer (FTL), all embodied in an NGD-proprietary controller. This proprietary technology allows Catalina to offer enterprise performance and reliability while using TLC flash and less DRAM than other designs.
NGD claims that the product is already shipping and is being qualified by major OEMs.
Based on some of the company’s presentations at past years’ Flash Memory Summits the controller has been carefully balanced to optimize cost, throughput, and heat. This last is a bigger problem than most folks would imagine. At the 2013 Hot Chips conference a former Violin Memory engineering manager told the audience Continue reading “NGD’s 24TB SSD Is Just The First Step”
Micron has announced a new line of Enterprise SSDs that it has named the 5100 family. The three members of the family are designated by different suffixes: 5100 ECO, 5100 PRO, and 5100 MAX, as listed in the table below.
The three models support the same maximum read IOPS performance, but have a wide range of write IOPS figures, endurance (measured in DWPD = Drive Writes per Day), and maximum capacities.
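DWPD converts directly into total bytes written over the warranty period. A generic conversion sketch (the drive capacity and warranty length below are invented for the example, not Micron’s figures):

```python
def terabytes_written(capacity_tb, dwpd, warranty_years):
    # Total TB written = capacity x drive-writes-per-day x warranty days.
    return capacity_tb * dwpd * warranty_years * 365

# Illustrative: a 2 TB drive rated at 1 DWPD over a 5-year warranty
print(terabytes_written(2.0, 1.0, 5))   # 3650.0 TB
```

This is why the same NAND can yield three endurance tiers: reserving more over-provisioned capacity and tuning the firmware trades usable capacity for a higher sustainable DWPD.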
All of these SSDs are based on Micron’s 3-bit 3D NAND. Micron has been aggressively ramping its 3D NAND technology since it began shipments in earnest last June.
The three SSD models are designed around the same fundamental firmware architecture, which Micron has named FlexPro, to yield consistent performance and reliability across the family. Micron also hopes that customers will be able to qualify all three models in a single effort, which would provide one more reason for users to source their Continue reading “Micron Unveils New 5100 Enterprise SSDs”
Something that has been confusing a number of people is the performance of Intel’s 3D XPoint-based SSDs. Why are they so slow?
Let me back up a little – they’re not really slow. When Intel compared its standard NAND flash based PCIe SSD to a similar SSD based on 3D XPoint memory, the XPoint model ran 7-8 times faster, which is very impressive. Intel demonstrated that at the Intel Developer Forum (IDF) last August and several times since then.
But Intel and Micron have been boasting since the technology’s introduction that 3D XPoint Memory is 1,000 times as fast as NAND flash. How do you get from a 1,000-times speed advantage down to a speed improvement of only 7-8 times?
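The short answer is that the 1,000× figure applies to the raw memory media, while an SSD’s total latency also includes controller, interface, and software overhead that doesn’t shrink when the media gets faster. A back-of-envelope sketch with invented numbers (not Intel’s specifications) shows how 1,000× at the chip collapses to single digits at the drive:

```python
# Total SSD latency = media latency + fixed path overhead
# (controller, PCIe transfer, driver, OS stack).
# All figures below are assumptions for illustration.

overhead_us = 12.0                      # assumed fixed path overhead
nand_media_us = 85.0                    # assumed NAND read latency
xpoint_media_us = nand_media_us / 1000  # a medium "1,000x faster"

nand_total = nand_media_us + overhead_us
xpoint_total = xpoint_media_us + overhead_us
print(f"SSD-level speedup: {nand_total / xpoint_total:.1f}x")
```

Once the media term nearly vanishes, the fixed overhead dominates, so the drive-level gain saturates no matter how fast the memory itself becomes.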
That’s what the graphic in this post will explain. The small rendition above is just Continue reading “Why 3D XPoint SSDs Will Be Slow”
The SSD Guy was recently asked whether HDDs would continue, at least through 2019, to remain preferable to SSDs as cost-effective high-capacity storage. The answer was “Yes”.
Longtime readers will note that I steadfastly maintain that HDD and SSD gigabyte prices are unlikely to cross for a very long time. Historically, a gigabyte of NAND flash has cost between ten and twenty times as much as a gigabyte of HDD. Let’s look at where Objective Analysis expects things to go by 2019.
Our current projections call for NAND price per gigabyte to reach 4.4 cents in 2019. I would expect HDD to still be 1/10th to 1/20th of that price, most likely 1/10th, since we expect NAND flash to be in a significant oversupply at that time and selling at cost.
If HDD prices continue to hover around $50, then a 2019 HDD price of 0.44 to 0.22 cents per gigabyte (1/10th to 1/20th of the price of NAND flash) would imply an average HDD capacity of 11-23TB.
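The implied-capacity arithmetic can be checked directly (these are the article’s own numbers, reproduced, not new projections):

```python
nand_cents_per_gb = 4.4     # projected 2019 NAND price per gigabyte
hdd_price_dollars = 50.0    # assumed typical HDD street price

for ratio in (10, 20):
    hdd_cents_per_gb = nand_cents_per_gb / ratio            # 0.44 or 0.22
    capacity_gb = hdd_price_dollars * 100 / hdd_cents_per_gb
    print(f"at 1/{ratio} of NAND's price: ~{capacity_gb / 1000:.0f} TB")
```

Dividing the fixed $50 drive price by the implied cents-per-gigabyte floor is what yields the 11-23TB average-capacity range quoted above.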
A couple of weeks ago, on December 2, 2015, Western Digital’s HGST introduced its Continue reading “Is an HDD/SSD Price Crossover Coming Soon?”