The SSD Guy has often explained to readers that the storage industry is caught between two alternatives: fast and costly, or cheap and slow. This is the key difference between SSDs and HDDs. I have recently learned of a new secret government research effort, code named “SiliDisk,” that will provide the best of both worlds by marrying flash memory with the mechanics of an HDD.
The approach is incredibly ingenious, while remaining deceptively simple: All that is required is to replace the disks in an HDD with the wafers used to manufacture NAND flash. Both are round, so there’s little engineering effort to switch from a magnetic disk to a flash wafer.
The NAND flash on the wafer is almost completely standard. The only two changes are that the chips aren’t scribed or sawn apart, saving a small sum, and that a hole must be etched through the center (which can be seen in the photo below), offsetting this savings. The HDD mechanisms are unchanged with one exception: While today’s HDDs are largely manufactured using 2.5″ and 3.5″ form factors (65mm & 95mm platters), NAND flash is exclusively produced on 300mm wafers. This means that …
This post is the second part of a four-part series in The SSD Guy blog to help explain Intel’s two recently-announced modes of accessing its Optane DIMM, formally known as the “Intel Optane DC Persistent Memory.”
The most difficult thing to understand about the Intel Optane DC Persistent Memory when used in Memory Mode is that it is not persistent. Go back and read that again, because it didn’t make any sense the first time you read it. It didn’t make any sense the second time either, did it?
Although the Trim command has been defined for nearly a decade, for some reason I have never written a post to explain it. It’s time for that to change.
Trim is something that was never required for HDDs, so it was a new command that was defined once SSDs became prevalent. The command is required because of one of those awkward encumbrances that NAND users must accommodate: Erase before write.
NAND flash bits cannot be altered the same way as HDD bits. In an HDD a bit that’s currently set to a “1” can be re-written to a “0” and vice versa, and writing a bit either way takes the same amount of time. In NAND flash a “1” can be written to a “0”, but the opposite is not the case. Instead, an entire block must be erased at once, after which all of its bits are set to “1”. Only once that has been done can “0”s be written into the block’s pages (typically 4-16K bytes each) to store data. An erase is an excruciatingly slow operation, taking several milliseconds to perform. Writes are faster, but they’re still slow.
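The one-way nature of NAND writes can be sketched in a toy model. This is purely an illustration (the tiny page size and block geometry are made up, not real NAND parameters), but it shows why a “0”-to-“1” change forces a whole-block erase:

```python
# Toy model of NAND program/erase semantics. Page and block sizes here
# are arbitrary illustrations; real geometries vary by part.
PAGE_BITS = 8  # tiny "pages" so the behavior is easy to see


class NandBlock:
    def __init__(self, pages=4):
        # An erased block reads as all 1s.
        self.pages = [[1] * PAGE_BITS for _ in range(pages)]

    def program(self, page, data):
        # Programming can only clear bits (1 -> 0), never set them (0 -> 1).
        for i, bit in enumerate(data):
            if bit == 1 and self.pages[page][i] == 0:
                raise ValueError("cannot write a 1 over a 0 without an erase")
            self.pages[page][i] &= bit

    def erase(self):
        # Erase acts on the whole block at once -- the slow operation.
        self.pages = [[1] * PAGE_BITS for _ in self.pages]


blk = NandBlock()
blk.program(0, [1, 0, 1, 0, 1, 0, 1, 0])      # fine: only 1 -> 0 transitions
try:
    blk.program(0, [1, 1, 1, 1, 1, 1, 1, 1])  # needs 0 -> 1: rejected
except ValueError as e:
    print(e)
blk.erase()                                   # whole block back to all 1s
blk.program(0, [1, 1, 1, 1, 1, 1, 1, 1])      # now the write succeeds
```

Trim fits into this picture because it tells the SSD controller which blocks hold stale data, letting it erase them in the background instead of in the critical path of a write.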
Intel recently announced two operating modes for the company’s new Optane DIMMs, formally known as “Intel Optane DC Persistent Memory.” The company has been trying to help the world to understand these two new operating modes but they are still pretty baffling to most of the people The SSD Guy speaks to. Some say that the concepts make their heads want to explode!
How does Optane’s “Memory Mode” work? How does “App Direct” Mode work? This four-part series will try to provide some answers.
Like all of my NVDIMM-related posts, this series challenges me with the question: “Should it be published in The SSD Guy, or in The Memory Guy?” This is a point of endless confusion for me, since NVDIMM and Intel’s Optane blur the lines between Memory and Storage. I have elected to post this in The SSD Guy with the hope that it will be found by readers who want to understand Optane for its storage capabilities.
Memory Mode is the easy sell for the short term. It works with all current application software without modification. It just makes it look like you have a TON of DRAM.
This model is enormously important to the future of computing, yet few people even know that it exists. It’s a fundamental change to the way that application programs access storage that will have significant ramifications to computer architecture and performance over the long term.
Here’s why: The industry is moving towards larger-scale systems that mix persistent memory with standard DRAM in a single memory address space. Persistent memory has an advantage over volatile DRAM: it maintains its data after power is removed or lost. Because of this, certain application programs will want to know which memory is volatile and which is persistent, and to take advantage of whatever persistent memory the system might provide. I say “larger-scale” systems because small systems often combine …
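The core idea of the programming model is that applications reach persistent memory with ordinary loads and stores through a memory mapping, rather than through read/write storage calls. Here is a minimal sketch of that access pattern in Python, with an ordinary temporary file standing in for a persistent-memory region (a real pmem-aware application would map a file on a DAX filesystem and use cache-flush primitives, e.g. via libpmem; everything here is illustrative):

```python
import mmap
import os
import tempfile

# An ordinary file stands in for a persistent-memory region.
path = os.path.join(tempfile.gettempdir(), "pmem_demo.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)           # reserve one 4KB "persistent" page

with open(path, "r+b") as f:
    pm = mmap.mmap(f.fileno(), 4096)  # load/store access: no read()/write() calls
    pm[0:5] = b"hello"                # a plain store into mapped memory
    pm.flush()                        # analogous to flushing CPU caches to media
    pm.close()

# The data outlives the mapping -- the "persistence" the model exposes.
with open(path, "rb") as f:
    print(f.read(5))
```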
A recent Storage Newsletter article argues that SSD prices are approaching HDD prices, and that the gap has narrowed to only a 2.7 times difference.
Upon closer inspection, though, the reader will note that this is only true at lower capacities. The narrowing price gap at lower capacities has always existed in this market. The SSD Guy was making that argument back in 2007!
This post’s graphic shows a chart from the first report ever published by Objective Analysis over a decade ago: The Solid State Disk Market – A Rigorous Look.
The point of this chart was to illustrate that, at low capacities, SSDs are cheaper, while at higher capacities HDDs provide lower-priced storage.
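That crossover falls out of simple cost arithmetic: an HDD carries a large fixed cost (motor, heads, media) plus a tiny per-gigabyte cost, while an SSD’s cost is nearly all per-gigabyte flash. The dollar figures below are hypothetical illustrations, not quotes from the report:

```python
# Back-of-envelope SSD/HDD price crossover. All dollar figures are
# made-up illustrations of the cost structure, not real prices.
HDD_FIXED = 35.0    # $: mechanics dominate regardless of capacity
HDD_PER_GB = 0.02   # $/GB
SSD_FIXED = 5.0     # $: controller + packaging
SSD_PER_GB = 0.08   # $/GB


def hdd_price(gb):
    return HDD_FIXED + HDD_PER_GB * gb


def ssd_price(gb):
    return SSD_FIXED + SSD_PER_GB * gb


for gb in (32, 128, 512, 2048):
    cheaper = "SSD" if ssd_price(gb) < hdd_price(gb) else "HDD"
    print(f"{gb:5d} GB: SSD ${ssd_price(gb):7.2f}  HDD ${hdd_price(gb):7.2f}  -> {cheaper}")
```

With these assumed numbers the lines cross at 500GB: below that the SSD is the cheaper device, above it the HDD wins, which is the shape of the chart above.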
Data centers that use centralized storage (SANs or NAS) sometimes use servers to cache stored data and thus accelerate the average speed of storage. These caching servers sit on the network between the compute servers and the storage, running a program called memcached to replicate a portion of the data held in the data center’s centralized storage. Under this form of management, more-frequently-used data is served faster because it has been copied into a very large DRAM in the memcached server.
Such systems have been undercut over the past five or more years by the growing availability of high-speed enterprise SSDs at affordable prices. Often, direct-attached storage (DAS) in the form of an SSD within each server can be used to accelerate throughput. This can provide a considerable cost/performance benefit over the memcached approach, since DRAM costs about 20 times as much as the flash in an SSD. Even though the DRAM chips within the memcached server run about three orders of magnitude faster than a flash SSD, most of that speed is lost because the DRAM communicates over a slow LAN, so the DAS SSD’s performance is comparable to that of the memcached appliance.
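The latency arithmetic behind that argument can be sketched as below. All the numbers are order-of-magnitude assumptions for illustration, not measurements:

```python
# Rough latency math for memcached-over-LAN vs. a direct-attached SSD.
# Every figure is an order-of-magnitude assumption, not a measurement.
DRAM_ACCESS_US = 0.1       # ~100 ns DRAM access in the memcached server
LAN_ROUND_TRIP_US = 200.0  # network hop to the caching appliance
SSD_READ_US = 100.0        # enterprise flash read, microseconds

memcached_total = DRAM_ACCESS_US + LAN_ROUND_TRIP_US  # remote DRAM hides behind the LAN
das_ssd_total = SSD_READ_US                           # local SSD: no network hop

print(f"memcached over LAN: ~{memcached_total:.0f} us per access")
print(f"DAS SSD:            ~{das_ssd_total:.0f} us per access")
# DRAM's ~1,000x raw speed advantage shrinks to the same order of
# magnitude once the LAN round trip is included.
```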
The Storage Developer Conference in September gave a rare glimpse into two very different directions that SSD architectures are pursuing. While some of the conference’s presentations touted SSDs with increasing processing power (Eideticom, NGD, Samsung, and ScaleFlux) other presentations advocated moving processing power out of the SSD and into the host server (Alibaba, CNEX, and Western Digital).
Why would either of these make sense?
A standard SSD has a very high internal bandwidth that encounters a bottleneck as data is forced through a narrower interface. It’s easy to see that an SSD with 20+ NAND chips, each with an 8-bit interface, could access all 160 bits simultaneously. Since there’s already a processor inside the SSD, why not open it to external programming so that it can perform certain tasks within the SSD itself and harness all of that bandwidth?
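The arithmetic makes the bottleneck concrete. The chip count comes from the paragraph above; the per-pin transfer rate and host-link figure are assumptions for illustration:

```python
# Internal-vs-interface bandwidth for the 20-chip SSD described above.
# Transfer rates are illustrative assumptions, not any vendor's specs.
NAND_CHIPS = 20
BITS_PER_CHIP = 8
NAND_MT_PER_S = 400    # assumed per-pin transfer rate, megatransfers/s

# Aggregate internal bandwidth if all 160 bits move at once, in GB/s.
internal_gb_s = NAND_CHIPS * BITS_PER_CHIP * NAND_MT_PER_S / 8 / 1000
host_iface_gb_s = 4.0  # e.g. roughly a PCIe Gen3 x4 host link

print(f"aggregate internal NAND bandwidth: {internal_gb_s:.0f} GB/s")
print(f"host interface bandwidth:          {host_iface_gb_s:.0f} GB/s")
# In-SSD compute can harness the internal figure; the host only ever
# sees what fits through the interface.
```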
Once again The SSD Guy will be playing a part in the annual Storage Visions conference which has been moved this year to the Santa Clara Hyatt Hotel adjacent to the Santa Clara Convention Center. It’s now a 2-day conference (October 22-23) and has an agenda packed with interesting subjects, speakers, and panelists.
Storage Visions’ mission is to bring together the vendors, end users, researchers and visionaries that will meet growing demand for digital storage for the “coming data tsunami.”
I will moderate a panel on an exciting new technology that is currently known by a few different names, including “In-Situ Processing,” “Computational Storage,” and “Intelligent SSDs” (iSSD). It’s a kind of SSD that uses internal processing to reduce the amount of data traffic between the server and storage. This helps get past an issue that plagues many applications which spend more time and energy moving data back and forth than they do actually processing that data.
This interview is their 70th episode covering the world of storage. These guys do a fantastic job of probing this industry with great enthusiasm and insight.
This episode is a 42-minute compendium of the sights and goings-on at last August’s Flash Memory Summit along with a number of side trips into the world of SSDs and memory chips. It’s not strictly structured, and not strictly serious, but just three industry insiders having a lot of fun sharing their observations.