There’s an idea that has been kicking around for a number of years, and it now seems to be gaining traction: use the inherent smarts and high available bandwidth within an SSD to perform functions that would normally be done by a server’s processor. This reduces the load on the processor while minimizing the amount of data that needs to make a round trip from the SSD to the processor and back for some trivial function.
Such data movement is said to consume a very … Continue reading “Computational Storage Hits the Mainstream”
There have been numerous changes to SSDs since they moved into the mainstream 15 years ago, with controllers providing increasing, then decreasing endurance levels, and offering greater, then lesser levels of autonomy. What has been missing is any ability for the system to determine the level of performance that the SSD provides.
Recently Kioxia, the company formerly known as Toshiba Memory, announced … Continue reading “What’s Software-Enabled Flash?”
Fadu, a startup out of Korea, made a big splash at the Flash Memory Summit to announce its new NVMe SSD controllers that don’t compromise speed to achieve low-power operation.
The company’s products focus on quality of service (QoS) in enterprise-style 24/7 workloads, with the aim of enabling the transition to NVMe in enterprise and hyperscale data centers, the fastest-growing segments of the SSD market. Some readers may recall that Fadu won the 2018 FMS Best-of-Show award in the “Most Innovative Flash Memory Technology” category for an earlier generation of products.
The company’s founding team comes from Samsung and Hynix, with a CEO (Jihyo Lee) from Bain Capital. Lee gave a keynote address at the Flash Memory Summit simply titled “Enterprise SSD: The Future.”
The new SSD controller, Annapurna, is a … Continue reading “Start-Up Fadu Launches New SSD Controller”
Although the Trim command has been defined for nearly a decade, for some reason I have never written a post to explain it. It’s time for that to change.
Trim is something that was never required for HDDs, so it is a new command that was defined once SSDs became prevalent. The command is required because of one of those awkward encumbrances that NAND users must accommodate: erase before write.
NAND flash bits cannot be altered the same way as HDD bits. On an HDD a bit that’s currently set to a “1” can be re-written to a “0” and vice versa, and writing a bit either way takes the same amount of time. In NAND flash a 1 can be written to a 0, but the opposite is not the case. Instead, an entire block must be erased at once, after which all of its bits are set to 1. Only once that has been done can zeros be written into the block to store data. Note that erases operate on whole blocks, while writes operate on smaller pages (typically 4-16 kilobytes). An erase is an excruciatingly slow operation, taking up to a half second to perform. Writes are faster, but they’re still slow.
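The asymmetry described above can be sketched in a few lines of code. This is an illustrative toy model only (block size, timing, and the class itself are invented for the example; real NAND geometries vary by chip):

```python
# Minimal sketch of NAND's erase-before-write constraint.
# Illustrative model only; real block/page sizes and timings differ.

ERASED = 0xFF  # after an erase, every bit in the block reads as 1

class NandBlock:
    def __init__(self, num_bytes=16):
        self.cells = [ERASED] * num_bytes

    def write(self, offset, value):
        # A program operation can only clear bits (1 -> 0); it can
        # never set a 0 back to 1 without erasing the whole block.
        self.cells[offset] &= value

    def erase(self):
        # Erase is block-wide (and, on real chips, slow);
        # all bits return to 1.
        self.cells = [ERASED] * len(self.cells)

block = NandBlock()
block.write(0, 0b10101010)
block.write(0, 0b11110000)           # only clears additional bits...
assert block.cells[0] == 0b10100000  # ...result is the AND of the two writes
block.erase()
assert block.cells[0] == ERASED      # fresh block, writable again
```

The second write silently loses data because set bits cannot be restored in place, which is exactly why the controller must erase first and why commands like Trim, which tell the drive which blocks no longer hold live data, are so valuable.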
Let’s say that a program needs to … Continue reading “What is an SSD Trim Command?”
The Storage Developer Conference in September gave a rare glimpse into two very different directions that SSD architectures are pursuing. While some of the conference’s presentations touted SSDs with increasing processing power (Eideticom, NGD, Samsung, and ScaleFlux) other presentations advocated moving processing power out of the SSD and into the host server (Alibaba, CNEX, and Western Digital).
Why would either of these make sense?
A standard SSD has a very high internal bandwidth that encounters a bottleneck as data is forced through a narrower interface. It’s easy to see that an SSD with 20+ NAND chips, each with an 8-bit interface, could access all 160 bits simultaneously. Since there’s already a processor inside the SSD, why not open it to external programming so that it can perform certain tasks within the SSD itself and harness all of that bandwidth?
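The bandwidth mismatch is easy to quantify with back-of-the-envelope arithmetic. The chip count and bus width below come from the text; the NAND transfer rate and the PCIe figure are assumptions chosen for illustration, not specs for any particular drive:

```python
# Back-of-the-envelope comparison of an SSD's internal NAND bandwidth
# against its host interface. Transfer rates are assumed values.

num_chips = 20          # NAND packages on the SSD (per the text)
bus_width_bits = 8      # each chip's I/O bus width
transfer_mt_s = 400     # assumed per-chip transfer rate, megatransfers/s

internal_bits = num_chips * bus_width_bits             # bits accessed at once
internal_gbps = num_chips * bus_width_bits * transfer_mt_s / 1000

host_gbps = 32          # e.g., PCIe 3.0 x4 is roughly 32 Gbit/s raw

print(f"{internal_bits} bits accessed in parallel")
print(f"internal ~{internal_gbps:.0f} Gbit/s vs. host ~{host_gbps} Gbit/s")
```

Under these assumptions the drive’s internal paths can move data about twice as fast as its host interface, which is the headroom that in-SSD processing tries to exploit.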
Example tasks would include … Continue reading “SSDs Need Controllers with More, NO! Less Power”
Once again The SSD Guy will be playing a part in the annual Storage Visions conference which has been moved this year to the Santa Clara Hyatt Hotel adjacent to the Santa Clara Convention Center. It’s now a 2-day conference (October 22-23) and has an agenda packed with interesting subjects, speakers, and panelists.
Storage Visions’ mission is to bring together the vendors, end users, researchers and visionaries that will meet growing demand for digital storage for the “coming data tsunami.”
I will moderate a panel on an exciting new technology that is currently known by a few different names, including “In-Situ Processing,” “Computational Storage,” and “Intelligent SSDs” (iSSD). It’s a kind of SSD that uses internal processing to reduce the amount of data traffic between the server and storage. This helps get past an issue that plagues many applications which spend more time and energy moving data back and forth than they do actually processing that data.
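The traffic savings from in-situ processing can be sketched with a simple filtering query. Everything below (the record count, record size, and predicate) is a made-up assumption for illustration:

```python
# Sketch of why in-situ processing cuts server<->SSD traffic: a query
# that needs only matching records can filter inside the drive instead
# of shipping every record to the host. All figures are invented.

records = [{"id": i, "temp": (i * 7) % 100} for i in range(10_000)]
RECORD_BYTES = 128  # assumed size of one stored record

def host_side_filter(data):
    # Conventional path: every record crosses the interface,
    # then the host CPU discards the ones it doesn't need.
    bytes_moved = len(data) * RECORD_BYTES
    hits = [r for r in data if r["temp"] > 90]
    return hits, bytes_moved

def in_situ_filter(data):
    # Computational-storage path: the SSD's controller runs the
    # predicate and only matching records cross the interface.
    hits = [r for r in data if r["temp"] > 90]
    bytes_moved = len(hits) * RECORD_BYTES
    return hits, bytes_moved

hits_a, moved_a = host_side_filter(records)
hits_b, moved_b = in_situ_filter(records)
assert hits_a == hits_b  # same answer either way
print(f"host-side: {moved_a} bytes moved; in-situ: {moved_b} bytes moved")
```

The answer is identical either way; only the number of bytes crossing the interface changes, and for selective queries the reduction can be an order of magnitude or more.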
The panel, at 8:15 Monday morning, October 22, is … Continue reading “Storage Visions Conference Coming Oct 22”
A new and highly-efficient error correction scheme has recently been revealed by a joint university research team. The SSD Guy has learned that this largely-overlooked research, performed by a cross-university team from University of North by Northeast Wales in the UK (UN-NeW) and Poland’s Trzetrzelewska University, could bring great economies to SSD manufacturers and all-flash array (AFA) companies.
Dr. Peter Llanfairpullguryngyllgogeryohuryrndrodullllantysiliogogogoch of UN-NeW, who generally shortens his name to Llanfairpullguryngyll, and Dr. Agnieszka Włotrzewiszczykowycki of Trzetrzelewska University have determined that today’s more standard ECC engines can be dramatically improved upon, both increasing available storage for a given price and accelerating throughput. This is achieved through the use of new and highly complex algorithms that differ radically from current ECC approaches, which are simply linear improvements upon past algorithms.
According to Dr. Włotrzewiszczykowycki: “The beauty of semiconductors is that Moore’s Law not only allows … Continue reading “Extreme ECC Enables Big SSD Advances”
With the tagline “Bringing intelligence to storage,” start-up NGD Systems, formerly known as NexGen Data, has announced a 24 terabyte SSD that the company claims to be the highest-capacity PCIe/NVMe device available.
The read-optimized Catalina SSD employs a lot of proprietary NGD technology: variable-rate LDPC error correction, unique DSP (digital signal processing) algorithms, and an “Elastic” flash translation layer (FTL), all embodied in an NGD-proprietary controller. This proprietary technology allows Catalina to offer enterprise performance and reliability while using TLC flash and less DRAM than other designs.
NGD claims that the product is already shipping and is being qualified by major OEMs.
Based on some of the company’s presentations at past years’ Flash Memory Summits the controller has been carefully balanced to optimize cost, throughput, and heat. This last is a bigger problem than most folks would imagine. At the 2013 Hot Chips conference a former Violin Memory engineering manager told the audience … Continue reading “NGD’s 24TB SSD Is Just The First Step”
SSDs use a huge number of internal parameters to achieve a tricky balance between performance, wear, and cost. The SSD Guy likes to compare this to a recording studio console like the one in this post’s graphic to emphasize just how tricky it is for SSD designers to find the right balance. Imagine trying to manage all of those knobs! (The picture is JacoTen’s Wikipedia photo of a Focusrite console.)
Vendors who produce differentiated SSDs pride themselves on their ability to fine-tune these parameters to achieve better performance or endurance than competing products.
About a year ago I suggested to the folks at NVMdurance that they might consider applying their machine learning algorithm to this problem. (The original NVMdurance product line was described in a Memory Guy post a while ago.) After all, the company makes a machine learning engine that tunes the numerous internal parameters of a NAND flash chip to extend the chip’s life while maintaining the specified performance. SSD management would be a natural use of machine learning since both SSDs and NAND flash chips currently use difficult and time-consuming manual processes to find the best mix of parameters to drive the design.
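The tuning problem described above can be sketched as an automated search over “knob” settings. To be clear, the knobs, ranges, and scoring function below are all invented for illustration, and NVMdurance’s actual machine learning algorithm is proprietary and not shown here; this is just the shape of the idea, replacing manual tuning with a search against an objective:

```python
# Toy sketch of machine-assisted parameter tuning: search many knob
# settings against a scoring function instead of tuning by hand.
# Knobs, ranges, and the score are invented; a real tuner would
# measure an actual drive or chip.

import random

KNOBS = {
    "program_pulse": (1, 20),       # hypothetical parameter ranges
    "read_retries": (1, 8),
    "overprovision_pct": (5, 28),
}

def score(cfg):
    # Stand-in objective trading off performance against wear.
    perf = cfg["program_pulse"] * 2 + cfg["read_retries"]
    wear = cfg["program_pulse"] ** 1.5 - cfg["overprovision_pct"] * 0.5
    return perf - wear

def random_search(trials=1000, seed=42):
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {k: rng.randint(lo, hi) for k, (lo, hi) in KNOBS.items()}
        s = score(cfg)
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

cfg, s = random_search()
print("best configuration found:", cfg)
```

A production tuner would use something far smarter than random search (and would evaluate real hardware rather than a formula), but the appeal is the same: the machine explores a parameter space no human could exhaustively test.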
Little did I know that NVMdurance’s researchers … Continue reading “Managing SSDs Using Machine Learning”
Yesterday IBM unveiled a sweeping update of its existing flash storage products. These updates cover a range of products, including IBM Storwize All Flash arrays: V7000F, V7000 Gen2+, and V5030F, the FlashSystem V9000, the IBM SAN Volume Controller (SVC), and IBM’s Spectrum Virtualize Software.
The company referred to this effort as part of a “drumbeat of flash storage announcements.” IBM has a stated goal of providing its clients with “the right flash for the right performance at the right price.”
IBM’s representatives explained that the updates were made possible by the fact that the prices of flash components have been dropping at a rapid pace while reliability is on the rise. The SSD Guy couldn’t agree more.
Here’s what IBM announced:
Starting from the low end and moving up, the V5030F entry-level/midrange array is an … Continue reading “IBM Refreshes Broad Swath of Flash Offerings”