How Controllers Maximize SSD Life – External Data Buffering

Since NAND flash is weakened by erase/write cycles, it makes sense to try to reduce those cycles to prolong the life of an SSD, right?  That’s what external data buffers are designed to do.

There are many ways to use RAM (either RAM internal to the SSD controller chip or a discrete DRAM chip on the SSD’s printed circuit card) to stage data in a way that will reduce erase/write cycles.

One is to perform a function called “Write Coalescing.”  This involves gathering several short writes to adjacent SSD sectors to turn them into a single long write from the buffer into the NAND flash.  One large write is less taxing to the chip than are several small writes.  It’s also a lot faster.

Here’s an example: in an SSD without a write buffer, a number of short writes to the same general area of the drive may not all arrive at the same time.  An early write is likely to have already been committed to the flash before a later one is received by the SSD.  When the later write is received, the flash block that contains both sectors may need to be reassigned to another block in order to perform the operation, leading to an additional erase/write cycle.
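To make the idea concrete, here is a minimal sketch of a coalescing buffer in Python; the class name, sector size, and flash-write callback are invented for illustration and are not taken from any real controller firmware.

```python
# Minimal sketch of write coalescing (illustrative names and sizes only):
# short host writes to adjacent sectors accumulate in a RAM buffer and
# are flushed to the flash as one long write per contiguous run.

SECTOR_BYTES = 512

class CoalescingBuffer:
    def __init__(self):
        self.pending = {}                    # sector number -> data held in RAM

    def write(self, sector, data):
        """Stage a short host write in RAM instead of programming flash now."""
        self.pending[sector] = data

    def flush(self, flash_write):
        """Merge each run of adjacent sectors into a single long flash write."""
        run = []
        for sector in sorted(self.pending):
            if run and sector != run[-1] + 1:          # a gap ends the current run
                flash_write(run[0], b"".join(self.pending[s] for s in run))
                run = []
            run.append(sector)
        if run:
            flash_write(run[0], b"".join(self.pending[s] for s in run))
        self.pending.clear()

# Four 512-byte host writes become one 2,048-byte write to the NAND.
buf = CoalescingBuffer()
for sector in (100, 101, 102, 103):
    buf.write(sector, bytes(SECTOR_BYTES))
buf.flush(lambda start, data: print(f"flash write at sector {start}: {len(data)} bytes"))
```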

Write coalescing is not the only way a RAM buffer can be used to reduce write traffic to the flash in an SSD.  Another approach involves buffering several successive writes to the same sector, without actually writing to the flash until several such writes have been performed.  Although the system may believe that it is repeatedly overwriting the same NAND sector, in truth the overwritten sector is in RAM until that RAM is needed for some other task.  Several hundred writes to a single sector may turn into a single write to NAND.  This means that in some cases the endurance of the NAND may be increased by a couple of orders of magnitude, which is a pretty good return for a couple of dollars’ worth of RAM.
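Here is a similarly hedged sketch of that second approach: every overwrite simply replaces the copy held in RAM, and the NAND sees a single program operation when the buffer is finally flushed.

```python
# Minimal sketch of overwrite absorption (all names are invented):
# repeated host writes to the same sector just replace the copy in RAM,
# and the NAND is programmed once, when the entry is finally flushed.

class OverwriteAbsorbingBuffer:
    def __init__(self, flash_program):
        self.flash_program = flash_program   # callback: (sector, data) -> None
        self.dirty = {}                      # sector -> latest data, RAM only
        self.host_writes = 0
        self.flash_writes = 0

    def write(self, sector, data):
        self.host_writes += 1
        self.dirty[sector] = data            # the overwrite stays in RAM

    def flush(self):
        for sector, data in self.dirty.items():
            self.flash_program(sector, data)
            self.flash_writes += 1
        self.dirty.clear()

# Three hundred overwrites of one sector turn into a single flash program.
buf = OverwriteAbsorbingBuffer(lambda sector, data: None)
for i in range(300):
    buf.write(42, f"version {i}".encode())
buf.flush()
print(buf.host_writes, "host writes ->", buf.flash_writes, "flash write")
```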

RAM is also used to assure that writes occur in full page lengths, since this is the way that NAND most efficiently performs a write.  By matching the length of a NAND write to the chip’s page length the buffer helps to reduce write traffic.  Different NAND chips have different page lengths (generally 1-4kB) so the controller has to be informed of the page length of the exact NAND chips used in the SSD.
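A sketch of the page-matching idea, assuming a hypothetical 4kB page (a real controller would be told the actual page length of the NAND it manages):

```python
# Minimal sketch of page-length matching, assuming a hypothetical 4kB page.
# Data is released to the flash only in whole pages; any partial tail
# waits in RAM until more data arrives.

PAGE_BYTES = 4096                            # assumed page length

class PageAligner:
    def __init__(self, program_page):
        self.program_page = program_page     # callback: (one full page of bytes) -> None
        self.tail = b""                      # partial page held back in RAM

    def write(self, data):
        buf = self.tail + data
        full_pages = len(buf) // PAGE_BYTES
        for i in range(full_pages):
            self.program_page(buf[i * PAGE_BYTES:(i + 1) * PAGE_BYTES])
        self.tail = buf[full_pages * PAGE_BYTES:]

# Three odd-sized writes produce exactly two full-page programs,
# with 1,024 bytes still waiting in the buffer.
pages = []
aligner = PageAligner(pages.append)
for size in (3000, 3000, 3216):
    aligner.write(bytes(size))
print(len(pages), "page programs,", len(aligner.tail), "bytes still buffered")
```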

Any of these approaches creates a risk that data in RAM will be lost during a power failure.  The work-around for this is to provide internal energy storage that can keep the SSD alive long enough to write the RAM’s contents into the NAND for an orderly shut-down.  The power for this process is commonly provided in one of three ways – it can come from a:

  1. Battery
  2. Super Capacitor
  3. Bank of Tantalum Capacitors

We won’t join the debate about which of these is best – it’s not the subject of this post.
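Whichever energy source is used, the firmware’s job when power drops is roughly the same, as the sketch below (again with invented names) suggests: stop accepting host writes and commit whatever is still sitting in RAM to the NAND before the stored energy runs out.

```python
# A sketch, with invented names, of the flush that the hold-up energy
# makes possible: on a power-fail interrupt the controller stops taking
# host writes and commits everything still in RAM to the NAND.

class WriteBuffer:
    def __init__(self):
        self.accepting_host_writes = True
        self.dirty = {7: b"unsaved", 8: b"data"}     # sectors still in RAM

def on_power_loss(write_buffer, flash_program):
    """Runs while the battery or capacitor bank keeps the SSD alive."""
    write_buffer.accepting_host_writes = False       # stop new host traffic
    for sector, data in write_buffer.dirty.items():
        flash_program(sector, data)                  # commit RAM contents to NAND
    write_buffer.dirty.clear()                       # orderly shutdown complete

on_power_loss(WriteBuffer(), lambda sector, data: print(f"committed sector {sector}"))
```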

Still, it’s clear that adding RAM as temporary storage can do a lot to lengthen flash lifetime well beyond the limit of its endurance specifications.

This post is part of a series published by The SSD Guy in September-November 2012 to describe the leading methods SSD architects use to get the longest life out of an SSD despite the limited number of erase/write cycles that NAND flash specifications guarantee.  The following list provides the names of all of these articles, and hot links to them:

Click on any of the above links to learn about how each of these techniques works.

Alternatively, you can visit the Storage Networking Industry Association (SNIA) website to download the entire series as a 20-page booklet in pdf format.

15 thoughts on “How Controllers Maximize SSD Life – External Data Buffering”

  1. Good evening! I just discovered your site and I’m really enjoying the content! Thanks for your contribution.

    So reading SSD reviews from reviewers like Chris Ramseyer (he writes for Tom’s Hardware and TweakTown) and Anandtech, it seems obvious that consumer-level DRAM-less SSDs suffer from substantially lower endurance and performance, especially if used as an OS drive.

    Is a DRAM-less SSD less reliable? As in are they more likely to fail? I’m curious because I wonder how the controller is able to correct errors or wear the flash evenly without a DRAM buffer, especially if the translation table is stored directly on the NAND flash itself.

    Is a DRAM-less controller (such as the Phison S11 or the SM2258XT) engineered to overcome the inherent weakness of these types of SSDs? I know that some drives can use Host Memory Buffer and/or store the map in the static SLC cache.

    Thanks in advance!

    Chris

    1. Chris, DRAM-less SSDs use the NAND (as you explained) and the internal SRAM within the controller chip to fulfill those roles that the DRAM would normally serve. One way to look at this is that it’s like the DRAM just got a lot smaller.

      I like to argue from extremes: If there were no DRAM then the SSD would wear very quickly. If the SSD had a DRAM as large as the SSD itself (say 500GB) then there would never be any need to write data to the NAND and so the NAND would last forever. Between these two extremes we have REAL SSDs that have a reasonable amount of DRAM, and their wear is related to the amount of DRAM they have.

      Note that I don’t say “Proportional to” but say “Related to.” This is a big difference. The first few bytes of DRAM dramatically reduce the SSD’s wear, the second few bytes do less, etc. We have diminishing returns.

      One of the challenges faced by SSD architects is to determine how much DRAM is enough. Since workloads vary this becomes even more difficult.

      A DRAM-less SSD should have more than sufficient endurance for low-write applications like Office PCs and archiving. As the write load increases, going up to high write load applications like OLTP and databases, the endurance requirements increase significantly. A DRAM-less SSD would be troublesome in this kind of application.

      With this in mind, major SSD makers offer a range of wear specifications ranging from less than one drive write per day to higher than 25. Naturally they charge more for the higher-endurance SSDs. The onus is on the user to understand what is required for any particular workload. I wish there were better tools for analyzing workloads!

      You mentioned using the DRAM in the host. This is kind of a “Cheat” (and I don’t mean that in a bad sense) that was pioneered by Fusion-io. It’s used by OpenChannel SSDs available from a few vendors. It’s a very inexpensive way to increase the DRAM used by the SSD without including it in the SSD’s base cost. It’s a smart idea, but a lot of users are uncomfortable with it for various reasons that are too long to detail here.

      You might look at another post on The SSD guy: http://TheSSDguy.com/comparing-wear-figures-on-ssds/ This discusses different SSD wear levels and how they are specified.

      Best,

      Jim

      1. Jim, thanks for your response.

        The SRAM in a consumer SSD is typically very small though, right? Is it possible the SRAM would be exhausted during normal OS usage? If so, could the “amplified” writes cause less efficient ECC/LDPC, especially as the drive fills up?

        Thanks again,

        Chris

        1. Chris,

          The DRAM/SRAM size really doesn’t have anything to do with the data integrity of the SSD. It’s more closely related to the endurance. As an SSD ages it doesn’t suffer from increasing errors – it just sees a gradual reduction in the amount of spare NAND on hand to replace blocks that are wearing out.

          It is true that blocks experience a growing number of errors as they wear out, but these errors are all corrected by the SSD’s internal ECC before the data leaves the SSD. The controller will retire a block before its error rate grows to the maximum number of errors that the ECC can correct. Since the SSD has spare blocks, a spare is used to replace the retired block. If the SSD runs out of spares it will typically go into a “Read Only” mode, but the SSD’s SMART attributes help predict when that will occur, so there’s plenty of warning.
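          If it helps to see it spelled out, here’s a rough sketch of that retirement policy with made-up thresholds; real controllers set their own limits and spare-pool sizes.

          ```python
          # Rough sketch of the retirement policy described above, with made-up
          # thresholds: the ECC corrects each block's raw bit errors on every read,
          # a block is retired well before its errors approach the ECC limit, a
          # spare takes its place, and the drive drops to read-only mode once the
          # spare pool is empty.

          ECC_CORRECTABLE_BITS = 40                     # illustrative ECC limit per codeword
          RETIRE_THRESHOLD = ECC_CORRECTABLE_BITS - 10  # retire well before that limit

          class BlockManager:
              def __init__(self, spare_blocks):
                  self.spares = list(spare_blocks)
                  self.read_only = False

              def after_read(self, block, corrected_bits):
                  """Called with the raw bit-error count the ECC just corrected."""
                  if corrected_bits >= RETIRE_THRESHOLD:
                      if self.spares:
                          spare = self.spares.pop()
                          print(f"retiring block {block}, remapping it to spare {spare}")
                      else:
                          self.read_only = True         # out of spares: read-only mode
                          print("no spares left, entering read-only mode")

          # One worn block consumes the only spare; the next worn block trips read-only.
          mgr = BlockManager(spare_blocks=[900])
          for block, errors in [(10, 12), (11, 33), (12, 35)]:
              mgr.after_read(block, errors)
          ```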

          Users tell me that the SMART attributes on their PCs often indicate that they won’t run out of spare blocks for a very long time; I have heard numbers as high as 30+ years for a client/consumer SSD! (Admittedly, that SSD had a little DRAM.) I would guess that a DRAM-less SSD would last significantly less than 30 years, but it would still probably last longer than the PC would normally be used.

          If you want to be really analytical about it you could always put a DRAM-less SSD into your PC and review its SMART attributes every year. I suspect that you’d find that your worries are unfounded.

          I hope that helps.

          Jim

          1. Jim,

            So in a perfect world scenario, if two SSDs were identical with the exception that one drive excluded the DRAM buffer, the DRAM-less drive wouldn’t be more prone to failure over the course of its life.

            Is it possible that DRAM-less SSDs could be more susceptible to prematurely exhausting the flash? As in, is it possible that DRAM-less SSDs exhaust their P/E cycles much faster than expected (before the TBW warranty) because, for whatever reason (maybe the use case, errors, or the OS requesting tons of random writes), the write amplification is magnified many times over?

            I’m not sure if you frequent Reddit, but there is a subreddit named ‘buildapcsales’ (r/buildapcsales). Within this sub users post SSD sales from retailers. There are a few users that post within the forum that are especially skeptical of DRAM-less SSDs. One user is absolute in his stance that DRAM-less SSDs are not suitable for OS usage in computers where data is important (average users and above) because reliability is a concern:

            https://www.reddit.com/r/buildapcsales/comments/9smy94/ssd_adata_xpg_sx850_25_sata_sm2258_wdram_32l_imft/e8pwbqi

            https://www.reddit.com/r/buildapcsales/comments/9ba78s/ssd_inland_professional_480gb_ssd_3d_nand_sata/e51ir82

            https://www.reddit.com/r/buildapc/comments/adf18n/kingston_a400_ssd/edhdslq

            https://www.reddit.com/r/buildapcsales/comments/9s243s/ssd_intel_660p_2tb_m2_30x4_nvme_sm2263_w256mb/e8lje8n

            This particular user even created a guide:

            https://ssd.borecraft.com/documents/SSD_Buying_Guide.pdf

            Do you think this user’s skepticism is unfounded or possibly overstated?

            I appreciate you taking the time out of your day to respond and I hope I’m not a bother,

            Chris

          2. Chris,

            You’ve been doing a lot of homework!

            As far as reliability goes, the only concern you really should have is the lifetime of the SSD under your particular workload. Unfortunately, it’s pretty hard to find out a lot about workloads. There aren’t many good tools to measure how many writes your system produces.

            Until an SSD reaches its maximum write endurance it shouldn’t cause you any trouble. They don’t gradually worsen – they fail all at once, and then the symptom is that they go into a read-only mode. Some SSDs will do that when an internal counter determines that the specified TBW has been met, others continue to work as long as there are still spare blocks.

            No matter what, though, if you monitor your SMART attributes then you will always know where you stand against the manufacturer’s specs.
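            If you want to automate that check, something along these lines would do it (a minimal sketch that assumes a Linux machine with the smartmontools package installed and a SATA SSD at /dev/sda; adjust both for your own system):

            ```python
            # Print the SMART attribute lines that relate to wear and spare blocks.
            # The device path and the keyword list below are assumptions to adjust
            # for your own drive, since attribute names vary from vendor to vendor.

            import subprocess

            def wear_related_lines(device="/dev/sda"):
                """Return the smartctl attribute lines that mention wear or spares."""
                result = subprocess.run(["smartctl", "-A", device],
                                        capture_output=True, text=True, check=False)
                keywords = ("Wear", "Spare", "Reserv", "Percent", "Total_LBAs_Written")
                return [line for line in result.stdout.splitlines()
                        if any(key in line for key in keywords)]

            for line in wear_related_lines():
                print(line)
            ```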

            I certainly wouldn’t worry that some particular workload would cause any SSD to have issues BEFORE its DWPD was used up. You can safely forget the concerns you express in your second paragraph.

            That guide in your last link is impressive, but I would feel a lot more comfortable with it if it weren’t based on so many subjective measures.

            Thanks for sharing all the links.

            Jim
